
There’s a technique called the Polaroid emulsion lift that allows you to transfer a Polaroid print to different materials, such as glass, rock, or watercolor paper. Essentially, you soak a completely dry photo (one that has set for at least 24 hours) in hot water and carefully peel the emulsion layer from the photo backing. The thin layer of film can then be carefully removed, spread over a new surface and allowed to dry.

The video above shows how the process works, and there’s a link to more specific details below. This is supposed to work best with type 669 film due to its thick emulsion layer. That said, people have had great results with other types of film, and a thinner emulsion layer will produce cool crinkly effects and tears, which can also be desirable.

Polaroid emulsion lifts
Emulsion lift example on YouTube

8 thoughts on “Polaroid emulsion lift”

  1. Hmm. I don’t think I’m convinced yet. It seems like the Musion system is really just projecting a 2D image onto an invisible plane that sits on a stage. If I’m understanding things correctly, it works well when the audience is positioned roughly perpendicular to the plane and their point of view isn’t moving.

    I’m likely wrong, but the way the cameras are sliding all the way around Yellin makes me think this wouldn’t be possible with the Eyeliner tech.

    The one thing I’m trying to figure out is why the keying looks so crummy. Doesn’t the local weather report do a better job typically? There must be some additional complexity that I’m missing.

  2. The correspondent said that the cameras around her ‘knew’ what the cameras in the studio were doing, and while the shot was impressively dynamic, showing her from front and back, they only seemed to use about two cameras for the bit. I didn’t notice the circle on the floor, but it all adds up to Mr. B talking into empty air.

    Regards.

  3. I’m sure what it is is a motion control camera. Digital effects work usually employs these. Someone can shoot a scene with a camera on a robotic arm while a computer reads where that arm is. Then the computer can replicate that exact move again later. You can then put two separate shots together (e.g. an actor walking around talking to themselves, or a lion chasing an actor, which you wouldn’t want together for safety reasons). But CNN has instead got a camera at one end recording, and another at the other end mimicking the move in real time. This is probably why the camera move is so slow. The robotic motors are usually very noisy, though, so maybe they slowed down the move to keep the noise down.

    The blue halo is amusing as well. Although then having her directly refer to Princess Leia was a bit redundant.

  4. This is an implementation of virtual set technology used in reverse, and hardly a hologram. Normally a person is shot on blue/green screen and a background is added and tracked according to camera movement and camera cutting. Here the inverse alpha or “key” is used. Another example of this kind of overlay is virtual lines on sports programs (football, horse racing, etc.). One very good point was the lack of different shots. Maybe that’s down to common TV practices for choosing shots, but it could also mean the number of cameras was a bit of a lie.

    If there were 35 cameras, then the computers just chose the camera that most closely matched the shot on screen. Most probably a choice made by a human, with the setting saved for later use. For example, the two-camera style they used could mean remote cameras 1 and 17 are used to match studio cameras 1 and 2 respectively. If a third, wide shot is included, then camera 23 matched that shot and is added to the setup. The vision mixer in the main studio tells the computer in the remote studio which camera is in use, and the computer then switches to the corresponding camera and feeds the video through.
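    The tally-follow switching described here is essentially a pre-saved lookup: when the main studio cuts to a local camera, the remote studio feeds the pre-matched remote camera. A minimal sketch in Python, with hypothetical names and using the camera numbers from the example above:

    ```python
    # Hypothetical sketch of tally-follow camera switching: the main
    # studio's vision mixer reports which local camera is live, and the
    # remote studio cuts to the pre-matched remote camera.

    # Mapping chosen by an operator and saved for later use
    # (local studio camera -> remote studio camera).
    CAMERA_MAP = {1: 1, 2: 17, 3: 23}

    def remote_camera_for(local_camera: int) -> int:
        """Return the remote camera whose angle matches the live local shot."""
        return CAMERA_MAP[local_camera]

    # The vision mixer cuts to local camera 2; the remote studio
    # switches to camera 17 and feeds its video through.
    print(remote_camera_for(2))
    ```

    A real system would carry this cue over the existing tally interface rather than a lookup table in software, but the mapping logic is the same.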

    None of the technologies in play are new in live TV. The main ones are:

    Chroma key; most people know about blue/green screen now. The blue halo is caused by noise in the picture when it’s compressed for transmission to the main studio. While you can’t see the noise yourself, it’s enough to cause trouble when “keying” something. There are ways to overcome this.

    Camera tally; this is the little red light on top of the camera telling the talent on set which camera is currently in use. This system can easily be interfaced so a remote location can switch cameras on the same cue. Many systems today have “read-ahead tally”, allowing not only the camera in use but also the next shot to be indicated.

    Robotic camera heads and pedestals; used in many studios today (particularly news). With a full robotic pedestal there is no need for a cameraman at all. Again, the local and remote cameras could be ganged together and moved in unison. On set, robots are actually rather quiet, specifically for live TV. It’s dangerous to stand close to them because you can’t hear them coming.

    Camera tracking head; these tripod heads tell a graphics computer the direction a manually operated camera is pointing. The computer then adjusts the graphics to match the indicated angles. These heads could also be used to move a second, robotic camera. Panasonic demonstrated a system such as this at a trade show here in Australia last year. The same information can be read from a robotic camera, as it’s required for normal operation.
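    Ganging a robotic head to a tracking head amounts to reading the manual head’s pan/tilt/zoom and replaying the same values on the remote head. A minimal sketch, with hypothetical names and a straight pass-through (a real rig would also correct for mounting offsets and transmission latency):

    ```python
    # Hypothetical sketch of ganging a remote robotic head to a
    # tracking head: copy the reported pan/tilt/zoom so both cameras
    # move in unison.
    from dataclasses import dataclass

    @dataclass
    class HeadState:
        pan_deg: float   # horizontal angle of the camera head
        tilt_deg: float  # vertical angle of the camera head
        zoom_mm: float   # lens focal length

    def mirror(state: HeadState) -> HeadState:
        """Command the remote robotic head to match the tracked head."""
        # Pass-through; offset/latency correction omitted for brevity.
        return HeadState(state.pan_deg, state.tilt_deg, state.zoom_mm)

    live = HeadState(pan_deg=12.5, tilt_deg=-3.0, zoom_mm=50.0)
    print(mirror(live))
    ```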

    With the right cameras and interfaces, this idea could actually be done without the use of a graphics computer. If you measured up two studios and placed the cameras in the same locations, set to the same heights and using the same lens settings, this effect could easily be done on a large vision mixing desk. Simple camera moves could also be included.

  5. Hello, I think your website might be having browser compatibility issues. When I look at your blog in Chrome, it looks fine, but when opening in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads up! Other than that, awesome blog!

Comments are closed.
