What if Different Artists Made This Sculpture? Using Drones to Find Out


“No way!” is a more or less accurate, though less profane, version of what I said when I made my first styled photogram. Perhaps the very first styled photogram ever. I hadn’t dared hope it would work. It seemed extraordinarily unlikely that it would. And then it did. And it was awesome.

I had fed just 17 cell-phone photographs of the totem pole at my childhood mini golf course through one of the style filters on the popular mobile app Prisma. Then I fed those 17 new images into a second program made by Autodesk, called ReMake.

The result was a photogram – a 3D model generated from photographic images. It was lo-res, imperfect and unusable by my current standards. But it was definitely the totem pole. Except that it looked like a cartoon, thanks to Prisma.

Here’s a GIF of 13 of the 17 photos, post-Prisma. (Four detail cutaways were removed to preserve the looping nature.)

And here’s a turntable animation of the resulting 3D mesh:

[Video: turntable animation of the resulting 3D mesh]

And here’s a playlist of styled photograms I’ve made since I really figured out more of what I was doing (last month):

[Video: playlist of styled photograms]

And that’s it. That’s really all you do. Take a bunch of photos encircling an object like you see in the GIF, run them through Prisma or your favorite style transfer application (mine is Painnt), and then again through ReMake or your favorite photogrammetry software. Done. Go forth! Be free! Photogram all the things! Style all the photograms!

Of course, if you’d like to learn how to do it well, and also how to sound super brainy when explaining it to others, then read on…

Let’s deal with the brainy part first, since the veil has mostly been lifted on that. This stuff is simple, from a practical standpoint. Take some photos, process them in one software package, reprocess them in another, and you’re good to go. But make no mistake, the technologies you’re employing really live on the buzzwordy bleeding-edge of what’s happening right now. Let’s break it down.

Photogrammetry

Okay, this one’s actually old. Photogrammetry, the process of measuring the distances between points using a succession of photographs, has been around for more than a century. Originally used primarily for surveying and mapping, it later found a home among game developers and visual effects artists as computing power increased, offering a quick, cheap way to generate high-quality 3D assets for games and films.

Artificial Intelligence/Machine Learning

Well, you know what? This one’s kind of old, too. All the stuff that’s happening with AI today really has its roots in work that was done in the 1950s and ’60s. As with photogrammetry, advances in processing power are making new applications in AI and machine learning possible.

For instance, I’m not typing this article. I’m speaking it directly into my tablet, via Google Docs. It’s not perfect, but it mostly works, and it wouldn’t be possible without machine learning. Ditto if you use Alexa, Siri, or Google Now. The list goes on. This stuff is everywhere now.

Some of the most mind-bending advances have come from a specific discipline within AI research: neural networks. Even more specifically, style transfer neural networks.

Style Transfer Neural Networks

Aaaand I’m tapping out on the brainy stuff right about here. The problem with talking about smart stuff, when you’re less smart than the stuff you’re talking about, is that you run the risk of that becoming obvious. But if I were to roll the dice on that gambit, I’d probably mumble something about convolutional neural networks, and machine and computer vision. I’d confidently say “TensorFlow,” absent context, so you would know that I know that’s something that people use. For AI stuff. With the neural networks. From the Google. I’d definitely talk about how the style transfer neural network code has been freely released, and has its own GitHub repository.

But really, all you need to know is that style transfer neural networks allow you to transfer a near-limitless assortment of artistic and aesthetic styles to your photos and videos. And photograms!
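If you want to peek under the hood without reading any papers, here’s a minimal sketch of style transfer in Python. To be clear, this is my stand-in for illustration, not what Prisma or Painnt actually run: it assumes the pre-trained “arbitrary image stylization” model published on TensorFlow Hub, and the filenames are hypothetical.

```python
# A rough sketch of neural style transfer using a pre-trained TF Hub model.
# This is a substitute for the GUI apps mentioned above, not their internals.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, max_dim=1024):
    """Load an image as a float32 tensor in [0, 1] with a batch axis."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_dim, max_dim))  # keep aspect ratio, cap the size
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return tf.constant(arr[np.newaxis, ...])

# One pre-trained network transfers *any* style photo onto *any* content photo.
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

content = load_image("totem_pole_017.jpg")  # hypothetical filenames
style = load_image("starry_night.jpg", max_dim=256)  # model trained on 256px styles

stylized = stylize(content, style)[0]       # returns a batch of styled images
out = np.uint8(stylized[0].numpy() * 255.0)
Image.fromarray(out).save("totem_pole_017_styled.jpg")
```

For photograms, you’d simply loop that last step over your whole capture set with the same style image, so every photo gets a consistent treatment.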

 

Drones

A surprise entry! These lovable flying robots do everything from taking our photographs and delivering our consumer goods to committing wholesale murder in our names. Definitely the bloodiest of our bleeding-edge tech.

I haven’t mentioned drones until now, and technically speaking, you’re not going to need one. But if, like me, you want to create your photograms outdoors, you’re probably going to want a drone. The photography kind.

Putting It All Together

Let’s take what we’ve learned so far, reduce it down to a gag-inducing slurry of buzzwords, and somehow construct a 100% truthful statement. VCs and game publishers – for your consideration:

“Ever since I got my drone, my photogrammetry work has just soared to a whole other level. My photograms used to be poor quality because they were made from only a couple dozen or so photographs, but now it’s easy and fun to capture the 100–250 photos I need. The results have been superlative. And the onboard machine vision systems in the drone mean I don’t have to worry about crashing into things. Speaking of machine and computer vision, the real way AI has exploded my asset creation pipeline is through style transfer neural networks. I’ve been using them for photogrammetric preprocessing, and the results have been nothing short of astounding. It’s opened up a myriad of possibilities for the VR project I’m working on… Also, TensorFlow.”

Specific Tips: Photogrammetry

1 – Get a drone, a tripod, or massive quads. Blurry or inconsistent photographs will result in bad photograms. Drones and tripods produce good, stable photos. Use them. If you go down the handheld route, take note: you will be doing a lot of squatting to capture lower angle shots. Like, the amount of squats that prisoners do so they don’t get harassed out in the yard. That many squats. Your legs will scream in pain the next day. And the next. I assume at some point, your thighs will become like two immense pistons, laser-cut from steel. I wouldn’t know. I just bought a drone instead.

2 – Watch your shadows. Watch your light. The textures on your meshes will be “baked in”, and so will any shadows. Keep your light uniform and flat. If you’re shooting outside, the best time is on an overcast day with the sun high overhead. If, like me, you live in the Pacific Northwest, congratulations! Your drab winters just became a playground. But stay out of the rain. Rain doesn’t play well with photogrammetry.

3 – That said, if there’s not going to be a lot of dynamic lighting used in your end product, baked-in shadows can be very useful and dramatic. If you’re shooting outside, however, you should note that time is of the essence. Take too long capturing your scene, and you will notice inconsistent, “streaking” shadows caused by the movement of the sun across the sky. Even worse, it could confuse the photogrammetric algorithm, resulting in holes or deformities in your mesh. Worse still, if the shadow irregularities are too severe, the photogrammetric processing may just fail entirely. No photogram for you! Again, a drone really comes in handy in mitigating these situations.

When choosing subjects, play to photogrammetry’s strengths, and avoid its weaknesses. Stone, wood, simple fabrics, distressed or worn leather, cast and weathered metal. Photogrammetry loves these materials. Basically, anything unmoving, dull-surfaced, and topologically and texturally varied should work.

Avoid:
Objects with lots of flat, uniform or highly regular surfaces, right angles and straight lines
Highly specular (shiny) surfaces like glass, polished metals, glossy plastics, or liquids
Inconsistent lighting or shadows
Lens flares or other visual artifacts in your photos
Overly complex objects
Highly occluded objects – if a lot of stuff is in the way of the object you’re trying to capture, it’s going to be difficult to get good results

Note:
It’s hard, but not impossible, to get high-quality photograms from living subjects without resorting to expensive, impractical multi-camera setups. Trees and plants are geometrically complex, and they sway in the wind. People breathe and blink, and shift imperceptibly. Animals and children don’t take direction very well. It’s basically the ultimate mannequin challenge. If you think your subject and shooting conditions are up to the task, then go for it. But don’t be surprised if the results disappoint.

4 – Think spatially. Make sure to get all your angles. It’s a real bummer to find out after the fact that you forgot to take enough photos of the top of something (it’s always the tops of things), or failed to get any of that one piece that jutted out in that weird way (it’s always the tops of things except when it’s that one weird overhanging piece that you forgot to factor in).

5 – If your camera supports HDR mode, use it. Don’t use the zoom function. Autofocus is OK. Autofocus is your friend.

6 – Get a circular polarizing filter for your lens. It gives you a little more leeway in dealing with specular objects.

7 – Make sure your photographs have plenty of overlap. Overkill is OK. I get good results with photos taken at intervals of 8 to 14 degrees, which means you’ll have about 26 to 45 photos as you make a single 360-degree revolution around an object.

8 – You probably want to make more than one revolution around an object. A good rule of thumb is one revolution rotating around the center of an object, one revolution at or near the base, one revolution at or near the top, a few shots from above (and below, when possible), a few more for coverage of those hard-to-reach places, and even a few more for details that you really want to call out on your mesh. (There’s a quick capture-plan sketch after the image below.)

In these images, you can see the points where all the photographs that comprise these photograms were taken. Those three platonically ideal rotations around your object generally give way to something a little more scattered-looking. That’s OK.

[Image: camera positions from a processed photogram]
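If you like, tips 7 and 8 reduce to quick back-of-the-napkin arithmetic for planning a capture. Here’s a little Python sketch; the spacing and extra-shot counts are just my own assumptions, so tune them to your subject.

```python
import math

def shots_per_revolution(interval_deg):
    """How many photos one 360-degree orbit yields at a given angular spacing."""
    return math.ceil(360 / interval_deg)

# Tip 7: 8-14 degree spacing works out to roughly 26-45 shots per orbit.
for interval in (8, 14):
    print(f"{interval} deg spacing: {shots_per_revolution(interval)} shots/orbit")

# Tip 8's rule of thumb: three full orbits (mid-height, base, top),
# plus a handful of extras from above/below and for detail coverage.
interval = 10   # a middle-of-the-road spacing (assumed)
orbits = 3
extras = 15     # overhead shots + hard-to-reach spots (assumed)
total = orbits * shots_per_revolution(interval) + extras
print(f"Planned capture: ~{total} photos")  # ~123, near the sweet spot in tip 9
```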

9 – Take the right number of photographs. ReMake, the photogrammetry software I’ve been using lately, can accept up to 250 photographs. Complex scenes or objects may require that many or more, but be aware that a large pool of source photos is a double-edged sword. A surfeit of photos could result in a fabulous high-fidelity 3D mesh, or a terrible 3D mess. More photos means more detail, but it also creates more chances for error, and makes it harder to hunt down that one problematic photograph in the batch. Personally, I’ve found the photogrammetric sweet spot for most objects to be in the 100–130 photo range. More complex objects will naturally require more.

10 – Pick the photogrammetry software that best suits your needs. I like ReMake because it’s dead simple, and processing happens in the cloud, obviating the need for dedicated hardware. Whatever Autodesk’s server setup is, it will always be able to process more photos faster than most lone enthusiasts can. Plus, if it’s their hardware that’s chugging through all your photos, that frees up your own hardware for other development tasks.

Don’t be afraid to scout around, though. There is certainly no shortage of photogrammetry packages to choose from.

Specific Tips: Style Transfer

The tips here are pretty thin on the ground. There just hasn’t been a lot of time. Like a lot of you, I got my first introduction to style transfer when Prisma was released last summer. I only thought to conduct my first styled photogram experiment in early September, then, following that experiment’s success, immediately pre-ordered a drone and waited extremely impatiently until mid-December for it to arrive. This was annoying, but also semi-fortuitous, because there was no easy way to process batches of files back in the dark days of September 2016, and I knew I would need to process hundreds, if not thousands, at a time. I was considering cockamamie ideas like distributing the work (relatively) cheaply and quickly via Mechanical Turk. I’m glad I never had to follow through with that. Anyhow, my first drone-captured styled photograms were created on December 20. The quality was orders of magnitude better than any photogram I had produced before, styled or otherwise.

That said, here’s what I’ve learned so far:

1 – Not all style filters will work well with photogrammetry software. Styles that use extremely dark tones may not give your photogrammetry software enough information to work with, resulting in failed projects or photograms with unwanted deformations.

Similarly, there is an entire classification of styles that, by definition, will not work well with photograms. There’s probably a name for the specific type of neural network involved, but I presently lack the knowledge to tell you what it is. As you browse styles, you will find a few that make your photos look like they’re built from objects. One that stands out in my mind was a style that made photos look like they were formed from rows of neatly arranged tomatoes. Another made them look like a mosaic formed from regularly shaped pool tiles.

The tomato filter failed because the neural network is looking for some threshold of “tomato-ness.” As the perspective shifts from photo to photo, so does the value for “tomato-ness.” So every photo may be filled with rows of perfect tomatoes, but they’re not always the same ones from photo to photo. This confuses the software and results in failed or bad photograms. The pool tile filter failed because the tiles always face the camera directly, as perfect little rectangles, unchanged from photo to photo by shifting perspective. So basically, look for filters that defy the rules of perspective. If you find any, avoid them.

That said, all is not lost. Photogrammetry software hiccups happen, and occasionally photos just need to be run a second time. Or maybe there are one or two photos that just need to be removed in order to get a good result. On one or two occasions, I managed to get good results from styles that I had just about given up on. For instance, that pool tile style filter failed, but not entirely. There may still be hope. Maybe.

Bonus: bad photograms usually end up being visually captivating in some way. You may not want to use them, but you might get a chuckle or two in the process. Also, the occasional nightmare.

2 – There is such a thing as too much and too little style! Prisma, where I conducted my initial experiment, used to only output images at a maximum resolution of 1080 pixels. The resulting meshes were low-res and rough, and the textures even rougher. Get too close to your mesh, and you’ll notice lots of aliasing and pixelization.

When I moved to the Painnt software for my style processing, I initially processed my photos at the size they were originally taken: 12 megapixels. The resulting photograms were such a disappointment! The geometry of the mesh was perfect, but the texture map on the model was muddy and indistinct. At least, that’s what I thought at first. It turns out the textures were just too fine for the human eye to pick up at a distance. You could clearly see them if you got very close to the model. In the end, styled photos rendered at about six megapixels seem to live in a sweet spot for accuracy, acuity, and fidelity. This has become my go-to resolution for outputting styled photographs in most cases.
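If your style transfer tool accepts arbitrary input sizes, one way to hit that sweet spot is to pre-shrink your captures to roughly six megapixels before styling. Here’s a rough sketch using the Pillow library; the folder names are hypothetical, and your tool of choice may handle this resizing for you.

```python
# Downscale a folder of captures to ~6 MP before feeding them to a styler.
from pathlib import Path
from PIL import Image

TARGET_PIXELS = 6_000_000  # the ~6 megapixel sweet spot described above

def resize_to_megapixels(src, dst, target_px=TARGET_PIXELS):
    """Scale an image so its total pixel count is roughly target_px."""
    img = Image.open(src)
    scale = (target_px / (img.width * img.height)) ** 0.5
    if scale < 1.0:  # only downscale; never blow small images up
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    img.save(dst, quality=95)

# Hypothetical folder layout: raw drone captures in, resized copies out.
out_dir = Path("capture_6mp")
out_dir.mkdir(exist_ok=True)
for photo in sorted(Path("capture_raw").glob("*.jpg")):
    resize_to_megapixels(photo, out_dir / photo.name)
```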

But this is still something of a compromise. The difference, I believe, is the distinction between style level of detail and image resolution. Currently, in the two pieces of software I have evaluated (Prisma and Painnt), these two concepts are effectively the same. I lack the knowledge to say conclusively whether they can even be separated. If they can, it really opens the door for better and more dramatically styled photograms.

To demonstrate what I mean, here are two images. Both pull from the same source photograph, and utilize the same style. However, the first was rendered at a low resolution, while the second was rendered at its original 12 megapixel size, and then scaled down to match the dimensions of the first. See how different they are!

What I really need is a way to output images that look like the first, but have the (original) resolution of the second. If you plan on making styled photograms of your own, you will eventually want this too. The original style transfer code library is freely available, so if any noble coders out there are looking for a cool project, this would be a great place to start. (Hint hint)

3 – Experiment! If you read this article, and make your own styled photogram, congratulations! You are quite possibly now one of the world’s foremost experts on styled photograms. Well done, brainiac. Now go and create a photogram the likes of which the world has never seen before.

It shouldn’t be all that hard, actually. The Painnt software I use has 180+ style filters now, and more are added every week. I personally have probably used maybe 20% of them so far, tops. Some of my favorite styles are ones that I processed on a lark, thinking they’d either be underwhelming or fail completely. The thing is, you won’t find out until you try. So try!

And speaking of experimentation:

 

4 – Yes, you can chain styles. Processing a set of photos in one style, and then reprocessing those new images in a second, or even third style, seems to work fairly well. This is exciting! It does open the door to decreased mesh fidelity, however, so keep that in mind. Assuming lots of people will be making styled photograms soon, chained styles are a terrific way to make your meshes visually and aesthetically distinct. Use it!
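For the programmatically inclined, chaining is literally just composing the style function with itself, using a different style image each pass. Here’s a sketch that reuses the hypothetical load_image() helper and the stylize model from the earlier TF Hub snippet; filenames are again made up.

```python
# Chaining styles is function composition: style the already-styled output
# with a second style image. Assumes load_image() and stylize from the
# earlier sketch are already defined in this session.
import numpy as np
from PIL import Image

def apply_style(content_tensor, style_path):
    """Run one style pass over a float [0, 1] image tensor."""
    style = load_image(style_path, max_dim=256)
    return stylize(content_tensor, style)[0]

img = load_image("totem_pole_017.jpg")         # hypothetical filename
once = apply_style(img, "bold_style.jpg")      # dramatic first pass
twice = apply_style(once, "subtle_style.jpg")  # gentler second pass

Image.fromarray(np.uint8(twice[0].numpy() * 255)).save("chained.jpg")
```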

Here’s a three-video, 30-second playlist illustrating chained styles. The first video is the original, unstyled mesh. The second video is a dramatically styled mesh, and the last video introduces a little more subtlety by running a second style overtop the first:

[Video: chained-styles playlist]

5 – Because my initial expectations when I created my first styled photogram were very, very low in terms of chance of success, I also tried to directly style the texture map on one of my photograms, sure that it wouldn’t work. The results were… interesting. Not the least bit usable, but interesting.

 

Here’s the thing – being able to directly style a texture map would be really, really good. It would probably improve my asset pipeline productivity and efficiency 100-fold or more (and bear in mind, it’s already been supercharged by this process!). It would no longer be necessary to generate a new mesh for each styled photogram; you’d just make lots and lots of styled texture maps and apply them to the same model. So it’s pretty important.

 

Moreover, even though my experiment with styled texture maps didn’t pan out, I’m pretty sure this is an area where a skilled coder could build a tool that makes it possible. I believe this to be true because it’s basically being done now. Check out this video of real-time style transfer being applied to 3D objects:

[Video: real-time style transfer applied to 3D objects]

 

Apparently, this is an upcoming Adobe product called StyLit. It looks like it will be amazing, and there’s even a demo you can try.

 

6 – Less is more. I’ve found the key to producing the most dramatically styled photograms is reducing your dataset to the fewest photos necessary to produce a high-quality mesh. In many cases, fine details that you would have preferred to keep on your model get averaged out by the photogrammetry software as it analyzes all your photos. Reducing the number of photos in play is a great way to recapture some of this lost detail, at least until somebody untangles my level-of-detail-versus-resolution issue. There is no fixed “right” number for this, so you’ll have to experiment.

7 – There aren’t many style transfer software packages out there. I’ve really only worked with Prisma and Painnt. Prisma has added a lot of very welcome features since I last used it, but Painnt is available for Windows and Mac, whereas Prisma is available only for Android and iOS. This made Painnt the clear choice for me for everyday use. That being said, Prisma has a couple of my favorite styles. I’ll probably end up using it as well, at least in certain cases.

Ideally, I would use style transfer software made specifically with styled photograms in mind. Another idea for the noble coder.

Specific Tips: Drones

I’ve only ever used one drone, and only for the last month. I’m not gonna have a lot of drone tips for you. But I will say this: DJI, the manufacturer of the Mavic Pro drone that I own, and clear leader in the industry, cost me literally 20 hours of my vacation trapped in customer service hell. It was hands-down the worst customer support experience I’ve ever encountered, and by no small margin. I will also say this, just as emphatically: The Mavic Pro is an absolutely outstanding drone. It’s easy and fun to fly, takes great photos, and folds down into a form factor so small that it fits in a camera bag. You want this drone!

So, in short:
DJI!!! Get your house in order! There’s no excuse for what I went through!

DJI Mavic Pro design and engineering team!!! I love you. Do you love me?

More concrete tips:

1 – For operators within the United States, you will need to register your drone with the FAA. The cost is five dollars, and registration needs to be renewed every three years. You will also need to place the FAA registration code you receive somewhere on your drone.

2 – If your drone weighs more than half a pound and you intend to do commercial work with it in the U.S., for photograms or anything else, you will need to obtain a commercial drone operator’s license from the FAA. It requires a test, which you’ll need to study for and schedule with the FAA when you’re ready, plus a background check. There will be a fee for this as well, and you’ll have to renew every two years.

3 – Read the FAA guidelines for the safe operation of remotely piloted aircraft, and exercise good judgment while flying. Stay away from populated areas, and make sure you’re in a class of air space that permits you to fly. Always stay below 400 feet, and realize that you may need to receive authorization from air traffic control in your area in order to fly in some places.

Flying while failing to meet any of these requirements puts you at odds with the law. In some cases, large fines can be levied against you. Counterpoint: this would also technically make you a badass outlaw with a rogue flying robot. I literally can’t think of anything that sounds cooler than that. Still, follow the law. It’s good for you.

Hey, You Never Gave Us An Overview of the Software!

None of this is hard, and the software is the easiest part of all. You generally click a button, add your photos, and you’re good to go. Plus, you’re practically an expert on styled photograms by now. When you finish reading this, it’s quite possible you’ll have read everything there is to know on the subject. A piece of new software isn’t going to trip you up. I believe in you. You got this.

Conclusion

I hereby conclude that styled photograms are awesome! You should make them!
Then, you should tell me about what you made and what you learned. Show me, too. We might be the only people doing this, so we should share stories.

I’ve created a Facebook group specifically geared for styled photogrammetrists such as you and me. Our first order of business can be to think of a better name to call ourselves. A subreddit is coming, as soon as my new account under the name Solipsystems gets enough karma and account age to create one.

Seek me out. I want to hear from you about your adventures in styled photogrammetry. You can find me on Facebook.

Craig Schwartz

Craig is currently a badass outlaw with a rogue flying robot. He is working to change the "outlaw" and "rogue" portions of that sentence.

