How I Created My 2018 Christmas Card

The image above is my 2018 Christmas card. It was created using the Persistence of Vision Ray Tracer, also known as POV-Ray. I've created a Christmas card every year since 1995 using the software.

This year’s image is a little bit simpler than most. I’ve sort of run out of ideas. You can see a gallery of all my Christmas card images on my Facebook page at this link.

Here is a YouTube video explaining a little bit about the history of the card and other images which inspired it.

Years ago there was a program called "SUDS" that was supposed to create spheres that touched one another like bubbles in soapsuds. It was used to create some other classic POV-Ray images. I couldn't find a copy of the program, but POV-Team leader Chris Cason was able to locate an old one. Unfortunately it was compiled for 16-bit Windows and would not run on a modern 64-bit Windows system. It did include the source code, and I tried to translate the code into the POV scene description language, but I never did get decent results. I think it was designed to create spheres with widely varying diameters, large and small, so that the small ones would fill in the gaps. I wanted something where the variability of the diameters was not so significant.

In the end, I just created my own algorithm for randomly placing the spheres. It picks a random X/Y coordinate and then drops the ball down the Z-axis until it hits the bottom or touches another sphere. My intent was to continue the process so that once a sphere made contact it would roll downward until it touched at least one or two other spheres, but that was going to be too complicated and I never did implement it.
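Here is a minimal sketch of that drop-until-contact idea in the POV-Ray scene description language. The counts, radii, and variable names are placeholders of my own, not the actual values from the card.

// A rough sketch of the random-drop placement, assuming equal radii and z pointing up.
#declare Seed    = seed(1234);
#declare Count   = 300;          // keep adding spheres until the gaps are mostly filled
#declare R       = 1;            // ornament radius
#declare Centers = array[Count]
#declare N = 0;
#while (N < Count)
   #declare PX = -12 + 24*rand(Seed);    // random X/Y somewhere over the visible area
   #declare PY = -12 + 24*rand(Seed);
   #declare PZ = 30;                     // start the drop well above everything
   #declare Resting = false;
   #while (!Resting)
      #declare PZ = PZ - 0.02;           // lower the sphere one small step
      #if (PZ <= R)                      // reached the floor at z = 0
         #declare Resting = true;
      #end
      #declare I = 0;
      #while ((I < N) & (!Resting))      // stop as soon as we touch a sphere already placed
         #if (vlength(<PX,PY,PZ> - Centers[I]) <= 2*R)
            #declare Resting = true;
         #end
         #declare I = I + 1;
      #end
   #end
   #declare Centers[N] = <PX,PY,PZ>;
   sphere { Centers[N], R pigment { rgb <rand(Seed), rand(Seed), rand(Seed)> } }
   #declare N = N + 1;
#end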

When you randomly place objects, you can still end up with large gaps, so I had to use far more spheres than would normally be necessary to fill the image. I just kept adding more and more spheres until I had filled almost all the gaps. As you can see in the video, I cropped the image so that many of the spheres are outside the actual visible area and many others are hidden behind the top layers.

I then manually placed the 14 spheres spelling out the words “Merry Christmas”. Each of the 14 spheres has an image map of the letter embossed on to it.
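In POV-Ray terms, embossing a letter onto an ornament can be done with a spherically mapped bump map. Below is only a hedged sketch of the idea; the file name, colors, and bump depth are assumptions rather than the values from the actual card.

// One lettered ornament: a bump map wrapped around the sphere raises the letter off the surface.
sphere {
   0, 1
   pigment { rgb <0.8, 0.05, 0.05> }
   normal {
      bump_map {
         png "letter_M.png"   // assumed file: a white letter on a black background
         map_type 1           // spherical mapping
         once
         bump_size 0.5
      }
   }
   finish { phong 0.8 reflection 0.15 }
   translate <2, 5, 6>        // each of the 14 ornaments was placed by hand
}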

Although it looks as if the "Merry Christmas" ornaments are at approximately the same level, if you watch the YouTube video, which rotates the scene through different angles, you can see that those lettered ornaments are actually at quite different heights. Only when you rotate the scene into the right perspective do the words become visible.

At some point I will go back and rewrite the algorithm so that the spheres drop down into place properly and fill in the gaps more realistically, but for the purposes of this image it was good enough when viewed from a straight-on perspective.

Here is what the interior of the card looks like.


My 2017 Christmas Card: Santa Adopts New Technology

Every year since 1995 I have created my own custom-designed Christmas cards. All but one of them have used computer graphics rendered with the Persistence of Vision Ray Tracer (POV-Ray). Last year I did a photograph of some 3D printed figures that re-created the scene from my 1995 card. In recent years, because I've run out of ideas, I've sort of recycled previous years' cards with updates and tweaks. The last time I created an original card was 2014.

For 2017 I finally came up with an idea for a new card. Inspired by my fascination with 3D printing, I began to wonder what would happen if Santa Claus adopted this new technology for creating toys. Here is the image that resulted from that concept. Note you can click on any of the images in this blog to see a larger version.

I thought I would describe how I created the image and point out some of the little references that inspired the card.

First of all, this 3D printer is loosely inspired by a printer called the Creality CR-10. It is a very popular relatively new printer with a large build area and can be purchased for under $400 for the standard model and under $500 for the larger version. Here are a couple of views of this printer followed by a close-up of my model.


The color trim stripes on these printers are usually either a light blue or a yellow/beige color, but I decided to give them a more Christmas feel by making them either red or green. When creating such images, you have to decide how much detail you want to put into the model. In the final image, even the largest printer in the foreground is only going to be a couple of inches square on the final Christmas card, so you don't really need to put in a lot of detail. For example, from the angles at which you see the printer, you don't see the extruder motor that is on the left side of the crossbeam because it is mostly hidden by the upright. So I didn't bother to model the extruder motor.

The image below is a close-up view of the backside, which you never see. None of the threaded rods that elevate the crossbeam and extruder are there. As mentioned before, the extruder motor is missing. The piece of plastic filament that runs from the spool to where the extruder motor should be just stops in midair. The Bowden tube that feeds the filament into the nozzle similarly hangs in midair above where the motor should be. In fact there is no extruder nozzle, because from the front view it would be hidden by the cooling fan. So I didn't even bother to model any of those details. If I were ever going to reuse this model for a different scene that viewed the printers from different angles, then I would have bothered to create the details.

The filament running from the spool to where the extruder motor should be, the Bowden tube, and the cables running to the nozzle and fan were all modeled using Bezier splines. I did that so that I could move the nozzle left or right in the X direction, or move the crossbeam up and down in the Z direction, on different printers, and the filament pieces and cables would continue to line up properly. That is because their endpoints were attached to the objects. Had I wanted to, I could have created an animation moving the nozzle up and down and left and right, and all of the filament pieces and cables would have flexed properly.
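Here is a rough idea of how endpoint-attached spline cables can work. This is not the card's actual code: POV-Ray has no built-in Bezier sphere sweep, so this sketch evaluates a cubic Bezier by hand with a macro, and all of the names and coordinates are assumptions.

// Cubic Bezier: P(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
#macro Bezier_Point(P0, P1, P2, P3, T)
   (pow(1-T,3)*P0 + 3*pow(1-T,2)*T*P1 + 3*(1-T)*pow(T,2)*P2 + pow(T,3)*P3)
#end

// Approximate a flexible tube as a chain of small spheres along the curve.
#macro Bezier_Cable(P0, P1, P2, P3, Rad)
   union {
      #local T = 0;
      #while (T <= 1)
         sphere { Bezier_Point(P0, P1, P2, P3, T), Rad }
         #local T = T + 0.02;
      #end
   }
#end

// The endpoints track the moving parts, so changing CrossbeamZ or NozzleX re-flexes the tube.
#declare CrossbeamZ  = 12;
#declare NozzleX     = 3;
#declare ExtruderPos = <-6, 0, CrossbeamZ + 2>;   // where the (unmodeled) extruder motor would sit
#declare NozzleTop   = <NozzleX, 0, CrossbeamZ + 1>;

object {
   Bezier_Cable(ExtruderPos, ExtruderPos + <2,0,3>, NozzleTop + <-2,0,3>, NozzleTop, 0.15)
   pigment { rgb <0.92, 0.92, 0.95> }
}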

Sometimes I gave into my urge to go ahead and model some details that may or may not show up in the final image.

In this close-up you can see that there is an image map in the display window with text telling how much print time is remaining on the current print. Had I been a real fanatic I would have created different image maps for each of the printers because they would naturally have different amounts of time remaining. But I was sure we would not be able to read the text anyway so they all read the same amount. One detail where I put a lot of effort was modeling the little document clips that hold the removable print bed in place. It turns out that in the full image these little clips really stand out and people who are familiar with this technique for securing 3D printer beds will instantly recognize this detail.

Another bit of poetic license that I took in the design of the printer was that the power supply, display, and spool holder are not black like they are in the actual printer. I was concerned that with a dark printer and a lot of clutter in the image, these parts would not show up. So I took the liberty of making them a sort of beige that was supposed to be reminiscent of an old-school IBM PC computer case.

The next job was designing some toys to be printed. The most visible toy is a little robot being printed on the printer in the foreground. It is inspired by “Adabot” — the mascot of Adafruit Industries where I buy all of my electronic parts and maker supplies.

I wanted to have 2 different versions of each toy. One would be a completed version that would sit on the shelf next to the printer and the other would be the parts of the toy as they would sit on the printer platform. A shape like the robot cannot be printed all at once. You could print the body and head together although they would likely be separate. But you would definitely print the knobs and legs separately. Here is a close-up.

I didn’t put a lot of detail into the robot and he is not an exact duplicate of Adabot but people who are familiar with the character will immediately recognize him. In addition to using the robot as one of the toys, I also wanted to depict a robot being printed on the computer tablet that is being held by the elf in the foreground of the picture. Here is the image map I used on that tablet.

The image above is a screen grab from Simplify3D, which is the program I use to actually 3D print objects. Unfortunately POV-Ray doesn't export objects in a format that can be 3D printed or loaded by Simplify3D. So I had to use the 3D modeling software Blender to create another version of the robot just so that I could load it into Simplify3D and create that screen grab. It only took me about a half-hour to re-create the robot in a different program because I had all of the dimensions and proportions already figured out in POV-Ray. It was pretty easy to model it, export an STL file, load it into Simplify3D, and do the screen grab. I probably put in way too much detail because you can barely see the robot on the tablet. But you can certainly tell that he is looking at a screen representation of what's being printed in front of him, and that was the point. Here's a close-up that shows more detail than you can see in the original image.

The second toy that I wanted to model was a racecar. I thought about doing a regular stock car or perhaps a toy truck, but it turns out that the racecar was easier to model, and being from Indianapolis, an IndyCar was a good choice. Again I needed to create two different versions of the model. One of them was an assembled car sitting next to the printer. The other was the car as it would be printed on the 3D printer. I decided that if I were really going to 3D print such a model, the easiest way to do it would be to split it lengthwise down the center and print each half separately. Also the tires would be printed separately. You can see the wheels being printed on a printer in the second row.

After the image was nearly completed I noticed I had made a mistake. The 3D printer doing the racecars was actually printing 2 left sides of the racecar rather than a right and a left. Probably no one would notice but I was glad I caught the mistake in time. I created the model so that it was easy to change the color of the car. In addition to the red car on the printer in the foreground you can see green cars being printed in the background and there is an elf pushing a tray full of assembled orange cars. Here’s a close-up of the IndyCar and of the wheels as they appear on one of the 3D printers in the background.


Additionally in the background you can see there are printers creating 2 different colors of a toy rocket. This of course was the simplest of the toys to create. I thought about creating other toys. One I seriously considered was a little toy boat affectionately known as “Benchy”. It is a classic 3D printer model used as a benchmark to judge the print quality of various 3D printers. The problem was it is only available in STL or other CAD formats and it would’ve been difficult to import into POV-Ray. I decided that with different colored racecars and different color rockets and given the fact that you could barely see what was printing on the back rows of printers, it wasn’t worth my time to model additional toys.

One of the nice things about creating computer models for scenes like this is that you can duplicate multiple copies of the printers and toys and place them however you want. I just created a loop that generated an entire row of printers on a table, and then another loop that created multiple tables, one in each row. Here is a wide-angle elevated view of the entire scene.
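The duplication loops are roughly like the following minimal sketch; the object names, counts, and spacings here are placeholders rather than the actual scene values.

// Nested loops: one loop per table (row), one loop for the printers along each table.
#declare PrintersPerRow = 6;
#declare Rows           = 4;
#declare PrinterSpacing = 18;    // distance between printers along a table
#declare TableSpacing   = 30;    // distance between tables, front to back
#declare TableHeight    = 10;    // top surface of the table

#declare Row = 0;
#while (Row < Rows)
   object { Table translate Row*TableSpacing*y }   // Table and Printer are assumed to be declared elsewhere
   #declare Col = 0;
   #while (Col < PrintersPerRow)
      object { Printer translate <Col*PrinterSpacing, Row*TableSpacing, TableHeight> }
      #declare Col = Col + 1;
   #end
   #declare Row = Row + 1;
#end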

I thought about doing something fancy for the background but the image was already looking sort of “busy” so I just added a blank wall. I was also going to put boxes of supplies and perhaps additional spools of filament underneath the tables but that was going to be a lot of work and as I said the image was perhaps already a bit cluttered. I decided to just put down a red and green tile floor.

The final piece was of course to design an elf. The design was very loosely inspired by the elves in the animated Christmas special "Rudolph the Red Nosed Reindeer". I repurposed the hat from a Santa Claus model I had created for my 2011 Christmas card, which depicted a dining room table set for Christmas dinner; a little Santa doll was a decoration on the table. That Santa also appeared on the roof of my house in the 2014 Christmas card. Many of my Christmas card designs reuse models from year to year or adapt previous models for new purposes. But this year, everything except the elves' hat, which was redesigned from Santa's previous hat, was completely original for this card. I have not created such a totally original design in a long time. Below is the 2011 card, which was the most complex Christmas card I ever created, with dozens of elements repurposed from previous cards. Also here is the 2014 card depicting Olaf the snowman in front of my house with Santa on the roof. Don't forget you can click on the images for larger versions.

Here is a look at the elf from different angles. I didn't bother creating feet because they were not going to show in the final image. I used a crinkled normal pattern to make parts of the costume look like they were made from fur when viewed from a distance. The arms were made from Bezier splines so that I could move the hands around into different positions and the arms would follow.
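I don't have the actual texture handy, but a small-scale wrinkles normal is one way POV-Ray can fake fur at a distance; the colors and values below are guesses.

// Fur-like trim: a fine wrinkles normal over a plain white pigment reads as fuzz from far away.
texture {
   pigment { rgb <0.95, 0.95, 0.95> }
   normal  { wrinkles 0.6 scale 0.05 }
   finish  { diffuse 0.8 }
}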

There was one final detail that was missing from the image. My guess is no one would have noticed had I left it out, but after I took the time to put it in, I was glad I did. When you print something on a 3D printer, the software draws an outline around the object several times to ensure that the plastic is flowing smoothly out of the nozzle before the actual print begins.

I started by making a rendering of just the build plate with the toy sitting on it with the camera directly overhead looking down. Then I loaded the image into a paint program and using the mouse, drew an outline around the object. I then erased the toy, adjusted the color palette, and reused that image as an image map on the print bed of each printer. Here is the overview rendering that was the first step of the process.

Here is the same image after I’d drawn the outline by hand.

Here is the image map that has the outline but the toy is removed. The color palette has been adjusted.

I had to create a separate image like this for the robot, racecar, wheels, and rocket. I tried to make the ones for the robot and racecar reasonably accurate, but the others were going to be so small in the background that I didn't put much detail into the outline. You can barely see it in the final image.
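Applying one of these retouched overhead images back onto a print bed looks roughly like this; the file name and bed size are assumptions.

// The outline image map stretched across the top surface of the print bed.
#declare BedSize = 12;
box {
   <0, 0, 0>, <BedSize, BedSize, 0.05>
   pigment {
      image_map { png "robot_outline.png" once }   // an image map covers the unit square in the x-y plane
      scale <BedSize, BedSize, 1>                  // stretch it to cover the whole bed
   }
}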

Here is a close-up of the robot sitting on the printer. You can see the outline on the printer bed. This also shows how I used a clipping plane to chop off the top of the toys to make it look like the print was in progress and not yet complete. On a real print like this, there would be a crisscross or honeycomb-like interior to the model, but I just decided to leave it hollow.
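The clipping-plane trick is nearly a one-liner in POV-Ray. Here is a hedged sketch; Robot and the heights are stand-ins, not the real model.

// Keep only the part of the toy below the current layer height so the print looks unfinished.
#declare PrintProgress = 0.4;     // fraction of the toy's height already printed
#declare ToyHeight     = 6;
object {
   Robot
   clipped_by { plane { z, PrintProgress*ToyHeight } }   // chop everything above the cut; the interior stays hollow
}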

The final step was to add the text overlay. I considered just using a paint program to overlay the text on the completed image. However, I wasn't sure how that text was going to impact the details of the image. I would've had to re-render the image at different zoom factors and camera positions to allow space for the text to show up. Doing those re-renders and then redoing the text multiple times was going to be complicated. In the end, I modeled the text in the rendering program, so the text is actually an object sitting in the foreground of the image. It takes quite a bit of math to move the text object into position so that it is completely perpendicular to the line of sight of the camera. You take the location of the camera, the location the camera is looking at, and then do a bunch of math. It took me nearly a day to figure out, and I might have been able to do all of the re-rendering and adding of the text as post-processing quicker than figuring out the math. However, if I ever need to do this again, I've already got the math worked out in advance. So I figured it was worth the effort. Here is a side view of the scene that shows that the text is actually an object floating in midair.
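For what it's worth, here is one way that math can be worked out in POV-Ray. It is only a sketch under my own assumptions (z as the up axis, made-up camera coordinates, and a font that ships with POV-Ray), not the card's actual code.

// Build an orthogonal frame from the camera's line of sight, then remap the text object onto it.
#declare CamLoc = <20, -60, 25>;
#declare LookAt = < 0,   0, 10>;
#declare Dir    = vnormalize(LookAt - CamLoc);   // viewing direction
#declare Right  = vnormalize(vcross(z, Dir));    // horizontal axis perpendicular to the view (z is the sky vector here)
#declare Up     = vcross(Dir, Right);            // completes the frame

#declare Greeting =
   text { ttf "timrom.ttf" "Merry Christmas" 0.25, 0 pigment { rgb <1,0,0> } }

object {
   Greeting
   // A text object's local axes are x (along the text), y (up the letters), and z (into the screen),
   // so map those onto Right, Up, and Dir respectively.
   matrix < Right.x, Right.y, Right.z,
            Up.x,    Up.y,    Up.z,
            Dir.x,   Dir.y,   Dir.z,
            0, 0, 0 >
   translate CamLoc + 12*Dir                      // park the text a fixed distance in front of the camera
}

For a real card you would also center the text on its bounding box and pick the distance so it clears the foreground objects.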

Here is what the interior of the Christmas card looks like.


Because some of my friends and family have never seen a 3D printer or know how one works, I decided to put something special in the cards of several recipients. I made a little 3D printed Christmas ornament in the shape of a piece of holly. Embossed on the front of the object I put their name. Here is a sample print showing my last name.

Finally here is another blog post on my technology blog with a YouTube video showing how I created the little ornaments.

How I Made 3D Printed Customized Christmas Holly Ornaments

You can also find other blog posts about other 3D printed Christmas ornaments that I have created here and here.

I hope you have found this little behind-the-scenes peek at how I create my Christmas card educational and entertaining. Merry Christmas everyone.

3-D Printing a POV-Ray Model

gabriel

The above image is a rendering of St. Gabriel the Archangel that I created sometime in 1995 using the Persistence of Vision Ray Tracer or POV-Ray. It is loosely based on the logo for St. Gabriel Church in Indianapolis where I attend. Click here to see a YouTube video that explains how I designed it. The angel has appeared in many Christmas cards that I’ve designed including the first card I designed in 1995 which you can read about here. The style of the angel became the basis for a number of other figures in other Christmas cards that I’ve created over the years.

I've always wondered what this angel would look like in real life. With its shiny, somewhat iridescent surface I've always envisioned it as if it were made out of glass or fired ceramic. Not having any skill in that field, my desire to see it realized had to be put on hold for decades. However, a little over a year ago I purchased a 3-D printer and I finally found a way to create a printed version of the angel. The halo is rendered as a sort of gaseous blob floating around the head, so there's no way I can re-create that, but the rest of the figure can be 3-D printed.

The biggest technical problem is that POV-Ray does not export its designs in a format that can be used by 3-D printers. I eventually found a way to get POV-Ray to scan the shapes and create a set of points in 3-D space referred to as a "point cloud". I then used that exported data in a program called Mesh Lab, which turns the set of points into a mesh of triangles that can be imported into other CAD design software and eventually be 3-D printed.

Here are some photos of a small version that I printed in a single color of plastic. The trumpet was 3-D printed separately, but the rest of the figure was printed in one piece. It is about 2 inches tall.

small_angel

Next I printed a somewhat larger version, about 3 1/2 or 4 inches tall, in different kinds of plastic. The main body and arms are printed in sky-blue PLA plastic. The wings are printed in a transparent plastic called "t-glase". The end result is sort of a frosty, translucent look. The hair was printed in dark brown PLA. It was difficult to find flesh-colored plastic for the face, so I printed it using the light blue and then we painted it with acrylic paint. The trumpet was printed in orange PLA and then coated with gold glitter paint. We used glue to put the pieces together. The angle of the hair is off by a little bit and does not line up with the face perfectly. I may end up creating another version that we will assemble more carefully.

angel1
angel2

When I initially printed the wings, I printed them both at the same time, but as the print head went back and forth between the two objects it left an artifact in the middle of the wings, as seen on the left. I then reprinted them one at a time and got a clean set.

bad_wings good_wings

Before printing the body, I printed a small test piece to see what tolerance I needed for the holes where the wings would be inserted, as seen in the image on the left. On the right it shows the hair and the face before I painted it. Below that is the body with the cutouts for the wings and the head.

test_fit hair_face

big_body

When completed the entire model was sprayed with a clear coat which can be seen in this flash photo.

angel3

Here is a YouTube video describing the process that I used to convert the POV-Ray shapes into something that could be 3-D printed.

Here is a technical article giving more details about the software that I used.

Converting POV-Ray Shapes to Triangle Mesh for 3-D Printing

The Persistence of Vision Ray Tracer, or POV-Ray, is a free, open source ray tracing rendering engine that uses a special text-based scene description language for modeling and rendering objects. Unlike many CAD programs that use triangle meshes to define their shapes, POV-Ray uses mathematical formulas to define its primitive shapes. In addition to traditional primitives such as the sphere, box, and torus, it includes blobs, fractals, and polynomial-based objects. It does also support creating objects out of triangle meshes, but there is no built-in way to convert other POV-Ray objects into triangle meshes. If you want to 3-D print an object you've designed in POV-Ray, or transfer it to some other CAD program that only supports meshes, there's no surefire way to do that.

POV-Ray does give you the ability within its scene description language to fire a ray at any object and determine its intersection point and surface normal at that point. We can use this feature to fire rays at an object in a grid and create a set of points called a “point cloud”. These points can be output to a text file and then imported into other software that will create a triangle mesh based on this collection of points. This method is not useful for exporting an entire scene with many objects and lots of details. But if you have one of those organic blob, fractal, or polynomial primitive shapes that you want to convert to a mesh it does give you a reasonably accurate representation of the original shape.

We have created a set of macros called “pov2mesh” that automates this process for you. In this tutorial we will describe how to use this software along with other free programs to scan a POV-Ray object, create a set of points, and convert it to STL files suitable for import into other modeling software and eventual 3-D printing.

The heart of the process is the POV-Ray vector function "trace", which is used as follows:

   #declare Norm=<0,0,0>;                               // will receive the surface normal at the hit point
   #declare Hit=trace(Object,Location,Direction,Norm);  // fire a ray at Object from Location along Direction
   #if(vlength(Norm))                                   // a zero-length normal means the ray missed
      #debug concat(vstr(3, Hit, " ",0,6), "\n")        // write "x y z" for the hit point to the debug stream
   #end

The function returns a three element vector containing the XYZ coordinates of where the ray hits. You pass it the object you want to trace, the initial location, the direction of the ray, and a vector variable that it will use to return the surface normal. You have to pre-initialize a vector for the surface normal. The function modifies that parameter to return the value. If the length of that normal vector is zero then the ray did not hit. If the length is nonzero we then output the values to a text file using the #debug statement. Each point is output to a single line separated by spaces.

The macros we are supplying use this code to fire rays at your object in a grid pattern of parallel rays. We also have a cylindrical pattern which we will describe later. The grid macro is as follows:

Grid_Trace(Object,StartR,EndR,DeltaR,DirR,
           StartC,EndC,DeltaC,DirC,CamL,CamD,DotR) 

The first parameter is the object you want to scan. The scan is done in rows and columns. You specify the start coordinate of the rows, the end coordinate, the increment or delta value that is the distance between the rows, and the direction of the rows. This is followed by the start, end, delta, and direction of the columns. The next two parameters are the location from which the rays are fired and the direction in which they are fired. If you want to visualize where the hits occur, the final parameter is the radius of a tiny sphere which will be rendered at each intersection point. If you pass zero then the spheres are not created. Here is an example of how you would invoke this macro:

Grid_Trace(My_Object,-3,3,0.25,y, -4,4,0.25,x, -10*z,z, 0.1) 

This uses rows in the y direction from -3 to +3 in 0.25 increments. The columns go from -4 to +4 in 0.25 increments in the x direction. The rays start at -10*z and are emitted in the +z direction. The radius of the dots is 0.1.

The trace function only gives you the first hit where the ray intersects the object. Therefore it is generally necessary to fire the rays from multiple directions. We also provide a cylindrical pattern that you would invoke as follows:

Cyln_Trace(Object,StartR,EndR,DeltaR,DirR,
           StartH,EndH,DeltaH,DirH,CamL,CamD,DotR) 

The second through fifth parameters are the starting angle of rotation, the ending angle, the increment in degrees, and the axis of rotation. The next four parameters are the start, end, increment, and direction of the height of the cylindrical scan. The other parameters are the same as in the grid trace version of the macro. Here is a typical way to invoke it.

Cyln_Trace (My_Object,0,360,2,y, -4,4,0.25,y, 10*x,-x, 0.1) 

This does a cylindrical scan going from 0 to 360 degrees rotating about the y-axis in 2 degree increments. It goes from -4 to +4 along the y-axis in 0.25 increments. The ray starts out at location 10*x and points inwards in the -x direction. That location is what’s rotated. If you wanted to scan from the inside out those parameters would be <0,0,0>,x which would put the camera on the origin and point outward in the +x direction.

We also give you a combination macro which will fire rays from the top, bottom, left, right, front, and back as well as cylindrical scans outside in and inside out. It is defined as follows:

All_Trace (Object,GridMin,GridMax,Deltas,DeltaAng,Dot) 

The parameters GridMin and GridMax are vectors defining the bounds of the area to be scanned. Deltas is the increment for grid scan and DeltaAng is the increment in degrees for the cylindrical scans. You would invoke it as follows:

All_Trace (Object,<-3,-4,-3>,<3,4,3>, 0.25,2, 0.1) 

You can download the sample code from GitHub at the following link. https://github.com/cyborg5/pov2mesh It contains three files. The first file “test_platform.pov” is a standard camera and lighting system that I use when testing objects. The file “pov2mesh.pov” contains the macros. Finally there is a sample scene “blob.pov” using blob shapes that we will use to illustrate the process of converting the data into a mesh. Here is that scene:

#declare test_cam_loc=<20,20,-50>;
#include "test_platform.pov"
#include "pov2mesh.pov"
#declare Strength = 1.0;
#declare Radius1  = 1.0;

#declare My_Object=
   blob{
     threshold 0.6
     sphere{< 0.75,  0,   0>, Radius1, Strength scale <1,1,0.5>}
     sphere{<-0.375, 0.65,0>, Radius1, Strength}
     sphere{<-0.375,-0.65,0>, Radius1, Strength}
     scale 5
     pigment{rgb<1,0,0>}
   }

object {My_Object}
All_Trace(My_Object,<-6,-6,-6>,<6,6,6>,0.2,2,0.05)
background {rgb 1}

You should render this scene and redirect the debug output to a text file using the command-line switch "+GDblob.asc". The ".asc" extension simply indicates an ASCII text file, which we will import into a free program called Mesh Lab. Here is where you put the switch to redirect the output.

pov_gd_switch
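From a command line (rather than the Windows GUI shown above), the render might be invoked something like this, assuming the scene file is named blob.pov:

povray +Iblob.pov +W800 +H600 +GDblob.asc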

Here is the image created which shows you the dots on your object. The box around the object illustrates the minimum and maximum areas for the grid scan. We don’t really need the image that was rendered. We are only interested in the text file containing the points.

blob

This is an example of a few lines of the file we created.

-4.200000 -3.600000 -0.327045
-4.200000 -3.400000 -0.454927
-4.200000 -3.200000 -0.476401
-4.200000 -3.000000 -0.408605
-4.200000 -2.800000 -0.164190

We will now import this file into Mesh Lab. It is a free program available on multiple platforms which you can download here: http://meshlab.sourceforge.net/ You can click on the images throughout this blog to see larger versions. Open Mesh Lab and click on the File -> Import Mesh menu. A small dialog box will open and you should use the default settings. You will then see the point cloud you have imported.

meshlab_points

Because our scanning method could have produced identical points or points that were extremely close to one another we need to filter them out using a “Clustering Decimation” filter. You should click on the menu Filter-> Remeshing, Simplification and Reconstruction-> Clustering Decimation item. A dialog box will pop up and you should enter a number in the first field “Cell Size”. I recommend entering a world unit of 0.1 which means that any points which are closer together than 0.1 units will be combined into an average location of a single point. This value should be equal to or perhaps less than the increments you used when creating the scan of points in POV Ray. You also have an alternative percentage value that you can set if you would rather do it that way. You then click on the “apply” and “close” buttons.

meshlab_cluster_menu meshlab_cluster_dialog

Next we need to compute the normal for each of the vertices. Although the POV-Ray "trace" function gave us the normal, we have not yet figured out how to import that information into Mesh Lab. Click on the menu Filter-> Normals, Curvatures and Orientation-> Compute normals for point sets. You can use the default settings in this dialog box. Click on the "apply" and "close" buttons.

meshlab_normal_menu meshlab_normal_dialog

We are now ready to actually construct the faces of the mesh using the points. On the menu select Filter-> Remeshing, Simplification and Reconstruction-> Surface Reconstruction: Poisson. Note that there are two other methods of surface reconstruction available: "Ball Pivoting" and "VCG". To be honest, I don't fully understand what these three options mean. The Poisson method was recommended by a website I found and it works for me, so I use it. You can probably use the default settings in the dialog box that pops up. Again, I'm not really sure what these values do. At times I've experimented by increasing the first two values from 6 to 8 and the samples from 1 to 2, and it sometimes gave me slightly better results. Or you can just click on "apply" and "close".

meshlab_poisson_menu meshlab_poisson_dialog

You should now see your object shaded gray like the image below.

meshlab_completed

However, if it comes out very dark or black like the image below, it means that the surface normals of the faces somehow got inverted and we need to force them to flip over.

meshlab_needs_flip

To invert the face normals use the menu Filters-> Normals, Curvatures and Orientation-> Invert Faces Orientation. Use the default values on the dialog box and make sure that “Force Flip” is checked. Click on “apply” and “close”.

meshlab_flip_menu meshlab_flip_dialogue

Finally, we are ready to export our newly created mesh as an STL file. On the menu select File-> Export Mesh. Give it a filename and under "Files of type:" choose STL File Format.

meshlab_export_menu meshlab_export_dialogue

Many CAD programs and 3-D printing slicing software can import, manipulate, and print STL files. We like to use Blender for editing files and designing models. You can obtain it at https://www.blender.org/ We're not going to go into any detail on how to use Blender because there are plenty of online tutorials available, especially on YouTube. Also, you may be using other software. In Blender you can click on the menu File-> Import-> STL. You'll probably notice that your object is not in the orientation that you expected.

blender_imported

That is because the POV Ray coordinate system has the y-axis pointing upwards and the z-axis pointed into the screen. Most CAD programs have the y-axis pointing away from you and the z-axis pointing upwards. So you may have to rotate and/or mirror flip to get your object oriented properly.
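One hedged workaround is to scan a copy of the object that has been pre-rotated inside POV-Ray so its y-up orientation becomes z-up before the points are ever exported; depending on your model you may need the opposite rotation.

// Scan a pre-rotated copy so the exported points already use z as the vertical axis.
#declare Export_Object = object { My_Object rotate 90*x }   // try -90*x if it comes out upside down
All_Trace(Export_Object, <-6,-6,-6>, <6,6,6>, 0.2, 2, 0)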

blender_oriented

Here is what the object looks like in edit mode so that you can see the individual triangles. You may want to use a decimation modifier to reduce the number of triangles in relatively smooth areas while retaining detail in areas with tight radii.

blender_mish

Unless you use extremely small increments, you may lose detail in converting your object. Sharp edges are never going to be completely sharp using any scanning method. You may have to do some retouching in another program. This method is really only intended for those bizarre shapes that POV Ray can render that are not available in other CAD programs.

Here is a link to another article describing how I used this software to 3-D print a POV-Ray object that I created many years ago. It includes a YouTube video that shows an animated depiction of how the scanning process works.

angel1

Shining Your Light on Christmas: Making a Ray-Traced Christmas Card

Each year it's a challenge to come up with a new design for my famous ray-traced, computer-generated Christmas cards. Sometimes, when I run out of ways to outdo myself from the previous year's card, I come up with a gimmick. For example, in 2010, when I had just purchased a new 3-D TV, I decided to make a 3-D Christmas card. I rearranged the figurines from an older card that showed a nativity scene and rendered it in 3-D. Each card came with its own set of red/blue anaglyph 3-D glasses for viewing. I also made an animated version where the camera moves in and around the scene. It was a little video clip that I could show on the 3-D TV. Click here for a YouTube version.

This year I was totally stumped on what to do next. I've done almost all of the religious-themed cards that I could think of. I've done Mary and the Child (1997), the Three Kings (1998), shepherds and angels (2002), Mary and Joseph and the Child (2001), and Mary and Joseph and the innkeeper (2003). I've done various Christmas trees, some close up with just a few ornaments (1996), and an entire decorated Christmas tree (2006). Then I did a Christmas stocking (2007) and a fireplace with multiple stockings (2008). In 2011 I had my most complex card ever, which included a Christmas tree, a fireplace and stockings, and a dining room table with Christmas decorations, with a china cabinet in the background full of plates with various patterns on them. Last year I did a close-up of the same table with Christmas ornaments hanging in a cone shape that resembled a tree. There just isn't much else to do. Here is a link to a Facebook album showing all of my computer-generated Christmas cards.

The only area of Christmas that I haven't explored is more of a kids' theme with Santa Claus, reindeer, snowmen, etc. So I decided that was the road I needed to explore next. However, except for a small Santa figure on the dining room table scene, I had not created any objects from this genre. One of the things that makes these cards easier to make year after year is that I can reuse models and reincorporate them from year to year.

Perhaps it was time for another one of my gimmick cards that reflects what's going on in my life currently. If I look back over the past year, the thing that has occupied my life most is taking up the hobby of electronics. I have built multiple gadgets using Arduino-based microcontrollers, and I have been playing with the little credit-card-sized computer known as the Raspberry Pi.

pen

There is a company called Bare Conductive that makes electrically conductive ink and paint. They sell little kits where you can make greeting cards that have blinking LEDs and tiny button sized batteries. You connect the LEDs to the batteries using their special electrically conductive paint. They have a card that shows Rudolph the Red Nosed Reindeer with a blinking red nose but it is a horribly stylized design that I thought was really unattractive. In addition to selling kits for single cards, they will also sell you a package of 50 LEDs and 50 batteries along with a paint pen for drawing the lines. So I ordered a tube of paint and the LEDs and batteries and started work on my own version of Rudolph.

battery led50

I needed some reference images to design my reindeer. I decided I wanted him to look like the stop-motion animated figures from the 1964 Rankin/Bass-produced TV special, which is still airing even this year. I got a copy of the TV special and did some screen grabs. In the early part of the show Rudolph is very young and has not yet grown antlers. Halfway through he grows up into sort of a teenaged reindeer. He has a good set of antlers, but they are not as full as some of the other reindeer's. Here are some of the reference images that I used in designing my model.

Screen grab images from the Rudolph TV special used as reference images.

I wanted more than just Rudolph, although I could have put him on the ground in the snow, perhaps with Santa and some elves standing around. However, one of the nice things about creating these computer models is that it's easy to duplicate them. Once you've got one reindeer it's a snap to make eight or nine. I had so much difficulty modeling the antlers the way I wanted them that I didn't feel like making a different set of more mature antlers for the other reindeer. (You can click this image for a larger version, as well as other images in this blog.)

Test rendering of Rudolph

One problem was that the LED is relatively big when you think about sticking it on the nose of a reindeer. If I filled the scene up with too many other things, Rudolph would be too small and his nose would be gigantic. So I had to make the Rudolph character the primary feature in the scene. I did some test renderings with eight reindeer and Rudolph in the front row. It turned out I really didn't have room to create a sleigh with Santa and a bunch of toys. So the sleigh is sort of off the edge of the screen, not visible, just implied.

8deer1

Then I had to figure out what to do for a background. I could have just done a plain white background and kept it sort of abstract. But I really wanted to create a full scene. It would be easy to create a night sky, perhaps with stars or snowflakes, but I wanted to give it some perspective, so I needed a ground scene. It would've taken me weeks to come up with a bunch of tiny houses or buildings covered in snow on the ground extending to some horizon. My friend Rick Ruiz suggested maybe a photo background. I started doing Google image searches for a nighttime Indianapolis skyline. I needed something that was recognizably Indianapolis but had lots of sky. I finally found the following image.

night1

Unbeknownst to me, this image was a copyrighted photograph by photographer Rich Bell. I do not recall where I obtained the image, but it did not contain any visible copyright notice. I did not bother to check the metadata, and after resizing and manipulating the image the metadata was lost. In 2016 my use of the image was the subject of a copyright infringement lawsuit brought against me by Mr. Bell. We eventually settled the dispute out of court for undisclosed terms. The image shown here is a legitimately licensed version of the image. Although I believed that my use of the image constituted "fair use" under copyright law, it would've cost me thousands of dollars in legal fees and over 18 months in federal court to see the case through to its ultimate conclusion. There is a public perception that anything on the Internet is fair game, but it most certainly is not. Any content creator who wishes to aggressively pursue their rights under the law may do so. Let my story serve as a cautionary tale. I will have more to say on the topic in a separate blog post at some point.

I had to extend the sky higher and clip off the bottom of the image that shows the city lights reflected in the canal. Now that I had decided that the reindeer would be in flight over the city rather than standing on the ground, I had to remodel their legs to put them into more of a swept-back flying position. Also, my original design had them flying from left to right, but given the way the buildings were arranged in the skyline, I had to flip the reindeer around so they were flying from right to left.

bg2

I still had the problem that the legs of the reindeer were partially blocking the skyline. I had to keep moving the skyline image lower in the scene, adding more black space at the top. One of the problems was that the shorter buildings, such as the state capitol, were disappearing off the bottom of the card. So I took some creative license and did a cut and paste to move the capitol dome and some of the other buildings on the lower right up higher in the image. It doesn't mess up the perspective too badly.

bg3

Here is the final image.

final13

I sent the image off to VistaPrint.com to have the cards printed, and soon the parts arrived from Bare Conductive to add the LEDs. We experimented by connecting an LED to one of the tiny button batteries and let it sit to see how long it would run. After about three days it was still going, but it was very, very dim.

I generally try to put some sort of Scripture quote on the inside of the card along with a Christmas greeting. I came up with the theme of letting your light shine on Christmas and immediately thought of The Sermon on the Mount where Jesus says to the believers “You are the light of the world… Let your light shine…” So here’s what it looks like on the inside of the card.

greeting

The examples on the Bare Conductive website show you how to use a little flap of paper to make a switch. You leave a gap in the circuit and then coat the flap with the conductive paint. As you hold the flap closed, it completes the circuit and the LED lights up. The nice thing about these LEDs is that they don't require any other circuitry or resistors. They just blink on their own. Here are some images of my circuit.

Paper Flap Switch Open

Paper Flap Switch Closed

One of the big problems all along in this project was deciding how I was going to mail the cards. Almost half of them would go to people that I see on a regular basis, so I could just hand-deliver those. But many of the recipients are out of town, or are people I don't see very often, and I would need to mail theirs. I asked a photographer friend if he knew where I could get boxes that were the size of a 4 x 6 photo, since that was the size of the greeting cards. He told me that everything is digital these days and photographers mostly just give the client a photo CD of their images and perhaps a few big prints. One company that made boxes for photos had gone out of business.

DVD mailer from Staples

After doing some Google searches, I realized that boxes made for mailing DVDs would be just about right. I'm not talking about flat envelopes or padded envelopes; these were boxes for mailing a boxed DVD. The post office has some that are just the right size, and you can get them for free, but you have to mail them using Priority Mail. That was likely to be more expensive. I finally found almost the same thing at Staples.com at a reasonable price, so I ordered them. Later that same day I got a phone call saying that they were out of stock and I could reorder in a few days. They did not offer to put them on backorder for me.

usps

I went ahead and ordered the priority mail boxes from the post office. Then I started figuring out the price. If I could go regular first-class I could mail a 2 ounce box for a little over two dollars. But if I was going to mail priority mail it was going to cost me five dollars each! I suppose I could have taped over the "Priority Mail" logos all over the boxes and disguised them but that was probably a federal offense.

I tried looking at Office Depot and OfficeMax, and they had something similar, but they would only sell them in packages of 50 and they cost $30-$33. Out of the 30 or so LED cards I was making, I had narrowed it down to only 13 or 14 that needed to be mailed. By the time I had paid a high price for 50 boxes, the cost per card was back up to the point where it was similar to the free boxes with the Priority Mail postage costs. I tried reordering at Staples a few days later when the website said "in stock". Unfortunately, two days later I got an email saying "sorry, they are out of stock". So we are going ahead with the Priority Mail postage.

I generally send about 65 or 70 Christmas cards each year. Because I was worried that we were going to ruin some, I went ahead and ordered 100 cards even though I could've ordered 75. It was a good thing, because we did ruin a couple of cards, and I always come up with some people I want to send to at the last minute.

Because Dad was going to be the one to have to assemble all these LEDs and batteries and little paper flap switches, and because I didn't want the costs to get too big, I picked out about 30 people who are family and my closest friends to get the special edition. The other 30 people will get a plain Rudolph Christmas card without the LED. (Shhhh… don't tell them.)

We also ruined a couple of cards because after we had them assembled we discovered that my package of 50 red LEDs actually contained five or six green LEDs! When these LEDs are not illuminated they look completely clear so you can’t tell what kind they are. Somebody messed up at the factory. They are going to get an email from me.

Here is a video of what the card looks like with Rudolph’s nose blinking.

Many thanks to my dad, who assembled all of these cards, assembled the boxes for mailing, stuck on the mailing labels, and hauled them all off to the post office.

satb100

I had to show off my card, so a couple of weeks ago I participated in the weekly Adafruit Show-and-Tell video chat on Google Plus. Here is a link to that video, in which I also showed off some computer-controlled Christmas lights. My segment of this 22-minute video starts at about the four-minute mark. But the other projects shown that night were really cool as well.

Now what the hell am I going to do next year??? I guess I will have to add a sleigh, Santa, and some toys and skip the LEDs.

Creating A Ray Traced Angel

Every year since 1995 I've created a ray-traced, computer-generated image for a Christmas card. But before I created those Christmas cards, I created an image that is my rendition of the St. Gabriel the Archangel Catholic Church logo as it would look if it were three-dimensional instead of a line drawing. I've created this video to show you the process I use to create such images. Eventually I hope to create a similar video for each one of my Christmas card designs. This angel appeared in my 1995, 1996 and several other cards. It established the visual style for many of the elements of future cards. So when you've got about 13 minutes to spare, check out the video embedded below, or better yet click here to view it on YouTube in full-screen mode since it is in full 1080 HD.

See how I turn this… Into this…
gablogo4x3 gab4x3

Here is the video below.

My First Ray Traced Computer Graphic Christmas Card 1995

All these blog entries up till now have been leading up to my first real computer-generated Christmas card. This was the cover image on the card.

My 1995 Christmas Card Image

The card was printed on ordinary 8.5″ x 11″ paper at one-quarter size with a quarter-inch margin. The page was then folded twice to make a 4.25″ x 5.5″ card. On the inside was the following message.

“Can it indeed be that
God dwells among men
on Earth?”

1 Kings 6:27

“And the Word became flesh
and made his dwelling
among us”

John 1:14

Merry Christmas and Happy New Year

This would be followed by my signature. I actually signed my name one time on a piece of paper, scanned it, and turned it into an image which I have cut and pasted into Microsoft Word for every Christmas card since then. So if you get a Christmas card from me in 2013, my signature will look exactly like it did in 1995 because it's the same image.

The word “Emmanuel” is Hebrew for “God with us”.

While this image looks like a couple of angels in the foreground, the Star of Bethlehem high in the background, and Bethlehem itself on a distant horizon, it is in fact a clever bit of forced perspective. When creating models for ray tracing, it is sometimes important to model the objects at a particular size in order to get the proper lighting and texture effects. I also wanted to convey the idea that the scene was being backlit by the Star. However, with a bright light source in the distance casting shadows forward, the angels themselves were too dark. When I tried shining a light on the front, it illuminated the ground and the town of Bethlehem too much. I ended up creating four spotlights, two each shining on the angels to illuminate them.

But there's more at work here. The "little town of Bethlehem" is much littler than you think. It is actually a small model sitting in the foreground. Also, the Star looks like it is distant, with lens-flare rays appearing in front of the other objects. In fact the Star and the rays are a physical object in the foreground between the Bethlehem model and the angels. In the image below I've moved the camera back and up at a different angle to show you what the model really looks like. Sometimes when you're trying to compose a good-looking image, you just have to fudge it.
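A fill spotlight of the kind described above looks something like this in POV-Ray; the positions and colors here are guesses, not the 1995 scene's actual values.

// A dim, warm spotlight aimed at one of the angels so the backlit figures are not lost in shadow.
light_source {
   <2, 5, -8>
   color rgb <1.0, 0.95, 0.85> * 0.6
   spotlight
   point_at <0.5, 1.5, 0>    // aimed at the angel
   radius 10                 // inner cone (full intensity), in degrees
   falloff 25                // outer cone, where the light fades to nothing
}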

The secrets behind the forced perspective in this image.

We've not seen the last of the St. Gabriel angel. It will show up again in the 1996 Christmas card as a Christmas tree ornament. Stay tuned…

It All Started with the St. Gabriel Logo

Official logo of St. Gabriel Church Indianapolis originally designed by parishioner artist Joanne Austell

Continuing with my saga of computer-generated Christmas cards: before we get into the first card, I want to tell you about my ray-traced version of the St. Gabriel the Archangel Catholic Church logo, shown here on the right.

The image was designed by St. Gabriel parishioner and artist Joanne Austell to commemorate the 25th anniversary of the founding of our parish on the northwest side of Indianapolis. I’d always been a big fan of the image and I promoted it and used it wherever I could especially when designing the website for the parish. I wanted to create a three-dimensional looking version of this elegant 2-D line drawing.

The image below is what I came up with. I didn't actually do as good a job of re-creating it as I had hoped. When I was working on it, I didn't have a copy of the logo handy. Somehow I thought the original line drawing had a nose on it. I remembered it having a very distinctive shape, and I spent a lot of time trying to re-create that contour. Unfortunately, the contour I was remembering was actually the chin of the original drawing. By the time I had completed the image and realized my mistake, I had already gotten used to seeing the nose on my version, so I kept it. My image actually has no chin at all. I spent a lot of time trying to get the sweep of the arms just right. I was never completely satisfied with that, but at least it's closer to the original than the face was.

This angel image would be reused in my 1995 Christmas card as well as several other cards. It also was the basis for the style of all the figures used in subsequent cards. So if I'm going to tell the story of how I created all of those Christmas cards, I really needed to show this one first, even though it wasn't a Christmas card.

You can click on the image for a larger version.

At one point I had created a detailed video explaining the mathematical formulas behind the various shapes and giving a sort of step-by-step visual explanation of how I pieced together the geometric shapes to create the image. Unfortunately, I never completed the video, and some of the files I used to make it (most notably the open captions and the voiceover narration) somehow got lost. I'm going to go back and re-create that video someday, but for now you can just enjoy the still picture.

In the next installment we finally get to my 1995 Christmas card.

Fractal Valentine

I thought I would kick off my graphics blog on Valentine's Day with one of my favorite images. This fractal Valentine contains 2,327 individual hearts of seven different sizes.

From Ray tracing

I will add more details about this image later. I just wanted to get it posted quickly for Valentine’s Day.