A project log for Metaverse Lab

Experiments with Decentralized VR/AR Infrastructure, Neural Networks, and 3D Internet.

alusion 11/07/2015 at 07:53

Artificial Neural Networks and Virtual Reality

A vision came to me through the 802.11 in the form of a dream-catcher.

I had a collection of panospheres I scraped, with permission, from good John and got inspired to create.

I spun up some Linux machines and began to make something. I'll let the pictures do more talking.

# cut the strip into 256x256 tiles and reassemble them into a cube-map
# cross (the original used a small `adjoin` helper; here the same steps
# are written with ImageMagick's append operators)

i=$(ls *.jpg | head -n 1)

convert "$i" -crop 256x256 out_%02d.jpg

# vertical spine of the cross: top, middle, bottom faces
convert out_04.jpg out_01.jpg out_05.jpg -append test.jpg
# attach the remaining faces on either side, centered
convert -gravity center out_00.jpg test.jpg +append test1.jpg
convert -gravity center test1.jpg out_02.jpg out_03.jpg +append "$i"
mogrify -rotate 90 "$i"
rm {test,test1}.jpg
rm out_*

Convolutional neural networks are loosely modelled on the biology of the visual cortex and give the machine a deep, contextual understanding of information, most often images. I trained neural-style on a variety of artists throughout history, such as Picasso, de Chirico, Monet, Dalí, and Klimt, as well as stained-glass nativity scenes, video games, and other experiments.

Frames are saved between iterations, allowing a viewpoint into the mind of the program.
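As a sketch, a typical invocation of jcjohnson's neural-style looks like the following; `-save_iter` is the flag that writes out those intermediate frames. The file names and values here are placeholders, not the ones used in this log:

```python
# Hypothetical neural-style invocation (jcjohnson/neural-style, Torch).
# File names are placeholders; the flags are the tool's documented options.
cmd = [
    "th", "neural_style.lua",
    "-style_image", "klimt.jpg",         # style source (placeholder)
    "-content_image", "panosphere.jpg",  # cube-map cross to stylize (placeholder)
    "-output_image", "out.png",
    "-image_size", "512",                # output resolution
    "-num_iterations", "400",
    "-save_iter", "50",                  # dump a frame every 50 iterations
    "-backend", "cudnn",
]
print(" ".join(cmd))
# run with subprocess.run(cmd) on a machine with Torch and the CUDA stack
```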

The cross is a portal into a world, which is rendered by unfolding the cross into a skybox.

[ View above image in WebVR here ]

After chopping it up into cubemaps, I made a simple FireBoxRoom to render the skybox:

<AssetImage id="sky_down"  src="09.jpg" tex_clamp="true" />
<AssetImage id="sky_right" src="06.jpg" tex_clamp="true" />
<AssetImage id="sky_front" src="05.jpg" tex_clamp="true" />
<AssetImage id="sky_back"  src="07.jpg" tex_clamp="true" />
<AssetImage id="sky_up"    src="01.jpg" tex_clamp="true" />
<AssetImage id="sky_left"  src="04.jpg" tex_clamp="true" />
<Room
    pos="0.000000 0.000000 0.000000"
    xdir="-1.000000 0.000000 -0.000000"
    ydir="0.000000 1.000000 0.000000"
    zdir="0.000000 0.000000 -1.000000"
    cursor_visible="true"
    skybox_down_id="sky_down"
    skybox_up_id="sky_up"
    skybox_left_id="sky_left"
    skybox_right_id="sky_right"
    skybox_front_id="sky_front"
    skybox_back_id="sky_back" >
</Room>
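The face numbers in the snippet above fall out of how ImageMagick numbers crops: slicing the 4x3 horizontal cross into 256-pixel tiles yields row-major indices 00-11, so each face's filename can be derived from its position in the cross. A minimal sketch (the helper names are mine):

```python
# Where each cube face lands when the 4x3 cross is cropped into
# 256x256 tiles numbered row-major (00.jpg .. 11.jpg).
COLS = 4

def tile_index(row, col):
    """Row-major tile number, as produced by `convert -crop 256x256 %02d.jpg`."""
    return row * COLS + col

# up/down sit above and below the second column; left, front, right,
# back run across the middle row of the cross.
FACES = {
    "up":    tile_index(0, 1),
    "left":  tile_index(1, 0),
    "front": tile_index(1, 1),
    "right": tile_index(1, 2),
    "back":  tile_index(1, 3),
    "down":  tile_index(2, 1),
}
print(FACES)
```

The indices match the `src` attributes in the FireBoxRoom markup: 01, 04, 05, 06, 07, and 09.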

There were some obvious defects:

One can see the outline of the skybox, caused by the tile edges interfering with each other when the cross is processed as a single image:

A GTX 960 can only manage the default 512-pixel output resolution before running out of CUDA memory:

/usr/local/bin/luajit: /usr/local/share/lua/5.1/cudnn/SpatialConvolution.lua:96: cuda runtime error (2) : out of memory at /home/alu/repo/cutorch/lib/THC/
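The OOM is roughly explained by activation memory scaling with pixel count: doubling the edge length quadruples the feature maps the network must keep resident. A back-of-the-envelope sketch (the scaling rule is my assumption, not a measurement):

```python
def relative_memory(size, base=512):
    """Rough ratio: activation memory grows with the number of pixels."""
    return (size / base) ** 2

# 512 just fits on the GTX 960, so 1024 would need roughly 4x as much.
print(relative_memory(1024))
```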

I proceeded to create one final piece before ending the experiment by chaining several neural networks together: waifu2x, deepdream, and neural-style:

You can watch the video of the transformation here:

The final piece after many layers:

Part 2: Minerva Outdoor Gallery

What if instead of looking at art in a virtual gallery, you could go inside of the art and be in the painting?

This time, I chose to preprocess the cubemaps into equirectangular projections first, so the edges could be blended seamlessly into a single image. The format also works best for video. You can capture an equirectangular snapshot in Janus with [P] or Ctrl+F8.
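The cubemap-to-equirectangular resampling boils down to a per-pixel mapping: each output pixel's longitude and latitude define a ray, and the ray's dominant axis picks which cube face to sample. A minimal sketch of that mapping (the axis convention is an arbitrary choice of mine):

```python
import math

def equirect_dir(u, v):
    """Map normalized equirect coords (u, v) in [0, 1] to a unit ray.
    u=0.5, v=0.5 looks straight ahead (+z in this convention)."""
    lon = (u - 0.5) * 2 * math.pi   # -pi .. pi across the image
    lat = (0.5 - v) * math.pi       # pi/2 (up) .. -pi/2 (down)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

def cube_face(x, y, z):
    """Pick the cubemap face a ray hits: the dominant axis wins."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "up" if y > 0 else "down"
    return "front" if z > 0 else "back"
```

A real converter then bilinearly samples the chosen face at the ray's intersection point, which is what blends the seams away.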

It takes about 2 minutes to process a single frame with 400 iterations at 512 resolution output. HD video will have to wait until I can upgrade the setup.
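That per-frame cost is what rules out HD video for now. Assuming a 24 fps, one-minute clip (my numbers, for illustration), the render time adds up quickly:

```python
MINUTES_PER_FRAME = 2   # ~2 min per 512px frame at 400 iterations (from above)
FPS = 24                # assumed frame rate
CLIP_SECONDS = 60       # assumed one-minute clip

frames = FPS * CLIP_SECONDS              # total frames to stylize
hours = frames * MINUTES_PER_FRAME / 60  # total GPU time in hours
print(frames, hours)
```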

View with cardboard:

The beginning is near