Changes I'd Make:

*Note: This project is marked as completed, as I don't plan to do any more major updates at the moment, but I may still revisit it sometime in the future. Who knows.

Summary:

There were two overarching issues with my previous proof of concept: my DIY Google Cardboard headset was not very good, and the video I was attempting to show through it was not correctly set up for viewing in VR. The first problem was an easy fix. I went on Facebook Marketplace and found Google Daydream headsets for pretty cheap; I actually got two of them for $30, and since I have Google phones, that makes this a perfect solution for me. With the headset sorted out, I needed to move on to the more difficult task: splitting the video into two parts and applying some slight alterations to them so that, when viewed by each eye independently, they would line up and appear as one.

Before that, though: in the previous video I used a separate piece of software called Raspberry-Pi-Cam-Web-Interface to provide the live video, but after doing some digging I found that at the heart of that live video stream is a program called RaspiMJPEG, which is essentially a hybrid between raspistill and raspivid. For this use case it's configured to capture images at a specified frame rate and put each captured image into a RAM buffer. The RAM buffer is of course much faster than writing to the SD card, and it lets me continually read the latest image and send it over the air to the connected device, giving the perception of live video. RaspiMJPEG can do a lot more than this, but that's its core functionality. This time, instead of using Raspberry-Pi-Cam-Web-Interface, I decided to bake support for RaspiMJPEG directly into my Raspberry Pi action camera project code, which gives me a much better solution for the camera preview in the web control panel. After some fiddling around and searching for how to actually use the RaspiMJPEG program, I finally had it working and powering the redesigned camera preview page.
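If you're curious what the consuming side of that looks like, here's a minimal sketch of how a page can poll that RAM-buffer image to fake live video. The element id and the cam.jpg URL are assumptions on my part; the actual path depends on where RaspiMJPEG's preview_path points (often /dev/shm/mjpeg/cam.jpg) and how your web server exposes it.

```javascript
// Minimal polling sketch. Assumes the server exposes RaspiMJPEG's
// preview image as "cam.jpg", and the page has <img id="cam-preview">.
const preview = document.getElementById('cam-preview');

function refreshPreview() {
  // A cache-busting query string forces the browser to fetch the newest
  // frame that RaspiMJPEG has written into the RAM buffer.
  preview.src = 'cam.jpg?t=' + Date.now();
}

// Request the next frame as soon as the previous one arrives, so the
// poll rate naturally adapts to the network and the configured frame rate.
preview.addEventListener('load', refreshPreview);
preview.addEventListener('error', () => setTimeout(refreshPreview, 500));
refreshPreview();
```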

Ok, so we have the video feed, but now we have to split it and apply some filters so that it can be perceived correctly in the Google Daydream headset. For that I went digging around on Google and came across an article that covers exactly what I'm trying to do: taking video off a Raspberry Pi and turning it into a VR output in JavaScript. Bonus points, as they were using the same Raspberry-Pi-Cam-Web-Interface that I'm basically branching from. It was a little confusing, because the article references base code from another article that actually creates the VR environment, but after a lot of copying, pasting, and tweaking for my use case, I had the video up and working.
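To give a rough idea of the shape of that hacked-together viewer (this is my own sketch, not the article's exact code), three.js can render a scene once per eye with its StereoEffect helper, with the camera feed drawn onto a plane. The import path and element id are assumptions that depend on your three.js version and page setup.

```javascript
import * as THREE from 'three';
// Older three.js versions ship this as examples/jsm/effects/StereoEffect.js.
import { StereoEffect } from 'three/addons/effects/StereoEffect.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Reuse the polled preview image as a texture on a virtual "screen".
const img = document.getElementById('cam-preview');
const texture = new THREE.Texture(img);
const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ map: texture })
);
screen.position.z = -4; // park the screen in front of the viewer
scene.add(screen);

// StereoEffect splits the canvas into side-by-side left/right renders.
const effect = new StereoEffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

(function animate() {
  requestAnimationFrame(animate);
  texture.needsUpdate = true; // pick up the latest polled frame
  effect.render(scene, camera);
})();
```

The catch, as you'll see in a moment, is that getting the two halves to actually line up for your eyes is the hard part.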

Now all that was left was to basically reassemble everything from my previous proof of concept video! I did make one change and printed a different extension for the camera, both to make everything lighter and to get rid of the weird 90 degree connecting ends. I also wanted to use a cheaper, lighter HDMI cable, since it's much easier to manage when it's not as stiff. The only problem is that, as I mentioned in my previous video, these passive CSI to HDMI converters don't work with some cheaper HDMI cables that are missing a ground wire; but as I also mentioned there, there is a possible solution. By soldering a connecting wire between the ground on the PCB and the HDMI plug housing, we can supply that missing ground and allow the cheaper HDMI cables to work!

So now it’s time to test out the video and see how it works! I started with the same heavily cropped video I got in my previous video, but once I switched the video width and height to match the max sensor region outlined in the camera specs, I could see a LOT more, and I was even able to reduce the number of extensions to only 3, making the rig as a whole much more stable on my head. The good times would end there, though: my hacked-together VR viewer JS code was not good enough, and the two videos did not line up, giving me a weird double vision. Still, the video was much clearer, and this time around things were much more promising.
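For reference, the resolution change amounts to a couple of lines in the RaspiMJPEG config. This is a sketch assuming a Camera Module V2, where the 1640x1232 mode is documented as covering the sensor's full field of view; check your own module's sensor mode table for the right numbers.

```
# Excerpt from /etc/raspimjpeg (values are an example for a V2 module)
video_width 1640
video_height 1232
video_fps 25
```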

Tossing out my makeshift VR support, I began digging around on Google for an alternative, focusing on the Daydream and its VR environment, when I happened to come across a site talking about something called WebVR. Unfortunately WebVR has since been discontinued, BUT in its place WebXR was created in 2018, and its API is at the heart of all VR AND AR on the web today. Luckily for me, three.js, which was the driving force behind my previous faked VR support, supports WebXR, and with only a little bit of hassle I was able to redo my code to use this API and get my video rendered out in the world of VR, working more beautifully than I could have ever imagined. This is sorta fake VR, though, as the video is only rendered like a HUD in a video game, so it doesn't move with the world; but most importantly it's set up correctly so that both eyes work together to stitch the image into one instead of producing a weird double image. And with that, we now have our 3rd person camera setup!
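Here's roughly what that ended up looking like in three.js; a minimal sketch, assuming the same polled-image texture as before, with the element id and plane size as placeholders.

```javascript
import * as THREE from 'three';
import { VRButton } from 'three/addons/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // let WebXR drive the per-eye rendering
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer)); // "ENTER VR" button

// Camera feed on a plane, same idea as before.
const img = document.getElementById('cam-preview');
const texture = new THREE.Texture(img);
const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ map: texture })
);
screen.position.set(0, 0, -4);

// Parenting the screen to the camera is what makes it act like a HUD:
// it stays fixed in view rather than sitting out in world space.
camera.add(screen);
scene.add(camera);

renderer.setAnimationLoop(() => {
  texture.needsUpdate = true; // pull in the newest camera frame
  renderer.render(scene, camera);
});
```

The key differences from my hacked version are renderer.xr.enabled and setAnimationLoop: with those in place, three.js renders a correctly configured view for each eye, which is what finally got rid of the double image.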