
Metaverse Lab

Experiments with Decentralized VR/AR Infrastructure, Neural Networks, and 3D Internet.

We're living in exponential times. The Metaverse, a term coined in Neal Stephenson's visionary novel Snow Crash, is what the internet will evolve into over the next 5 years as mediated reality systems become massively affordable. It's amazing when you can send someone a link and they can suddenly be with you. With WebVR, anyone with an internet connection can learn basic web skills, build something, and share that portal with minimum friction and maximum access. There will be no sharp line between the channels in 5 years. The OASIS will be just a mirage if you're going to trust corporations to build it from the top down. It's up to us to make the future compatible with digital human rights by implementing bottom-up solutions.

Metaverse Lab is inspired by things such as the Renaissance era and the hackerspace movement. The project's goal is to think about the future of an internet that people inhabit, such as the OASIS from Ready Player One, and to build components for blending the physical and digital until they are indistinguishable from one another.

The Metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet.

The backbone of a metaverse is the internet; however, as of today there are several challenges with the current Web that prevent the realization of a true metaverse.

    • HTTP encourages hypercentralization; this needs to change if we want a fighting chance of keeping the internet open and free.
    • Control: Large companies build walled gardens that require developers to submit a complete version of their VR app for approval to be listed in the app store. Governments and corporations alike exploit the centralized model to spy on us, monetize our data, and block access to any content that represents a threat to them.
    • Dead links: Locations are centralized servers that can go down. The older a site is, the more likely it is to eventually 404.
    • Lack of interoperability: VR is exploding, but applications still need a standard that lets them talk to each other.
    • No easy way to produce content: The most popular methods of creating virtual reality content are too cumbersome for people to materialize their ideas.
    • Performance and uptime: Scaling content over HTTP is expensive.
    • Security: We're trusting the servers with the data rather than trusting the data itself.
      • Centralized servers disconnect people. Location-based servers provide consensus on the state of data in a localized area but do not allow anyone to live dynamically with anyone else in the metaverse.

      Let's reinvent the way we envision the web.

      JanusVR: Software that brings all of the web into Virtual Reality

      James McCrae, inspired by the novel Snow Crash by Neal Stephenson, which detailed the Metaverse, built an engine that allows a spatial walk through the internet. The analogy is that webpages are rooms, and links connect rooms via portals (doorways which seamlessly connect rooms).

      To be precise about the meaning of the name Janus: it refers to the portals which are used to interconnect rooms. Like the Roman god Janus, a portal is a single object with two faces that can peer into two separate spaces.

      By embedding some XML within an existing HTML file, Janus can read the content and arrange it in particular patterns on pre-defined geometry. Here's an example of a simple app for implementing an ADF (Area Definition File) from a Matterport scan:

      <FireBoxRoom> 
      <Assets> 
      <AssetObject id="scan" src="http://ipfs.io/ipfs/QmSg5kmzPaoWsujT27rveHJUxYjM6iX8DWEfkvMTQYioTb/house.obj.gz" mtl="http://ipfs.io/ipfs/QmSg5kmzPaoWsujT27rveHJUxYjM6iX8DWEfkvMTQYioTb/house.mtl" /> 
      <AssetImage id="black" src=
       tex_clamp="true" />
      </Assets> 
      <Room use_local_asset="room_plane" visible="false" pos="0 0 0" xdir="-1 0 0" ydir="0 1 0" zdir="0 0 -1" col="#191919" skybox_right_id="black" skybox_left_id="black" skybox_up_id="black" skybox_down_id="black" skybox_front_id="black" skybox_back_id="black"> 
      <Object id="scan" js_id="alusion-7-1438484330" pos="-5.8 0.043 -10.400001" xdir="0 0 -1" ydir="-1 0 0" zdir="0 1 0" lighting="false" /> 
      </Room> 
      </FireBoxRoom>
      All it takes to start creating a VR site is placing the FireBoxRoom tag within the body tag of an existing HTML file.

      Converting it into a WebVR site is very simple: just paste the FireBoxRoom code in between the comments and host the file somewhere to view it from a regular 2D browser.

      <html>
      <head>
      <meta charset="utf-8">
          <meta http-equiv="X-UA-Compatible" content="IE=edge">
          <meta name="viewport" content="width=device-width, initial-scale=1">
          <meta name="description" content="Memes">
          <meta name="author"...

      • 33c3: Retrip

        alusion, 01/19/2017 at 03:36

          32c3 Writeup: https://hackaday.io/project/5077/log/36232-the-wired

          Chaos Communication Congress is Europe's largest and longest running annual hacker conference, covering topics such as art, science, computer security, cryptography, hardware, artificial intelligence, mixed reality, transhumanism, surveillance, and ethics. Hackers from all around the world bring anything they'd like and transform the large halls with eyefuls of art/tech projects, robots, and blinking gizmos that make the journey of getting another Club-Mate seem like a gallery walk. Read more about CCC here:

          http://hackaday.com/2016/12/26/33c3-starts-tomorrow-we-wont-be-sleeping-for-four-days/
          http://hackaday.com/2016/12/30/33c3-works-for-me/

          Blessed with a window of opportunity, I equipped myself with the new Project Tango phone and made the pilgrimage to Hamburg to create another mixed reality art gallery. After more than a year of honing new techniques, I was prepared to make it 10x better.

          It's been months since I last updated, so I think it's time to share some details on how I built this year's CCC VR gallery.

          Photography of any kind at the congress is very difficult, as you must ask everybody in a picture whether they agree to be photographed. For this reason, I first scraped the public web for digital assets I could use, then limited meatspace asset gathering to the early morning hours between 4-7am when traffic is lowest. To have better control and directional aim, I covered one half of the camera with my sleeve and, in post-processing, enhanced the contrast to create digital droplets of imagery inside a black equirectangular canvas, which I then made transparent. This photography technique made it easier to avoid faces and create the drop in space.

          This is what each photograph looks like before wrapping it around an object. I used the ipfs-imgur translator script and modified it slightly with a photosphere template instead of a plane. I now had a palette of these blots that I could drag and drop into my world to play with.

          I then began to spin some ideas around for the CCC art gallery's visual aesthetic:

          I started recording a ghost while creating a FireBoxRoom so that I could easily replay and load the assets into other rooms to set the table more quickly. This video is sped up 4x. After dropping the blots into the space, I added some rotation to all the objects and the results became a trippy swirl of memories.

          I had a surprise guest drop in while I was building the world out; he didn't know what to make of it.

          Take a look into the crystal ball and you will see many very interesting things.

          Here's a return to the equi view of one of the worlds created with this method of stirring 360 fragments. After building a world of swirling media I recorded 360 clips to use for the sky. Check out some of my screenshots here: http://imgur.com/a/VtDoS

          In November 2016, the first Project Tango consumer device was released. After a year of practice with the dev kit and a month of practice before the congress, I was ready to scan anything. The device did not come with a 3D scanning application by default, but that might soon change after I publish this log. I used the Matterport Scenes app for Project Tango to capture point clouds that averaged 2 million vertices, or a maximum file size of about 44MB per ply file.

          Update: The latest version of JanusVR and JanusWeb (2/6/17) now supports ply files, meaning you can load the files straight into your WebVR scenes!

          Here are the steps to convert verts (ply) to faces (obj). I used the free software MeshLab for Poisson surface reconstruction and Blender for optimizing (special thanks to /u/FireFoxG for organizing).

          1. Open MeshLab and import the ASCII file (such as the ply)
          2. Open the Layer view (next to the little img symbol)
          3. SUBSAMPLING: Filters > Sampling > Poisson-disk Sampling: enter the Number of Samples as the resulting vertex number / number of points. Good to start with about the same...
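          The MeshLab steps above can also be batched from the command line. This is a sketch, not the workflow from the log: it assumes meshlabserver (which ships with MeshLab) is on the PATH and that poisson.mlx is a filter script you exported from the GUI containing the sampling and reconstruction filters.

```shell
# Batch-convert every .ply point cloud in the current directory to an .obj mesh.
# Assumes meshlabserver is installed and poisson.mlx is a filter script
# exported from MeshLab (Filters > Show current filter script > Save Script).
for f in *.ply; do
    meshlabserver -i "$f" -o "${f%.ply}.obj" -s poisson.mlx -om vn
done
```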

      • 3D Web Scraping

        alusion, 10/14/2016 at 05:35

        FireBoxRoom Scraper

        These scripts are meant to make the lives of Metaverse explorers and developers better while helping to decentralize the Metaverse by archiving assets to the Interplanetary Filesystem. While exploring the immersive web using JanusVR, pressing Ctrl+S copies the source code of the site you are currently on to the clipboard (and downloads the html/json file to your workspace folder).

        A one-liner to brute-force download the assets in a FireBoxRoom with absolute paths. It requires the package 'wget' to be installed; otherwise you can chop off the '&& wget -i assets.txt' part and just have assets.txt serve as a list of absolute links found in the file.

        grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" index.html | sort -u > assets.txt && wget -i assets.txt
        

        I wrote a script using python3 to more politely index and count the various assets in a given FireBoxRoom and optionally download them separately or all at once. https://gitlab.com/alusion/fbparser

        I plan to update this script to accept a URL argument to easily scrape relative paths and to be able to archive VR websites with IPFS.
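        That planned archival flow can be sketched by extending the one-liner above. The target URL and filenames here are hypothetical placeholders, and the script only handles absolute-path assets:

```shell
#!/bin/sh
# Mirror a VR site's index plus its absolute-path assets, then publish the
# copy to IPFS. The URL below is a hypothetical placeholder.
url="http://example.com/index.html"
wget -q "$url" -O index.html
grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" index.html | sort -u > assets.txt
wget -q -i assets.txt
ipfs add -wq ./* | tail -n1    # last line is the wrapping directory's hash
```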

      • Decentralized Avatars

        alusion, 10/13/2016 at 22:25

          Imgur to Avatar Translator

          https://gitlab.com/alusion/imgur-ipfs-avatars

          The next tool I want to share makes generating custom avatars a breeze, seamlessly converting Imgur albums into wearable avatars. It uses Imgur to host texture files that are generated based on a given template. The userid.txt file is where the information for your avatar is stored; it can be found in ~/Documents/janusvr/appdata/userid.txt on Windows and ~/.local/share/janusvr/userid.txt on Linux.


          The next script requires API keys (simply register on Imgur, go to settings -> applications, and insert them on lines 12-13) and imgurpython ($ pip install imgurpython).

          https://gitlab.com/alusion/imgur-ipfs-avatars/raw/master/imget.py

          For every object file in the directory, generate an avatar file using the Interplanetary Filesystem.

          #!/bin/bash
          # Requires IPFS (ipfs.io)
          hash=$(ipfs add -wq *.obj *.mtl | tail -n1)
          for filename in *.obj
          do
          cat << EOF > "${filename%.*}.txt"
          <FireBoxRoom>
          <Assets>
          <AssetObject id="body" mtl="http://ipfs.io/ipfs/$hash/${filename%.*}.mtl" src="http://ipfs.io/ipfs/$hash/${filename%.*}.obj" />
          <AssetObject id="head" mtl="" src="" />
          </Assets>
          <Room>
          <Ghost id="${filename%.*}" scale="1 1 1" lighting="false" body_id="body" anim_id="type" userid_pos="0 0.5 0" cull_face="none" />
          </Room>
          </FireBoxRoom>
          EOF
          done
          

          The one-liner version of this will output an IPFS hash that contains the userid.txt files matching the Imgur filenames.

          python3 imget.py TYmvd; sh gen_avatar.sh; ipfs add -wq *.obj *.mtl *.txt | tail -n1
          

          QmRBaAxLsHmG5VsGH8GVjQrxbatxEJ9R53Zfs5G4oivCY8

          If you have any issues make sure that you have all dependencies met and your API keys are in the imget.py file.

          Replace the contents of your userid.txt file with the generated one to change your avatar.
          Your avatar should load up SUPER fast!! ( ´・ ω ・ ` ) Enjoy your dank memes ( ´・ ω ・ ` )
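          For example, on Linux (path from above), the swap might look like this; "myhead.txt" is a hypothetical output of gen_avatar.sh:

```shell
# Back up the current avatar, then install a generated one.
# Path is the Linux location from above; myhead.txt is a hypothetical
# avatar file produced by gen_avatar.sh.
janus="$HOME/.local/share/janusvr"
cp "$janus/userid.txt" "$janus/userid.txt.bak"
cp myhead.txt "$janus/userid.txt"
```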


          /* Updates */

          The next feature that needed to be implemented was an image gallery that links to the generated avatar files. You can also tweak it to create an image gallery of the Imgur album.

          #!/bin/bash

          hash=$(ipfs add -wq *.obj *.mtl *.txt | tail -n1)

          echo "<html>"
          echo "<body>"
          echo "<ul>"
          while IFS='' read -r line || [ -n "$line" ]; do
              base="${line##*/}"
              echo "<li><a href='$hash/${base%.*}.txt'><img src='$line'></a>"
          done < "$1"
          echo "</ul>"
          echo "</body>"
          echo "</html>"

          In the next update I tweaked the imget.py script to print out the image names so that they can be easily piped to a file and downloaded for offline usage. The following commands convert the Imgur album into avatar files that sit behind a front-end preview on IPFS:

          # Generate 3D files for Imgur and save image links to a file.
          python3 imget.py TYmvd > images.txt
          # Next, generate avatar files and front end and publish to IPFS
          sh gen_avatar.sh; sh gen_preview.sh > list.html 
          ipfs add -wq *.txt *.obj *.mtl *.html | tail -n1
          

          https://ipfs.io/ipfs/QmRBrDYFqVnvda76XDgVdvuDWwv1PaUe74UQZ76YYKNE58/

          Finally, I forked the script to translate albums into virtual reality websites based on a given template. Here's the first version for converting into a JanusWeb site: https://gitlab.com/alusion/imgur-ipfs-avatars/raw/master/gen_vr.sh


          Converting a list of albums for Imgur

          The next thing I wanted my program to do was convert the many albums I had categorized into avatars and also generate a front-end for easy selection. I wrote out the steps for those who wish to follow in their own lab. Improvements and suggestions are welcome.

          The first requirement is to make a list of the imgur album tags collected in a file like this: http://sprunge.us/WXWA and to save this file as albums (no extension).

          # Convert list of albums into avatars + HTML front-end for selection
          for f in $(cat albums); do mkdir $f; python3 imget.py $f > images.txt; sh gen_avatar.sh && sh gen_preview.sh > list.html; ipfs add -wq *.txt *.obj *.mtl *.html | tail -n1 >> hashes; mv *.obj *.mtl *.txt *.html $f/; done

          Have a lot of albums and hashes?...


      • Spicy Reality

        alusion, 10/09/2016 at 04:29

        2015

        https://hackaday.io/project/5077/log/25989-the-vr-art-gallery-pt-1

        https://hackaday.io/project/5077/log/27625-sublime

        Many virtual art gallery applications seem uninspired and taste bland.

        I wanted to create an artificial art gallery with neural networks that make their own art: https://hackaday.io/project/5077/log/25989-the-vr-art-gallery-pt-1

        The ideas evolved from painting images in the WiFi to stepping inside the painting and connecting with others. A VR web browser can distinguish each sphere from different file/web servers and represent them in playful ways. This site is even unlocked and editable, functionality that most immersive digital content lacks. VR is as much a creation tool as it is a consumer device, and being able to edit the source from within the simulation is a key component for being able to bootstrap the Metaverse.

        2016

        Let's enhance.

        First we need to spice up these textures: http://imgur.com/a/ByuFw

        Using a combination of style transfer and image super-resolution, you can serve the textures plain, without any seasonings:

        Preview in VR:

        But if you're like me, you like to take things up a notch. I styled in some Van Gogh and palette knife layers in the neural texture oven for 5 minutes. The scan looks much better when you import it into a game engine. Further optimization is very possible. What if it were possible to track a person's movement or gaze on the 2D texture map and foveate rendering there to eliminate pixels and enhance? I think there are no limits with spicy reality.
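        The "neural texture oven" step can be sketched with jcjohnson's neural-style (Torch); the flags are that project's documented ones, but the filenames here are hypothetical and this is only one way to run the style transfer described above:

```shell
# Style-transfer a scan's 2D texture map (hypothetical filenames).
content="house_texture.jpg"        # texture map exported from the scan
style="van_gogh.jpg"               # style source image, e.g. a Van Gogh
out="${content%.*}_styled.png"
th neural_style.lua -content_image "$content" -style_image "$style" \
   -output_image "$out" -image_size 1024 -num_iterations 500
```

The styled texture then replaces the original in the scan's mtl file.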

        Full album: http://imgur.com/a/epTLc

        VIEW ROOM IN WEBVR: https://ipfs.io/ipfs/QmTgEDMh611WJPC8YhPAAqefnvGPDbDhm1eQTVAtYdkQRL/

        **UPDATE: New WebVR art Gallery here: https://kool.website/bedbath/

        As long as your movement is being precisely tracked, you can render real-time effects into mixed reality by applying the shader origin to where a person's gaze rests and then switching states. You can kind of see where this can go with LSD for Hololens: https://fat.gfycat.com/HopefulGenuineBarasinga.webm

        It'd be a cool sequence to tap into the GPU when sitting idle and have it wake up into a dream state of where you last left off.

        It's not enough to look around and just watch a 360 video anymore; in here you can be social and explore an interactive 3D environment. Combine the power of the browser with a gaming engine written in JavaScript and the entire world becomes a massively multiplayer online video game anyone can be part of.

      • Trashmogrified

        alusion, 09/19/2016 at 02:48

        3D Scanning Objects with a Smartphone +123D Catch

        Transmogrify: transform, especially in a surprising or magical manner.

        The future of everything is 3D and you need to get on that right now. The first step is to download 123D Catch from the app store and make an account. Detailed and easy instructions can be found on the official how-to page here: http://www.123dapp.com/howto/catch which I'll summarize briefly. First, open the app and swipe right to start a new capture. Follow the on-screen instructions and take pictures while maintaining a careful distance between you and the subject matter to fill in the graph. http://imgur.com/a/yJQlj

        When you have 9+ pictures taken, you can upload your photos and Autodesk's cloud will use the positional metadata attached to the source images to generate the 3D model for you. The result is a downloadable 3D object with high resolution textures (and a high poly count) that you can optimize with Blender for use in your projects.

        Search for models using the 123D catch gallery or download your own from the website and decimate them to a lower polygon count using Blender.

        Trash World

        Over time, I collected and organized over 50 photogrammetry scans from 123D Catch into a list of the model names adjoined to the IPFS links containing the data files and FireBoxRoom templates [https://u.teknik.io/ljywy.mp4]. It's basically the barebones of an IPFS inventory system without the GUI and makes for a quick and dirty palette. The majority of the photogrammetry scans in my palette literally consist of garbage, because I started to notice a pattern in many VR experiences that often feel sterile of the chaos that reality is littered with.
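        That barebones inventory can be sketched as a loop over scan folders; the scans/ directory layout here is a hypothetical example, not the one from the video:

```shell
# Build a plain-text inventory: each line pairs a model name with the IPFS
# link holding its data files. Assumes one folder per scan under scans/
# (hypothetical layout).
for d in scans/*/; do
    name=$(basename "$d")
    hash=$(ipfs add -rq "$d" | tail -n1)
    echo "$name http://ipfs.io/ipfs/$hash" >> inventory.txt
done
```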

        Trash is pretty interesting subject matter when seen as a carrier of entropy, and it became a common ingredient in a series that would explore collaging these scans together with other media. I started turning scans into avatars and ghosts so that users can become a trash can or transform into an elaborate scan of an environment that people would hang out in. http://imgur.com/a/4CAoG


        Street Art Galleries

        https://hackaday.io/project/5077/log/25989-the-vr-art-gallery-pt-1

        The background of this image was made by chronologically ordering images that were carved out of a packet capture of public WiFi onto a page. This virtual stained glass piece is meant to be viewed in VR because of its scale.

        I've always held a strong interest in street art and what it stands for. Moving out to LA has exposed me to a crazy amount of inspiring art on a daily basis. I've been creating more WiFi glitch art with neural networks, this time using more powerful algorithms and better hardware. The extracted images are further processed with OpenCV facial detection to sort images destined for portraiture away from the rest.


        Link to generated layers

        The new pieces would need a new kind of gallery. I started to play with the photogrammetry scans and put together a new creation each day. My first attempts became an alleyway and ash bowl gallery that I linked together:

        Link to high resolution video: https://u.teknik.io/PNKQj.mp4

        Link to high resolution video: https://u.teknik.io/uXuVA.mp4

        360 preview: https://ipfs.io/ipfs/QmYcV2k4xKbPW8Q2sfAHXwbtdD8beTyLUE9Zu5EZrTyenW/

        I started to experiment with outdoor scans in order to texture the environment with generated art. The larger gallery space provides a good space to visualize the ideal scale that I desire to make art with.


        Link to album: http://imgur.com/a/z72uX


        Degentrification

        I began to seek nostalgic video game levels for gallery space, ripping out models from classic N64 games. Growing up, I remember being completely immersed in playing the Zelda games; I did every quest and unlocked every secret. That feeling of coming home, popping in the game cartridge, and hearing the intro was the best feeling. I grabbed game files from models-resource and started to add my own decorations to places like Hyrule courtyard:

        Link to full album: http://imgur.com/a/nhN8I...


      • Matterport + WebVR DevOps

        alusion, 09/18/2016 at 19:16



        Matterport is a camera that automates the process of high resolution reality capture. It is reasonably priced, with the 3D cameras costing $4500 plus a monthly/yearly cloud processing fee. I think the speed and quality of the system make this a good investment for professionals, some of whom may be local if you're interested in commissioning a scan. There's an excellent community of 3D/VR/360 photographers and agents over at we-get-around. From there I was able to find a most excellent sample to use: a gigantic mansion.

        Visualizing the Matterport scan is easy with Janus, since these cameras have an option to export to obj files; from there it's just a matter of opening a portal to the files in JanusVR with a URL like file:///path/to/files/.

        Converting to JanusWeb is easy enough that you can make your own site in minutes. On the site you want to use for your landing page, press Ctrl+S to copy the source code of the Janus site to your clipboard. You can then download prebuilt JanusWeb files from here and unzip the contents somewhere. Then open the index.html and paste what's in between the <FireBoxRoom> markup into the comment section of index.html.

        <html>     
        <head>     
        <title>Janus</title> 
        </head>    
        <body>     
        <!--
        
        <<< PASTE YOUR FIREBOXCODE HERE >>>
         
        -->        
        <script src="janusweb.js"></script> 
        <script>elation.janusweb.init({url: document.location.href})</script>
        </body>    
        </html>

        The VR site is ready to be viewed from any web browser. WebVR support is coming to modern browsers such as Firefox Nightly, Microsoft Edge, and Chrome. Check out the builds at webvr.info and follow the instructions to get your headset working from the browser and join the awesome community on IRC/Slack.

        Of course, the project gets much more interesting with Janus and IPFS. For one, it can make the applications lighter by distributing assets amongst the swarm. Example code looks like this:

        <FireBoxRoom> 
        <Assets> 
        <AssetObject id="scan" src="http://ipfs.io/ipfs/QmSg5kmzPaoWsujT27rveHJUxYjM6iX8DWEfkvMTQYioTb/house.obj.gz" mtl="http://ipfs.io/ipfs/QmSg5kmzPaoWsujT27rveHJUxYjM6iX8DWEfkvMTQYioTb/house.mtl" /> 
        <AssetImage id="black" src=
        tex_clamp="true" />
        </Assets> 
        <Room use_local_asset="room_plane" visible="false" pos="0 0 0" xdir="-1 0 0" ydir="0 1 0" zdir="0 0 -1" col="#191919" skybox_right_id="black" skybox_left_id="black" skybox_up_id="black" skybox_down_id="black" skybox_front_id="black" skybox_back_id="black"> 
        <Object id="scan" js_id="alusion-7-1438484330" pos="-5.8 0.043 -10.400001" xdir="0 0 -1" ydir="-1 0 0" zdir="0 1 0" lighting="false" /> 
        </Room> 
        </FireBoxRoom>
        The entire web app can also be hosted with IPFS: https://ipfs.io/ipfs/Qma87Ew1TPhdA76prrGoYopP9AJ78jWpAW31JEt6kyvrQX/

        I started to use this scan as a base. JanusVR can be seen as a visual programming environment: with every action you take manipulating objects in the scene, you are editing the DOM underneath without writing code. Some members of the Janus community took advantage of this and discovered a clever workflow for building VR sites by using Piratepad as an editor:

        Etherpad allows you to edit documents collaboratively in real time, much like a live multiplayer editor that runs in your browser. This really helps a group iterate faster and collaborate in powerful ways. You can open a portal to, or export, any version from the piratepad easily by tweaking the URL (http://piratepad.nl/ep/pad/export/P4jEp0HzwM/rev.115). You can also snapshot the 'state' of the living document as a text file that can be distributed with IPFS (QmWkM2WhqhXTjaGhJG3bdHrw7otG1tBJM4JLkHVSTTSn9i). These are screenshots taken from Janus; you can view the browser version here: https://ipfs.io/ipfs/Qma87Ew1TPhdA76prrGoYopP9AJ78jWpAW31JEt6kyvrQX/#janus.url=http://piratepad.nl/ep/pad/export/P4jEp0HzwM/rev.115 (I will install seed servers for IPFS later; this is still very experimental)
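        For instance, snapshotting revision 115 of the pad above and distributing it could look like this (the URL pattern is the one from the log; the output filename is arbitrary):

```shell
# Fetch a specific pad revision as text and add it to IPFS.
pad="http://piratepad.nl/ep/pad/export/P4jEp0HzwM"
rev=115
wget -q -O room.txt "$pad/rev.$rev"
ipfs add -q room.txt    # prints the snapshot's content hash
```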

        This is a super badass setup because as you make changes to the room, the document will auto-save...


      • River Phoenix

        alusion, 07/16/2016 at 07:06

        Making Edgy Art with Neural Networks

        Links to previous posts:

        https://hackaday.io/project/5077/log/25989-the-vr-art-gallery-pt-1

        https://hackaday.io/project/5077/log/31529-generative-networks

        https://hackaday.io/project/5077/log/32774-generative-networks-pt-2

        I used http://carpedm20.github.io/faces/ to generate faces from neural networks for the particles in this scene. More than 100K images were crawled from online communities and cropped using OpenFace, a face recognition framework. The code can be found here: https://github.com/carpedm20/DCGAN-tensorflow

        http://www.kussmaul.net/gifkr.html

        https://github.com/awentzonline/image-analogies

        Clouds: http://imgur.com/a/6nkQg

        Basquiat: http://imgur.com/a/DTKd7

        Pallet: http://imgur.com/a/IRGD3

        Link to album: http://imgur.com/a/7eHaT


        Metaverse Inventory System

        The JanusVR UI is an HTML webpage that anybody can edit. Recently a community member created a badass inventory system that uses a js-ipfs backend for uploading files to IPFS and storing the hashes in one's inventory. This system is a game changer, allowing anybody to easily upload anything or grab objects from other people's sites and take them with them. See it in action in the video below:

        It is really easy to set up IPFS on Windows now: just download the binary from https://ipfs.io/docs/install/ and extract it to your downloads folder. Next, it is useful to add the path to the binary to your environment variables on Windows so that you can start the daemon more easily from anywhere. I wrote out the steps to do that here: http://imgur.com/a/ejvzd

        After these steps, you can open a PowerShell or command prompt, type ipfs init then ipfs daemon, and begin to use IPFS on your local system as a full node.

        I quickly made a gallery by adding the folder of heads to IPFS, then dragging and dropping all the links from the websurface. Janus automatically creates the markup of the page with localhost links on all of the assets (since I requested it via localhost:8080/ipfs/<hash>). I copied the source code and then hashed the text file; now I can visit the VR site from any computer running IPFS and have it load as fast as the medium of our p2p connection allows.

        heads.html QmVTUr9vRv53MxbrnHGvvDHuUeTK2f2FgHcbRAMsxt4Uxc

        To add an object to your inventory, select the object (middle click) and then click the icon on the bottom left with the hand over the circle (2nd from left). You can create folders to organize things more nicely. The current process behind the scenes is a proof of concept and has a ways to go toward a proper implementation, but it works: you can bring your assets (objects / images / sounds) with you anywhere else around the metaverse, all while exploring and collecting more assets from other sites you visit.

        The inventory is so much more powerful and fun when you get together with a few of your friends. Below is a video, sped up 4x, of 4 of my friends who also have the IPFS inventory UI world building in the JanusVR sandbox room. You can see how in 10 minutes we were able to create an entire map.

        Link to IPFS Inventory UI (extract to Janus/assets/2dui/)

        Building in VR album

        Link to room in JanusVR

        http://imgur.com/a/a8GRU

      • Data Flow

        alusion, 07/16/2016 at 04:09

        "One of the things our grandchildren will find quaintest about us is that we distinguish the digital from the real." - William Gibson

        The Metaverse will be a place that billions will use and spend the majority of their time within, and thus it requires ethical consideration of the many technological layers comprising its infrastructure. The current web is utterly broken -- the current model is riddled with holes and layers of ownership, with choke points that allow for surveillance and exploitation at a massive scale. The plan to make the web great again is to decentralize everything and build a web of trust using P2P technology. Let's look at the current model, beginning at the physical layer of the Internet: the undersea pipes that connect our world together.

        This graphic represents the collection points where data gets split between the original destination and the surveillance machine. We're already owned just by using the Internet.

        Never before has one side known so much about us while so little is known about them. Thanks to Snowden, we have seen the shape of what lies on the other side of the one-way mirror.

        The size and value of this data will continue to grow exponentially, as will our reliance on such technologies for the conveniences they bring. All of the world's data was generated in the past 12 months. Ownership of that data in the information age is complicated: among the many layers that connect our world together, we are tied to the feudal lords who lay the pipes and provide the services, and they are the ones who monetize our data the most.

        Lightbeam is a Firefox add-on that enables you to see the sites you visit and the third party sites you interact with on the Web. Browsing Reddit for 5 minutes, I visited 11 sites and had 215 third party sites tracking me. Could you imagine the amount of data that is being gathered weekly? Multiply that a million times over and you'll begin to understand how much data is being collected, stored, and analyzed by these huge companies -- some of which you have certainly never heard of.

        A heavy amount of tracking information is necessary in VR to create the illusion of presence, so consider carefully whom you let inside your mind. Information doesn't always flow both ways. With the advent of any new technology come those who wish to control it. We have seen this before as the media giants lobby for new bills in attempts to tighten their grip on the flow of information across the internet. The basis of the advertising model is control; understanding and influencing behavior is at its core.

        Jaron Lanier, widely regarded as the father of VR, is deeply worried about how both the market for VR and the technology itself are developing. In particular, he's concerned that virtual reality technology will put even more power in the hands of a very small number of already powerful companies.

        http://www.siliconbeat.com/2016/05/24/wolverton-vrs-father-worried-about-technologys-future/

        The inventors of the Internet and the World Wide Web are also concerned about the imbalance, and they recently gathered at the Internet Archive in SF for the first summit dedicated to discussing ways to decentralize the web using peer-to-peer technology. Of course I was there, absorbing the information and brainstorming solutions compatible with our mediated-reality future.

        Building Blocks for a Decentralized Web 3.0

        We live in exponential times and things are certainly changing. We now carry computers in our pockets that are a thousand times faster, and JavaScript lets us run sophisticated code in the browser. Strong cryptography and public-key encryption systems, export-restricted in the early '90s, are now used for authentication and privacy, keeping communications and transactions safe in transit.

        Finally, blockchain technology has proven that we can build a global database with no central point of control.

        http://www.wired.com/2016/06/inventors-internet-trying-build-truly-permanent-web/...


      • Interplanetary Metaverse

        alusion | 04/23/2016 at 04:02 | 1 comment

          It's very easy to use IPFS to start building decentralized metaverse apps in JanusVR. All you need to do is download IPFS from https://ipfs.io/docs/install/ and follow the steps to initialize it. Here's how to add files that you can drag and drop into your Janus app:

          1. Go to the folder with the assets you wish to load in
          2. Open the command prompt in the current directory and type ipfs add -r .
          3. The last hash represents the root folder, copy that and load up JanusVR.
          4. Press escape and open the web browser
          5. Open contents with http://ipfs.io/ipfs/<yourhash> which will cache the files through the main IPFS gateways. Your assets will be online instantly
          6. Ctrl-click and drag the objects or images out of the web page into your room. That's it!
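          As a sketch of step 3, the root hash is simply the last hash that `ipfs add -r .` prints, so it can be pulled out with standard shell tools. The hashes and filenames below are made up for illustration; in practice you would pipe the real command instead of `printf`:

```shell
# Simulated `ipfs add -r .` output; the real command prints one
# "added <hash> <name>" line per file, with the root folder LAST.
printf '%s\n' \
  'added QmHashOfModelAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA model.obj' \
  'added QmHashOfTextureBBBBBBBBBBBBBBBBBBBBBBBBBBBB texture.png' \
  'added QmHashOfRootFolderCCCCCCCCCCCCCCCCCCCCCCCCC assets' |
awk 'END { print $2 }'   # keep only the last line's hash (the root)
```

          Paste the printed hash into http://ipfs.io/ipfs/&lt;yourhash&gt; as in step 5.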

          For building rooms, I find that having an inventory list of all the hashes helps to speed things up.

          Once you drop your asset into a room with IPFS, you can use JanusVR's built-in code editor to do really cool stuff and preview the changes live with the update button!

          Finally, you can also use IPFS to bundle the entire application and set a portal that links to it. This lets you instantly publish your creations, censorship-free, without dealing with one of the many walled gardens such as Google Play or the Oculus Store. The hash for IPFS racing is here: QmSrHBJUaXYe3oJt1rdTDDGvUYC3AuwAfigHnvsqTtbDhc. The Metaverse should be kept free and open: bring power back to the users and decentralize all the things! No one owns the internet. Here's how it looks in Janus versus JanusWeb (transparency working).

          http://janusweb.metacade.com/#janus.url=https://ipfs.io/ipfs/QmcXxDzfjpwn7sx4d5TuNakhQDCnWEEvoDPvpNn9YRCN63/ipfs_dank2.html

          Rooms load fine on mobile web browsers and VOIP also works, but controls are lacking (I'll test a Bluetooth controller in the future).

          View progress in JanusVR: http://alusion.net/lab.html

          View in JanusWeb (Browser): http://janusweb.metacade.com/#janus.url=http://alusion.net/lab.html

      • The Wired

        alusion | 04/19/2016 at 01:18 | 0 comments

        32c3: Gated Communities

        UPDATE: Visit the 32c3 WebVR gallery here: https://kool.website/32c3/

        I periodically reflect on my first time attending Europe's largest hacker conference. The Chaos Communication Congress is an annual four-day conference on technology, society, and utopia, now heading into its 33rd year. I planned my trip in advance with the goal of contributing some amazing virtual art to the congress. The response was overwhelmingly positive, and it felt great to be among some of the most brilliant minds in the world. It's an experience I'll never forget.

        During CCC I sampled various bits around the congress and created my own mini congress inside AVALON.

        https://ipfs.io/ipfs/QmXwG57htZagbFUHdJsnyB9vuqkva2mfbFq4LiQhetKBq7/

        Successful test with PirateBox on GL.iNet routers. AVALON nodes configured in a mesh network.

        A congress inside the congress: multiple people can connect to the AVALON mesh network and congregate inside a private virtual world. You can use Avahi to browse for services being broadcast on the local network; the command-line tool shows mesh-connected nodes and their IPv6 addresses. With Janus, you can then just open a portal to that room.
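        A hedged sketch of that discovery step: `avahi-browse --all --resolve --terminate` lists the services advertised on the local network, and each node's address can be pulled from its "address = [...]" lines. The sample output below is made up for illustration (the service name and IPv6 address are hypothetical):

```shell
# Simulated avahi-browse output; the real command would be:
#   avahi-browse --all --resolve --terminate
# Extract each resolved node's address from the "address = [...]" line.
printf '%s\n' \
  '= wlan0 IPv6 avalon-node-1 _workstation._tcp local' \
  '   hostname = [avalon-node-1.local]' \
  '   address = [fe80::21b:77ff:fe49:54fd]' |
sed -n 's/.*address = \[\(.*\)\]/\1/p'
```

        With an IPv6 address in hand, you can open a Janus portal pointing at that node.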

        http://janusweb.metacade.com/#janus.url=https://ipfs.io/ipfs/QmXwG57htZagbFUHdJsnyB9vuqkva2mfbFq4LiQhetKBq7/


        Interplanetary Scale

        Using IPFS, we can build a global distributed supercomputer to upgrade the internet to a metaverse.

        The goal of IPFS (InterPlanetary File System) is to connect all devices with the same system of files -- like the web plus Git and BitTorrent. In some ways, this is similar to the original aims of the Web. It creates a P2P swarm for exchanging IPFS objects, with the aim of making the web faster, safer, and more open.
        https://hackaday.io/project/5077/log/18650-general-update

        HTTP is not scaling with how we use the network and the web in general. In terms of how websites store data on the internet, HTTP is quite brittle. We need a protocol that lets us reason about how data moves, with links that carry the properties needed to check integrity through cryptographic proofs.

        The core problem with HTTP is that it is location-addressed: an IP address or domain name points at the computer that serves the information you're requesting, which becomes a problem when those resources are unreachable. Websites have short lifespans, and links go stale or break. (The Internet Archive has been making backups; IPFS can make this automatic.)

        IPFS enables websites that are completely distributed, have no origin server, and can run in client-side browsers without any service to talk to.
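        The difference between location addressing and content addressing can be sketched with plain shell tools: a content address is derived from the bytes themselves, so identical content always gets the same name and any change gets a new one. (This is a simplification; real IPFS CIDs use multihash encoding rather than a raw SHA-256 digest.)

```shell
# Same bytes -> same address; changed bytes -> different address.
printf 'hello metaverse'  | sha256sum | awk '{print $1}'
printf 'hello metaverse'  | sha256sum | awk '{print $1}'   # identical to line 1
printf 'hello metaverse!' | sha256sum | awk '{print $1}'   # differs
```

        Because the name is the hash, anyone who fetches the content from any peer can verify it matches the address, with no trusted server required.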

        Read this excellent comparison of HTTP versus IPFS here:
        https://ipfs.io/ipfs/QmNhFJjGcMPqpuYfxL62VVB9528NXqDNMFXiqN5bgFYiZ1/its-time-for-the-permanent-web.html

        Quick Summary of IPFS

        libp2p is the guts of the IPFS network

        IPFS is a protocol:

        • defines a content-addressed file system
        • coordinates content delivery
        • combines Kademlia + BitTorrent + Git

        IPFS is a filesystem:

        • has directories and files
        • mountable filesystem (via FUSE)

        IPFS is a web:

        • can be used to view documents like the web
        • files accessible via HTTP at http://ipfs.io/<path>
        • browsers or extensions can learn to use ipfs:// directly
        • hash-addressed content guarantees authenticity

        IPFS is modular:

        • connection layer over any network protocol
        • routing layer
        • uses a routing layer DHT (kademlia/coral)
        • uses a path-based naming service
        • uses bittorrent-inspired block exchange

        IPFS uses crypto:

        • cryptographic-hash content addressing
        • block-level deduplication
        • file integrity + versioning
        • filesystem-level encryption + signing support

        IPFS is p2p:

        • worldwide peer-to-peer file transfers
        • completely decentralized architecture
        • no central point of failure

        IPFS is a CDN:

        • add a file to the filesystem locally, and it's now available to the world
        • caching-friendly (content-hash naming)
        • bittorrent-based bandwidth distribution

        IPFS has a name service:

        • IPNS, an SFS inspired name system
        • global namespace based on PKI
        • serves to build trust chains
        • compatible with other NSes
        • can map DNS, .onion,...
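        The block-level deduplication property in the summary above can be sketched in shell: identical chunks hash to the same block ID, so a content-addressed store only keeps one copy. The chunk contents and file names below are illustrative:

```shell
dir=$(mktemp -d)
printf 'chunkA' > "$dir/b1"   # three chunks, two of them identical
printf 'chunkB' > "$dir/b2"
printf 'chunkA' > "$dir/b3"
# Hash every chunk and count the unique block IDs: b1 and b3 collide
# on purpose, so only two distinct blocks would need to be stored.
sha256sum "$dir/b1" "$dir/b2" "$dir/b3" | awk '{print $1}' | sort -u | wc -l
rm -r "$dir"
```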

      View all 44 project logs
