
T^2 TyMist [gd0138]

I need some virtual desktop monitors that offer 90+PPD at 90+Hz.

I had high hopes for the yet-to-be-released Pimax TWELVE K, but apparently, a PPD of 35 means that I can't use it as a monitor replacement. All the recent AR glasses (wearable HUDs) have a PPD of 49 - 55, but that's still kind of low, and 1080p isn't enough pixels for me to be the most productive, especially considering they're landscape-only. The Moverio BT-40 is 65 PPD, but there are reports of the device overheating and shutting down.

My current proposed solution is to use square displays and beamsplitter optics for (primarily) 4:3 landscape or portrait virtual monitors, and PDLC (aka smart glass) film around my vision that electronically changes from transparent to a "Windows 11 Mica material" level of mist.

Tech... squares... transparency and mist...
TechnologySquared TransparencyMist.

I also want purified air and virtual surround sound, so I'm packaging it all in a ~300mm cylinder.

[09 Nov 22]

From trying different screens, I can say that:

  • 70ppd looks pixelated and annoying. Must avoid.
  • 80ppd is on the edge of looking pixelated. I'd like to avoid it because I'm always going to be squinting, thinking "can I see pixels?".
  • 90ppd is a nice balance of sharpness and Windows GUI at 100% scaling. 

So it seems that, extrapolating, 100ppd would either be ultra sharp or just indistinguishable from 90, and that increments of 10ppd make a very noticeable difference until reaching my eye's resolution, so I'd rather not imagine what the Nreal Air and its 50ppd looks like.

[02 Nov 22]

I will admit that a notable reason why I'm starting this project is because of some issues with #Teti [gd0022] and its 3 screens. I've had them since Dec 2020, so I know of the challenges and limitations, which are:

  • There are only 3 screens. I can store extras away when I don't need them, but if I'm really in the zone, programming something mystical like #enSweepen [gd0096], I might want 4 or 5. The only solution would be to move closer to the screens and reduce the scaling percentages of them all to fit more content.
  • They weigh 800 grams each. That's 2.4kg of screens to be carried inside #TetInventory [gd0039] and somehow held in place by a hinge on Teti.
  • 9:16 is a very tall aspect ratio for portrait mode viewing.
  • I don't like screen light coming in via my peripheral vision. It's a leading reason why I've gone with a stacked monitor setup for Teti.
  • They have different white balances. I didn't expect this because they look fine by themselves, but when I turn on all 3, I can see that one is slightly bluer than another.
  • They're small. 15.6" is a decent size, but I've longed for a 17.3" or larger screen.
  • They're large. I know what I just said a bullet point ago, but there's still a lot of glass surface area that might be impacted or scratched one day. It's also pretty tough to hold one handed since glass doesn't have a suitable level of grip.
  • They're a bit of a "black box". If they break (I've already got a stuck green pixel on one), I wouldn't really know if I could get parts to fix it or have the skill to not break something else while doing so. The firmware on them also has this blue "no signal" screen that has annoyed me for the 22 months I've had the displays.
  • They're 60Hz. I'm glad I went resolution over fps back in 2020, but I'd very much prefer higher Hz.
  • There's no privacy. I can only imagine how curiosity-inducing I'd look whilst sitting somewhere with a complete 3-monitor PC setup. Additionally, IPS panels have great viewing angles, so the screens are readable from pretty much any angle in the 170-degree arc behind me.
  • Speaking of visibility, they're glossy, meaning that anything behind me that's not dark-coloured is going to reflect off the screen.
  • I need something to place the screens onto, which limits the comfortable viewing positions possible.
  • Tetent might not be ergonomically compatible with the dual screen laptop configuration idea.

TyMist can solve all these problems (and others, such as protecting my eyes from bright light sources), and a solution currently seems less complex to obtain than I originally estimated.

  • I can have many more than 3 screens, and I won't even be limited by the minimum viewing angle imposed by the length/width of a physical screen. I could have 5 screens, yet my neck only needs to sweep the angle of 2.
  • I fully expect that a HMD isn't going to be 2.4kg in weight.
  • 3:4 is a comfortable aspect ratio for portrait mode.
  • Only the active screen and position markers for windows on other screens will be visible at a time, so the only peripheral vision light should be from the environment.
  • I might still have issues with colour uniformity, since the current solution uses 4 separate panels. At least it'll be consistent between virtual monitors.
  • It's large. I expect a 170" desktop at 6m away, which would be a screen 2.5m high and 3.4m wide (sanity-checked in the sketch after this list).
  • It's small, so would be easier...
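Quick sanity check on that 170" figure, assuming roughly a 32-degree horizontal FOV (in line with the FOV targets in the logs below) and a 4:3 monitor; the FOV and aspect here are my assumptions, not fixed specs:

```python
import math

distance_m = 6.0
h_fov_deg = 32.0   # assumed horizontal field of view
aspect = 3 / 4     # 4:3 virtual monitor -> height/width

width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg / 2))
height_m = width_m * aspect
diag_in = math.hypot(width_m, height_m) / 0.0254

print(f"{width_m:.2f} m wide x {height_m:.2f} m high, {diag_in:.0f} inch diagonal")
# -> roughly 3.44 m x 2.58 m, ~169 inch diagonal
```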

  • [T] Black+White PDLC Squares

    kelvinA, 5 hours ago, 0 comments

    I'm cleaning up my AliExpress basket and I get an idea:

    Double down and get even more squares into this T^2 tech. 

    30cm * 3.14 / 2 = 47cm, which is 2cm longer than 3 * 15cm. I can have the 3 white ones on the front of the cylinder tube and the 3 black ones on the rear of the tube, where there's a mouth cutout. (I haven't modelled it in yet, but I'm going to keep most of the acrylic tube intact and just cut a browser-tab-shaped cutout so that I can eat.)

    The reason for putting the black PDLC on the cutout side is simple: air quality is less of a priority when I'm likely to use it. I'm less likely to be doing intellectual work and more likely trying to get an IMAX cinema vibe going if I desire a black PDLC behind the virtual monitor. If I am doing intellectual work and still desire a black PDLC, it's likely because I'm working in the fresh outdoors (away from pollution sources) and want to improve contrast, since the outside world is much brighter.

    With 1 square in front and 2 at the sides, I should gain some quality-of-life improvements.

    The main one is an animation that turns on the centre PDLC and then turns on the two side PDLCs 0.33 seconds later. (Remember, turning a PDLC on makes it transparent.) This animation would also run in reverse, and it makes switching from full transparency to full mist and back much more comfortable than suddenly switching one large PDLC. My instant thought was "Wow, that's certainly some Iron Man Armoured Adventures aesthetic!", but all I can remember is a void of darkness whenever the camera was looking at the suit wearer. Honestly, the only instance I can think of where I've seen this UI/UX is in Total Fantasy Knockout, where the control room metasurface of spacelike sparkles opens up to a view outside:

    You might also understand where the design inspiration of #Tetent TestCut [gd0139] probably came from.
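    Back to the staggered animation: here's a very rough sketch of the sequencing, just to pin down the timing idea. drive_pdlc() and the panel names are placeholders for whatever ends up switching each film's AC supply, not real driver code.

    ```python
    import time

    STAGGER_S = 0.33  # delay between the centre film and the two side films

    def drive_pdlc(panel: str, powered: bool) -> None:
        # Hypothetical placeholder for the real switching hardware (relay/SSR etc.).
        # A powered PDLC film goes transparent; unpowered, it returns to mist.
        print(f"{panel}: {'clear' if powered else 'mist'}")

    def animate_front(clear: bool) -> None:
        # Going clear: centre first, sides 0.33 s later.
        # Going back to mist: the same animation in reverse order.
        steps = [["centre"], ["left", "right"]]
        if not clear:
            steps.reverse()
        for group in steps:
            for panel in group:
                drive_pdlc(panel, powered=clear)
            time.sleep(STAGGER_S)

    animate_front(clear=True)   # reveal the outside world
    animate_front(clear=False)  # fade back to mist
    ```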

    I'd also have the option to mistify only the front PDLC for contrast and keep my peripheral vision transparent, or do things the other way around to feel like I'm in some frosted-glass office of the future (and hide distractions). I could also run a slow "Mexican wave" animation to get the Thunderbird 5 rotation vibe I mentioned in a log a while ago, without physically moving the acrylic tube. Like the video below, but rotated 90 degrees, way slower and with much thicker strips:

    Oh and I just noticed that the acronym is PDLC not PLDC. Now I have to be on the lookout for that typo.

    To conclude, I think this white and black PDLC film idea is a good solution to the problem of bright light sources behind the screen. A polarising light gate (a 1-pixel LCD with no backlight) would only work for flat surfaces, wouldn't cover all of my vision, blocks 50% of all light even in its most transparent state, and only comes in specific sizes. Dimming electrochromic film takes a while to switch (which isn't really a problem in this application), but I can't seem to find any on AliExpress (which is a serious problem in this application).

    PDLC seems to have a number of benefits:

    source

    Seems that full transparency requires 4W/m² * 0.45m * 0.15m = 0.27W of power. I also like the sound of the safety aspect, as I don't want acrylic flying if I trip and fall. Putting the film on the inside might also be a consideration, since 29cm * 3.14 / 2 = 45.5cm is even closer to the length I need, so the film could be completely gap-free all around, and it would reduce the film's exposure to the elements. Yeah, let's go with that plan instead.
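    Putting this log's arithmetic in one place (the 4 W/m² figure is my reading of the listing's "4W" spec, so treat it as an assumption):

    ```python
    import math

    # Film length fit: half the tube circumference vs 3 x 15 cm squares
    outer_half_cm = math.pi * 30 / 2   # ~47.1 cm on the 300 mm OD
    inner_half_cm = math.pi * 29 / 2   # ~45.6 cm on the ~290 mm ID
    print(f"outer: {outer_half_cm:.1f} cm, inner: {inner_half_cm:.1f} cm vs 45 cm of film")

    # Power to hold transparency, assuming the listing's 4 W is per square metre
    power_w = 4.0 * (0.45 * 0.15)      # W/m^2 x film area in m^2
    print(f"hold power: {power_w:.2f} W")   # ~0.27 W
    ```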

    Man, this is turning out to be a very cool helmet. Fairly expensive tho... but very cool.

  • [R][M] Removing the fresnel lens

    kelvinA, 13 hours ago, 0 comments

    [09:05] Considering that most VR and AR systems today... actually, basically every one I know of (even the Meta prototypes) has a PPD well under 80, I'm thinking that I should be more cautious with my optical path. This was brought to my attention when I found out about this AR lens module:

    It's also notable that modules in this industry are only up to 25% transparent. Perhaps that means I could use an R60:T40 beamsplitter instead of a 50:50 splitter.
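    To get a feel for that trade-off, here's a rough sketch of the two cases I might end up with (a single bounce off the splitter, or a birdbath-style double interaction); these are ballpark figures that ignore every other loss in the path:

    ```python
    # Display-light and see-through efficiency for a given splitter ratio.
    def single_bounce(reflect: float) -> tuple[float, float]:
        # Display reflects off the splitter once; world light transmits once.
        return reflect, 1.0 - reflect

    def birdbath(reflect: float) -> tuple[float, float]:
        # Display light meets the splitter twice (reflected once, transmitted once).
        return reflect * (1.0 - reflect), 1.0 - reflect

    for r in (0.5, 0.6):
        d1, w1 = single_bounce(r)
        d2, w2 = birdbath(r)
        print(f"R{r*100:.0f}:T{(1 - r)*100:.0f}  single-bounce: display {d1:.0%} / world {w1:.0%}"
              f"   birdbath: display {d2:.0%} / world {w2:.0%}")
    # R50:T50  single-bounce: display 50% / world 50%   birdbath: display 25% / world 50%
    # R60:T40  single-bounce: display 60% / world 40%   birdbath: display 24% / world 40%
    ```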

    Anyway, I'm still unconvinced that the fresnel lens is going to give me the optical quality I desire, so I went on AliExpress.

    Hm? A lens in the 50 - 60mm diameter range that has a usable focal length? ASPHERIC TOO!? Wow that didn't take long to find!

    Then I found a bunch of achromatic lenses that all had rather high focal lengths for this application. I also found a parabolic mirror, but the simulation didn't reflect light like I hoped it would (90 degree reflection + focusing).

    This lens looks surprisingly good in the AliExpress listing video for being 20p each. I've assumed the actual optical diameter is 34mm because it looks like there's a bit of a flat section for mounting, though there's also a YouTube vid that shows what looks to be an edge-to-edge lens:
    A solution seems to exist for 1x 60mm + 3x 34mm lenses, though I still have the unsimulatable problem of chromatic aberration to worry about. Hopefully, the longer the focal length, the less of an issue this is. It's also going to be a very tight fit with that 60mm diameter lens, and I'm using the outer reflective surface of the 20mm prism instead of TIR like in the previous optical path. (The 200-unit ruler on the mirror also implies where the glass face is.)

    [15:50] It's interesting that AliTools' historical data for both lenses only goes back 1 week. Is the reason I never found them before simply that they didn't exist until very recently? I also noticed that the lens is F45mm, not F47mm as I thought in the simulation. I doubt it'll change much. (Can confirm. I was able to move the screen forward 4mm though.)

  • [M] Optical path refinement

    kelvinA, a day ago, 0 comments

    The distance from the top of the fresnel to the top of the helmet (above the fresnel) is just over 10cm:

    That means that the optical path I had was probably going to intersect the user's head area.

    I've got this new path that has a couple of benefits:

    • There is no lens near a focal point
    • Only 2 glass lenses are needed, so that (starting from the fresnel lens), it's F40 -> F30 -> F40
    • The screen is less than 10mm behind the front of the eye.

    The steepest angle emitted from the screen is under 15 degrees, and according to this blog post (see below), the OLED brightness at a given angle is cos(angle) * 100%. The result for 15 degrees is 96.6% brightness, so I'd probably never notice. I wouldn't be surprised if I see steeper angles from normal-sized OLED panels.
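    Tabulating that cosine rule (taking the blog post's figure at face value):

    ```python
    import math

    # Off-axis OLED brightness per the cos(angle) rule quoted above
    for angle_deg in (0, 5, 10, 15, 30):
        print(f"{angle_deg:>2} deg -> {math.cos(math.radians(angle_deg)):.1%}")
    # 15 deg -> 96.6%, so the <15 deg rays from this path lose almost nothing
    ```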

    Overlaying the path, it seems that I've still got space to go up even further, perhaps to allow more head padding space.

  • [R] LetinAR's PinTILT

    kelvinA, 2 days ago, 0 comments

    I just found this animation and thought it was particularly intelligent, especially the folding animation to show that they went for a totally internally reflected path to shave even more thickness off the optics.

    It does seem that the light efficiency has been increased over the traditional birdbath, where the light gets halved twice (so only 25% makes it to the eye). Below are some of the paths the light will take:

    Since the whole system is reflective, there's no opportunity for chromatic aberrations either.

    Looking at the actual device, it kinda seems like something that might be theoretically possible to print on the #SecSavr Suspense [gd0105]:

    But that sounds like having to print stainless steel and glass (or some high temp transparent material due to the sintering of stainless steel) in the same print, along with polishing gear in the Brick of Innovation to get this kind of finish:

  • [M] Optical Path for 2560px Microdisplay

    kelvinA, 2 days ago, 0 comments

    I've hopefully got a new solution for the optical pathway, and I should have a better understanding of how things should work out.
    • For an 18.4mm active area and intending to use 4:3 virtual monitors, the visible diameter is 23mm.
    • The 55mm fresnel lens is F-30, though this and everything else would likely change because I haven't taken the user's head into consideration and the rays might be intersecting it.
    • The lens straight after that is the D42 F-40 I've already got.
    • The mirror is likely to be slightly tilted so that the optical elements after that are parallel with the top slope of the helmet, but for now I'm just using a 90 degree bend in this simulation.
    • Then there are 2 double-convex lenses that obtain the 23mm focus disk.
    • Tracing backwards from the screen, it seems that the centre of the eyebox will see a slight dimming of the edges. If the eye is closer to the edge of the eyebox, the centre of the virtual monitor may appear dimmer than the edges. OLED has some good viewing angles though so I don't think this should be a critical issue.
    • The entire optical path + screen would likely fit under 40mm from the front of the eye, which is probably on or near the ear/scalp circumference of the head.
    • With the exception of the aspheric fresnel lens, the rays mainly cross through the inner 50% of the lenses, which hopefully will reduce aberrations.
    • The second lens is exposed (it'll be partially sticking out of the inner helmet "ceiling") and very close to a focal point. This could mean that dirt on the surface of the lens would be visible.

    One possible mistake is that I'm using the 10mm eyebox. Maybe I'm supposed to use the larger 12.5mm diameter FOV circle (see below), or be using 18.4mm as the screen size in the simulation instead of 23mm. Maybe this just means the actual eyebox is 10mm * 10/12.5 = 8mm, which should still be serviceable.
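    For reference, here's where the 23mm and ~8mm figures come from, assuming 18.4mm is the side of the square active area and the 4:3 window uses its full width (that interpretation is mine):

    ```python
    import math

    side_mm = 18.4
    w, h = side_mm, side_mm * 3 / 4                     # 4:3 region inside the square panel
    print(f"visible diameter: {math.hypot(w, h):.1f} mm")   # 23.0 mm circumscribed circle

    # If the 12.5 mm FOV circle is what matters, derate the 10 mm eyebox accordingly
    print(f"derated eyebox: {10 * 10 / 12.5:.0f} mm")        # 8 mm
    ```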

  • [M] 30deg FoV potential solution

    kelvinA, 2 days ago, 0 comments

    It actually looks... kinda smart. I haven't modelled it in, but just imagine there are 2 curved sticks, maybe 5 - 10mm in diameter, on the edges of the optics to hold them:

    Ensuring that I've got >22mm of eye relief, I've angled the combiner inwards by 5 degrees so that the 55mm fresnel lens is optically centred (as opposed to a 50mm lens with very tight tolerances or an uncentred 55mm lens).

    The 60mm combiners give ample space for the FOV desired, but a 55mm one might just about work too:

    It's likely I'd get a bit of visual blur on the edges of the combiner, so 60mm is likely the visually better choice.
    This oval also looks more tame from a 3rd-person perspective. It looks more like a cleanroom lab guy than a disco kid.

  • [T] 30 degrees or less FOV

    kelvinA, 3 days ago, 0 comments

    It's really seeming like 32 degrees of FOV is right on the edge of what's possible with the basic optical path I'm currently trying. Considering that I'm supposed to be aiming for 90PPD and not 80 anyway, and that the 2560px display is the highest resolution available on the market, I'll try 30 degrees: a marginal difference that could make all the difference. This results in a PPD of around 85, which hopefully is the ideal compromise of feeling sharp and large at the same time. The 2160px panel would have a 25-degree FOV.
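    Treating PPD as simply horizontal pixels over horizontal FOV (a fair approximation at these narrow FOVs):

    ```python
    for pixels, fov_deg in ((2560, 30), (2160, 25)):
        ppd = pixels / fov_deg
        fov_for_90 = pixels / 90
        print(f"{pixels}px over {fov_deg} deg -> {ppd:.0f} PPD "
              f"(90 PPD needs <= {fov_for_90:.1f} deg)")
    # 2560px over 30 deg -> 85 PPD (90 PPD needs <= 28.4 deg)
    # 2160px over 25 deg -> 86 PPD (90 PPD needs <= 24.0 deg)
    ```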

    Unfortunately, the backlight for the 2160px panel is likely too dim for this application. I might have to see if it's possible to replace it with a more powerful LED, but that could add even more size to an already relatively large screen. Additionally, the display's contrast would be even worse than the 800:1 already rated. Obviously, the biggest issue is the size, and 2 fresnel lenses in the system seem to make for a rather blurry image, so I'd have to distance the screen to avoid vignetting. I think the allure of a £20 screen might result in the project getting thrown under the bus or pushed back months. Oh, and the £83-each driver board really doesn't help. £200 for dim screens with sub-desired resolution and undesired size isn't exactly easy to swallow. (That said, the contrast is probably fine, since there'd be 40 - 50% passthrough of light anyway, reducing contrast even for OLED.)

    I have limited time and a lot of projects, so it seems that my simple yet inescapable choice is to use the 2560px screens going forward, likely pushing the BOM to the £1,000 and up club.

  • [M] Simulating As Seen In TV

    kelvinA, 3 days ago, 0 comments

    So, by enabling "simulate colours" and aligning 2 "beam"s to the same +/- 16 degree angles (and blocking out the excess), I was able to get rays that look like the optical ray diagrams I've been seeing in AR articles, such as this one I recently came across:
    Source
    The first time I saw ray diagrams like these, they used red, green and blue colours. I originally thought it was to test for chromatic aberrations (in English: colours separating into wavelength components), because it's the same as the RGB emitted by all non-Porotech DPT screens. Then I reasoned it was just to get an understanding of FOV.

    Now, I'm estimating that this is used to see both FOV and focus, along with confirming the eyebox.

    The white line is where the eye starts. I'm going for a 10mm eyebox (the scale in these simulations is 10 units : 1mm), and the lines pointed to by the arrows mark the cone of light I care about.

    As you can see, each slightly-different-coloured set of rays is parallel. This means that the intended focus is infinitely far away. I've also used colours slightly different from RGB to prevent personal confusion.

    I think that the focus point is juuuuust behind (or in front of) the location where the rays merge together into a point. This is because the pixel isn't an infinitely small point. The rays are also used to see the distance and size of the screen when focused at infinity. This is a notable difference to where I thought this location would be. Let me add a red beam to show you what I was doing before:

    It's slightly misaligned but it's close enough for a description.

    The lime green line is a screen with the right focus but wrong size, and the red line is the 18mm 2560px display size. Looking at where the actual FOV lines are, I would've misplaced this even further back. So, technically, the screen at the red location would've taken up my entire view, but that would've only been with 1 out of the 5 rays. The other 4 rays, likely originating from a black wall or something else that isn't the screen's active area, would make the edges mostly wall. At the same time, the centre is getting light from a wide part of the screen, so all those rays would get blended too. What would this look like? A stereotypical blurry blob of light that blur-fades into darkness the closer to the edges I look.

    The below image is with a lens focal length of 80 instead of 50. In the first simulation I would've thought that this would collect parallel rays (red), but no, there's a (wider-than-the-lens) location where an item would be in focus:

    Well, I've got some lenses, so I might as well see if my hypotheses are correct or not. The below is F-30, and indeed, if I look through the lens I actually have with my eye set to infinity, it just looks like a blur at any distance:

    What about F40?

    Allegedly, if my eye is 60mm away from the lens and the image source is 40mm behind it, I should be able to see it at focus=infinity.
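    That expectation matches the thin-lens equation: a source sitting exactly at the focal plane comes out collimated, so a relaxed (infinity-focused) eye can view it from anywhere along the axis within the eyebox, which is why the 60mm eye distance shouldn't matter. A minimal check:

    ```python
    # Gaussian thin-lens equation: 1/f = 1/d_o + 1/d_i
    def image_distance(f_mm: float, object_mm: float) -> float:
        denom = 1 / f_mm - 1 / object_mm
        return float("inf") if denom == 0 else 1 / denom

    print(image_distance(40, 40))   # inf  -> source 40 mm behind an F40 lens: collimated
    print(image_distance(40, 38))   # -760 -> negative = virtual image ~0.76 m away (source just inside f)
    ```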

    Wow. I just looked using the placeholder 40mm beamsplitter prints and can indeed confirm that the focus, FOV and eyebox are probably correct. Magical. No... scientific!

  • [M][T] Door Length + Material, Reflections and Cameras

    kelvinA, 5 days ago, 1 comment

    So I got a 12" (305mm) ruler to try and imagine the kind of size this helmet would be. It feels like I could really use an AR headset to help design this HUD helmet, which might itself become an AR headset in the future as a sort of stretch goal. Surprisingly, the UK going rate of the Magic Leap 1 has plummeted to like 10 - 13% of its MSRP.

    To be honest, the price anchor of the £2000+ MSRP wore off after about 5 minutes, and the single reason is the low resolution. If I'd never heard of Magic Leap, or this was some AliExpress manufacturer, I'd be like "expectable price". I mean, it also includes the computing power, so that's why it's still in my mind.

    Moving on, it does seem that, if I'm to use the 300mm-diameter tube visor, it can't extend much past my bottom lip or else my natural neck angles will be compromised. It seems that 150mm or so of length would be ideal, but that length isn't available. Since I need to cut it anyway, I'm thinking of getting a 305mm length (that costs £107) and cutting it in half.

    I'm also considering using a perforated metal instead of acrylic. I'd have to make a jig to bend it into a tube, and it would let in less light than acrylic, but due to diffraction it might be possible to see decently well through it. The PDLC will be adhered to the inside of the bent metal so that I could actually use the metal as a face shield in the event of a trip and fall.

    I'm looking into circular beamsplitter elements and-- actually, it'll be easier to explain if I just model it.

    [about 20 minutes later...]

    Wow... I, uh, didn't expect that modelling to go so perfectly straightforwardly. I was expecting that the 40mm reflectors I had in my mind would still conflict with the door or the face, but I've just fit a 60mm set in there super easily without any inconvenience. And angry-eyebrows aside, it actually looks organic, like a Spider-Man mask.

    The idea is simple.

    First, create the beamsplitter and fresnel lens. The lens here is 40mm, so maybe it's possible to find a non-fresnel alternative for higher image quality.
    Next, rotate the front face 45 degrees and move the assembly forward until the lens misses the forehead:

    Mirror for the other side. The idea is that the right display is responsible for the left eye and vice versa. I might try and see if 50mm is possible, since the edges of glasses might get in the way.

    For 4:3 / 3:4 monitors and a 10mm eye box, I expect that the ray volume from the eye to the reflector combiner looks like this:

    Unfortunately, geometry is often disappointing, and the rays conflict after the reflection:
    The ray volume also misses part of the lenses (highlighted in blue). I'll see what I can do after writing this log.

    Lastly, I also wanted to mention that I want to add a 10x optical zoom camera and a thermal camera to TyMist if possible.

  • [M] Beamsplitter cube may fail

    kelvinA, 6 days ago, 0 comments

    Finally got around to doing some optical simulation after thinking about what exactly the light is doing. For starters, I think it's easier to start from the eye and cast the rays backwards. That already made me realise that I actually want the image on the retina side of the eyeball lens. If the image is formed there, then you'd see it. Working backwards better allows me to see what rays -- friend or foe -- will hit the retina.

    The reason the image in the test rig looked tiny is that I was trying to grab the rays I'd see at >1m. I'd imagine the image would look the same size if my eye was [insert system focal length] away too. I might be able to fix it just by bringing the fresnel lens in the decollimator closer.

    Anyway, this is the simulation I set up to see if I could actually get a solution with the 30mm beamsplitter, and the short answer is "not really".

    With these rays, I can finally understand why I was getting some mystery mirrored image at the bottom of my vision when I angled the right-angled prism to reflect stuff 90 degrees off-axis.

    • That bright section of rays is what I'd see through the F50 fresnel lens. The slightly darker section is the full 32 degrees I'm aiming for. 
    • The length of the lens is the same as the length of the beamsplitter cube (or 30mm right-angled prism).
    • Therefore, anything that misses the lens would've hit the front wall of the glass.
    • Due to the angle of light, these rays would've been totally internally reflected.

    I've got the eye relief at 24mm, and I can't have reflected rays going past that point, so the beamsplitter's reflective plane would have to move back. The rays in question would likely need a 40mm lens, thus a 40mm beamsplitter (or 45mm beamsplitter plane). I'm currently unsure whether there's actually enough space in front of the head to accommodate that within the 300mm cylinder.

    I'm not annoyed by the reflections... yet... but a 25-degree FOV isn't exactly ideal. Every way I slice it, it feels small.

