Voxon VX1 3D Volumetric Display
Hi, my name is Gavin Smith, and today I'm going to talk you through a short demonstration of the Voxon VX1 volumetric display, so let's get started. I'll turn on the VX1 and wait for Windows to boot up. After a few seconds we'll put in our password and wait for the desktop to appear. We're now going to run VoxieOS, our own volumetric file explorer, which is a bit like Windows Explorer: it lets you browse through the contents of your computer in a volumetric user interface. The screen is now running and displaying an animation file. So let's navigate through the folder structure and find a 3D file to look at. If we pick this skeleton file, it's a good way of showcasing the technology. We're going to be using this 3D space mouse to navigate the model: moving it around the volume, moving it up and down, and scaling it using the buttons on the side. This is a really good introduction to what the volumetric data looks like. You can pan the camera around, or move your viewpoint around, and see that the display really is physically three-dimensional from absolutely any angle, and no special glasses are required. We also support animations. They're simply a zip file full of objects, in this case an animated dancer, which was animated using a motion capture file from Carnegie Mellon's online repository of motion capture files. These animations are merely a zip file containing a sequence of 3D objects, and like any 3D object they can be zoomed, manipulated, panned, and scaled on any axis. This next model is a three-dimensional fly. This was actually a hand-created 3D model. It really does give a very good indication of the level of detail that can be examined by a group of people gathered around the VX1. This one almost looks like an electron microscope image of a fly. You can zoom in to whatever level of detail was actually encoded in the model itself when it was created.
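A swept-volume display like the one described above ultimately draws a 3D scene as a stack of 2D slices, so any model becomes voxel data. As a rough illustration of that idea (a minimal sketch, not Voxon's actual rendering pipeline), here is how a point cloud can be quantized into a slice-per-layer voxel grid:

```python
import numpy as np

def voxelize(points, grid=(64, 64, 64)):
    """Quantize an (N, 3) array of XYZ points in [0, 1)^3 into a boolean
    voxel grid; each Z layer then corresponds to one slice of the swept
    volume."""
    vox = np.zeros(grid, dtype=bool)
    idx = np.clip((points * grid).astype(int), 0, np.array(grid) - 1)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vox

# a diagonal line of 64 points lights one voxel per layer
pts = np.linspace([0, 0, 0], [0.99, 0.99, 0.99], 64)
vox = voxelize(pts)
print(vox.sum(), "voxels lit")
```

The function name and grid size here are invented for the example; a real pipeline would also rasterize triangles and handle color, but the quantize-to-layers step is the core of volumetric display data.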
We've had groups of 10 to 15 students gathered around the display, looking at this kind of data. If we bring up another animation file, I'll show you some of the color features of the display. This one is a model of a 3D dragon. This was actually animated using a file from Mixamo, the character rigging company, so it came down as an FBX file, which was animated in 3ds Max, and this one actually has a color texture map, so I can use the LCD screen on the front here, enable RGB mode, and that now renders in color mode. As well as RGB color, we can actually choose any monochrome color if you want to see the data at its highest resolution: any of red, green, and blue, plus the secondary colors; in fact, you can mix the colors in white mode to display any hue that you want. If we flip back to our pure RGB mode for a second, we can show some of the other features of the VX1; in this case we'll show you some gaming. Let's flick over to a game you might be familiar with. In this game, rather than creating a line of blocks, we're actually creating a plane of blocks, a three-by-six plane of blocks that have to fit together. This one's very, very addictive. These games were actually created in C. And this one here is another familiar game. We've taken existing IP and made it much more fun by adding several levels to the maze, and as an educational introduction to computer programming, students can edit a text file which describes the shape of the maze; they can make their own mazes of any shape or complexity and really have a lot of fun demonstrating them to their friends. Some files that we support natively in VoxieOS are MOL files, for example. This is a great way to look at chemistry data. This is a chess game rendered in two-color high-res mode, which we also use for DICOM viewing; this one's got an inbuilt AI. We've got some RealFlow liquid animation happening here, which was rendered in 3ds Max using the RealFlow plugin.
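The maze-editing exercise described above comes down to parsing a plain-text level description. The format below is hypothetical (the video doesn't show the actual file), but it sketches how a multi-level maze text file could be parsed, one text block per level:

```python
# Hypothetical multi-level maze text: '#' = wall, '.' = open floor,
# blank lines separate the levels. This format is an assumption for
# illustration, not the VX1's documented maze file format.
MAZE_TEXT = """\
####
#..#
####

####
#.##
####
"""

def parse_maze(text):
    """Return a list of levels, each level a list of row strings."""
    return [block.splitlines() for block in text.strip().split("\n\n")]

levels = parse_maze(MAZE_TEXT)
print(f"{len(levels)} levels of {len(levels[0])} rows")
```

A student editing such a file only needs to add or change characters and blank-line-separated blocks to reshape the maze or add floors.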
This is a demo of Faceware face capture. Recently we've put a lot of effort into getting Unity running; we now have full support for Unity. This is a short demo using some sword-fighting that we've created in Unity. The Unity SDK is available on our website. This is an STL file of a building, showing how architectural data can be visualized by simply locking the rotation and moving the object up and down through the volume to see different floors. We've got our own mapping API, a program called MapView, which allows you to create geographical height maps from anywhere in the world, navigate through them, and explore like Google Earth in 3D. This is a great example of stereophotogrammetry. And going back to education, this is one of our popular ones. This is Graphcalc, which allows you to look at 3D mathematical formulas in a completely interactive way: typing in a formula and visualizing the shapes in 3D. You can create your own formulas or look at some of the 36 built-in formulas. And lastly, we wrap up this demo showing one of our new 3D asset types. This is DICOM, for viewing medical data. This is real-time marching cubes segmentation of data from MRI and CT scans, a very powerful way of looking at medical data on the VX1. We hope you enjoyed that, and if you want any further information, please do not hesitate to contact us via our website. Thanks for watching.
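The Graphcalc idea mentioned above, typing in a formula and seeing its shape in the volume, amounts to sampling z = f(x, y) on a grid and mapping each sample to a discrete display layer. This is an illustrative sketch (not Graphcalc's actual code; the function and parameter names are invented for the example):

```python
import math

def sample_surface(f, n=32, zlayers=16):
    """Evaluate z = f(x, y) over [-1, 1]^2 on an n x n grid and map each
    sample (clamped to [-1, 1]) to one of `zlayers` display layers."""
    layers = []
    for i in range(n):
        row = []
        for j in range(n):
            x = -1 + 2 * i / (n - 1)
            y = -1 + 2 * j / (n - 1)
            z = max(-1.0, min(1.0, f(x, y)))
            row.append(int((z + 1) / 2 * (zlayers - 1)))
        layers.append(row)
    return layers

# a radial ripple, in the spirit of the built-in demo formulas
grid = sample_surface(lambda x, y: math.sin(3 * math.hypot(x, y)))
```

Lighting voxel (i, j, grid[i][j]) for every sample then produces the surface in the display volume.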
https://youtu.be/FVYoWsxqK8g
Live Volumetric Imaging LVI Catheter
LVI stands for Live Volumetric Imaging, in other words real-time ultrasound imaging in 3D. The key to LVI is an innovative, breakthrough ultrasound transducer technology that we've developed at RTI, which enables us to miniaturize the ultrasound transducer device enough to enable real-time 3D imaging from a catheter. The way this device would be used is that the ultrasound catheter would be placed in the right atrium and positioned to point at the target of interest, which might be a mitral valve, an aortic valve, or the pulmonary veins. It produces this pyramidal-shaped view instantaneously while the heart is beating, and that image can be manipulated with the ultrasound imaging system: rotating the volume, cutting into the volume at any arbitrary angle, and rotating, for example, a plane or any image within the volume in real time. To give a surgeon a different vantage point, the view can be changed without having to reposition the catheter, so it's sort of like a camera doing a fly-through in the heart. A typical procedure a cardiologist might use this for would be an interventional procedure to treat, for example, atrial fibrillation, which is a common cardiac arrhythmia. Currently they do not have imaging of the soft tissue that they're ablating, nor adequate imaging of where the ablation catheter is with respect to the tissue they're trying to ablate. This volume view can give them a real-time image of where the cardiac catheter tip is in relation to the objects they're trying to ablate, which in this case would be the pulmonary veins. The interventionalist really wants a 3D imaging modality that's under the interventionalist's own control: just another catheter device that can be placed inside the heart and left alone making images during the surgery, which makes for a much simpler and more effective procedure. This work started
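The "cutting into the volume at any arbitrary angle" described above is, at its core, resampling a plane through a 3D voxel grid. Below is a minimal nearest-neighbor sketch of that idea (not the imaging system's actual implementation, which would use proper interpolation and run in real time on streaming data):

```python
import numpy as np

def oblique_slice(vol, origin, u, v, size):
    """Sample a size x size plane through 3D array `vol`. The plane is
    spanned by direction vectors u and v starting at `origin`; each
    sample is a nearest-neighbor lookup, with 0 outside the volume."""
    out = np.zeros((size, size), dtype=vol.dtype)
    for i in range(size):
        for j in range(size):
            p = np.rint(origin + i * u + j * v).astype(int)
            if all(0 <= p[k] < vol.shape[k] for k in range(3)):
                out[i, j] = vol[p[0], p[1], p[2]]
    return out

# a toy volume containing one bright plane at x = 8
vol = np.zeros((16, 16, 16))
vol[8, :, :] = 1.0

# cut through it diagonally in the x-z direction; the bright plane
# shows up as a band in the resulting oblique slice
d = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
sl = oblique_slice(vol, np.zeros(3), d, np.array([0.0, 1.0, 0.0]), 16)
```

Changing `origin`, `u`, and `v` re-cuts the same captured volume from a new vantage point, which is why no physical repositioning of the catheter is needed.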
with a research grant from the National Institutes of Health. It was basically a collaboration with Duke University, which has been doing 3D ultrasound for many, many years. There was really a limitation in the transducer devices themselves, basically in the ability to miniaturize them and in the performance you would get out of the devices when you tried to miniaturize them, and they were interested in developing catheter-based imaging tools. My background being in electronic materials and microelectromechanical systems, I had an idea for producing a microelectromechanical device made using semiconductor manufacturing techniques, which is different from current ultrasound transducers, which are machined ceramic devices. By applying semiconductor-type manufacturing techniques, we can use photolithographic processes to pattern the ultrasound transducer arrays and miniaturize the element size, to get a higher element density within the catheter and produce the matrix arrays that are required for 3D imaging in a device that fits inside a catheter, which may be as small as three to four millimeters in diameter for a cardiac catheter. This transducer device is manufactured in our clean room here at RTI. We can produce these arrays on silicon wafers, where each silicon wafer can contain several hundred transducer arrays, and each batch of silicon wafers can contain thousands of transducer arrays manufactured in one batch. Additionally, we produced the interconnect and cabling technology needed to connect all of the individual transducer elements in the device with signal cabling that then runs the length of the catheter and connects the transducer to the ultrasound system. The next stage of development for LVI will be working toward the product development stages of the technology. Basically, this device would have to be approved by the FDA. Our initial target would be a first-in-human study, where we actually get an
investigational device approval from the FDA and are able to use the device on a few patients, in the hands of one of the cardiologists we've worked with in the past, to show the utility on human patients for real intracardiac procedures. Beyond that, the next step for LVI is commercialization: RTI is actively seeking partners to commercialize this technology and bring it to market.
https://youtu.be/30gTL20c7v0
Voxon creates the world's first volumetric video call over 5G
This actual demonstration is the world's first holographic video communication that's ever been done on a 5G network. We work with this incredible company Voxon out of Australia. Ericsson actually helped connect us with this wonderful company, and we brought them into our 5G labs at Alley powered by Verizon in New York City and started working on how 5G could bring this to life. We brought them here with us to Los Angeles to work on this on the show floor, and we've had people probably 5 to 10 deep for the last two days come to see this great technology. What we've built here is a new type of 3D holographic display technology, and we're working closely with both Ericsson and Verizon to demonstrate what is actually possible over their cutting-edge, brand-new 5G network. We've got two particular applications that we're demonstrating here: we've got real-time holographic video conferencing over 5G, and then we've also got some medical data that we are using to explore what else is possible within that 5G network. We're connected over 5G over to the Ericsson booth, where there's a similar setup, and at the same time we're going to be doing real-time video conferencing, using a special camera to capture a face and create a hologram in real time, so you get picture-to-picture communication over 5G that's just never been done before. This is just incredible. I never thought in a bazillion years I would see my face in a hologram.
https://youtu.be/HkErGrSTDmw
Creating a 3D Volumetric animation
Hi, I'm Gavin from Voxon Photonics, and today I'm going to show you how to use Blender to make an animation that we can run on our VX1 volumetric display. I'm using Blender 2.78c and version 2.1 of the Voxon SDK. So let's get started. For this tutorial I'm going to be using a cloth physics demo from Littlewebhut.com. The animation is 250 frames long, and we're going to export each of those frames as an STL file. To do this, first we'll need to install a free script, which you can download from https://goo.gl/WkNX4F. Once you have downloaded it, navigate to the User Preferences menu and click Install from File. Select the script, click the Install button, and finally save the user settings. You should now be able to find the script by hitting Space, typing "export", and selecting the option to export frames as mesh files. When you hit Enter you might think Blender has crashed, but don't worry, it's just very busy exporting your frames. After a short wait, you should find that your project directory is full of STL files. Along with your files you're going to need an info.ini file, which looks like this; this file has some extra information about the animation, like its speed and playback mode. The next thing to do is select all the STL files and the info.ini file and zip them up. I'm using the free 7-Zip archiver. Once complete, copy the zip file to the media section of the SDK and run VoxieOS, the volumetric file browser. If you then navigate to the animation folder, you can find and run your animation. Because the animation is made up of STL files, you can use the density slider in the render menu to adjust the brightness. This function subdivides the triangles in the file. OK, that's enough of the simulator; let's transfer the animation to the VX1 on a USB stick and see how it looks. Thanks for watching.
https://youtu.be/WyHRBo-Cc7o
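The contents of the info.ini file shown on screen in the video don't survive in the transcript. As a purely illustrative sketch of the kind of settings it carries (the section and key names below are assumptions, not the Voxon SDK's documented format):

```ini
; info.ini - illustrative only; these key names are assumptions,
; not the Voxon SDK's documented format
[animation]
fps = 25          ; playback speed
playmode = loop   ; playback mode, e.g. loop or ping-pong
```

The zip archive then contains this file alongside the numbered STL frames, and VoxieOS reads it to decide how fast and in what order to play them.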
Photogrammetry and Volumetric Capture
Current techniques of communication in video, AR, and VR can fall short in the exchange of information. They often require additional resources to help convey the message. Like in this video: you might be able to see or hear me, but you might not be able to fully understand how excited I am that it's Christmas Eve. I'm Stephanie Essin, and in this episode of IDZ Weekly, we tell you about techniques in photogrammetry and volumetric capture that can take your virtual environments to the next level. [MUSIC PLAYING] Volumetric capture and photogrammetry use images from cameras and sensors to create 3D meshes, which can be merged seamlessly into game engines and virtual worlds for a profound psychological influence on users. They're going to feel really immersed in your experience. Anywhere you would traditionally use a computer-generated asset is a great place to use a volumetric object for increased immersion. This article, by Tim Porter at Underminer Studios, compares volumetric capture and photogrammetry and takes a look at technical specifications, package sizes, capture options, computing needs, and cost analysis. It also looks at the benefits and complexity of each style and its use cases, as well as the engagement and retention in creating immersive realism for digital formats, including VR, AR, and MR. Technology related to photogrammetry and volumetric capture is rapidly changing, but understanding the different types of volumetric capture, including the costs, benefits, and complexity of each, with current use cases, puts you well on the way to incorporating these technologies in your next project. Check out the links provided to learn more, and don't forget to watch Tim Porter and his partner, Alex Porter, on Innovators of Tomorrow. I'll see you next Monday, and happy holidays.
https://youtu.be/xl0SadtBUgw
Volumetric Filmmaking with Depthkit | Intel Software
The stories you tell should bring us into your imagination. So why are we still stuck watching from the outside? Depthkit captures the world in depth and color, unlocking a new dimension in creativity. Optimized for Intel's latest-generation processors, this is called volumetric filmmaking. Now creating an interactive 3D world is as easy as making a video. Share your world the way you imagine it with Depthkit.