#visualsfm
patricklevar · 7 years ago
Photo
3D photo scan shot with my Samsung S7 edge: about 41 pics, which I then put into #visualsfm, then #meshlab, and finally into #cinema4dr17 (at Osaka) https://www.instagram.com/p/Bo_t7B8ARRx/?utm_source=ig_tumblr_share&igshid=15fnrvm6nvo05
petermorse · 8 years ago
Video
vimeo
Sifted from luckybulldozer on Vimeo.
A film directed by Dan Monaghan, VFX software developer Ben Torkington, music by Nick Buckton, starring Mia Straka. The images in this film are built from photogrammetry: 3D models made from photographs. Made in New Zealand with the help of Creative New Zealand and The New Zealand Film Commission. The 3D world VFX was made with Ben Torkington's application DotSwarm for OS X (just released). The 3D models were generated with Changchang Wu's VisualSFM, and the dense point clouds with Yasutaka Furukawa's PMVS2. --- Breaking News --- The software that made this film is now available for everyone: visit dotswarm.nz
anc230blogsdangkhoale · 8 years ago
Text
Week 13
Week 13 was the last week of Trimester 5 and also the last week of ANC230 Animation Studio. During the week we were given advice on how to freelance and what common pitfalls to avoid, which will definitely be helpful in the near future. At the end of Friday we were able to present our independent studies to everyone.
Reflective analysis
When I came into Animation Studio 3, I knew that I needed to do a lot more than just the modelling I contributed to the main group project last trimester. This trimester I decided to take on more roles in the major group project for ANC230 Animation Studio 3. Officially I was involved in the character concepts (the bird and the butterfly), the work-in-progress storyboards, the character models (the bird and the butterfly), and the animation of shots 5-7.
I had a bit of a shaky start when I was working on the project. Some of the problems were highlighted to me during the week 7 interview to check how everyone was doing. Phil advised me that I needed to communicate more by checking the Slack channel more often. Phil also told me that I was a bit behind on my hours and needed to catch up. Since that week 7 interview, I have wanted to be more responsible in how I conduct myself with others. I decided to check the Slack channel more regularly, rather than only three times a week, and I tried to catch up my lost hours for the major project and independent studies on the weekends.
During week 7 I was given shots 5-7 to create a pass 1 animation for. I was told that these were the easier shots of the film, since I had not done any animation last trimester. Once I started working on the pass 1 animation, I knew that I was not going to be able to submit the animation passes on time. It took me 12 hours to create shots 5-7, and I wasn't happy with the results. During week 8, when animation pass 2 was about to start, I decided to get help from Sarah for shot 7. Although the animation passes still took me 12 hours each, I was a bit happier with my shots. I definitely learned a valuable lesson during this process: pass on work if you feel it is too much to handle, instead of delaying the production process.
Aside from the main project, I was also working on my independent studies. This trimester I tried to tackle the subject of photogrammetry. Photogrammetry is the process of taking multiple photos of a single object and converting them into a 3D model that can be used in films, games and animation. My goal for this independent study was to create three 3D models using photogrammetry. By the end of the trimester I had created two tree stumps, one tree log and one rock. The first tree stump was created using the phone application SCANN3D, which builds models automatically. The other stump, the tree log and the rock were created using VisualSFM, CMVS/PMVS and MeshLab. Although I was able to create these models, I knew they needed more polish: some had gaps in their UV textures that are hard to see at first glance but detract from the realism of the piece. I would definitely look into fixing these problems. During the trimester I also wanted to be a bit ambitious and create a full 3D head model, but I didn't realise how hard it would actually be. Most of the 3D head models came out incomplete, whether they were made with SCANN3D or with VisualSFM and MeshLab.
I learned a lot about creating 3D models using photogrammetry and really want to get further into it. Right now my goal is to learn more about converting my scans into usable assets for film, animation or video games, as at the moment they can't be used for much because the mesh textures have the lighting baked in. I may also consider finding jobs that involve photogrammetry.
Cultural critique
The idea for the film “When The Dust Settles” was based around the theme of domestic violence. The dominant ideology proposed in the film is that domestic violence has a big impact on children.
Signifiers used to support this dominant ideology include the art style of the film, the sudden occurrence of a dust storm, and the storm's destructive impact.
During the scene where the rabbit looks across the valley towards the horizon, it sees a huge dust storm approaching rapidly. The rabbit stumbles and quickly sprints away from the storm. This signifies the spontaneous nature of domestic violence, showing that children in a domestic-violence situation need to be on guard at all times.
After the dust storm, the environment is left looking like a barren wasteland and the rabbit pops its head out of its burrow once more. This signifies the emotional scarring that children suffer from domestic violence. The dust storm had a major impact on the setting, and showing the scale of the damage to the environment heightens how severe the impact of domestic violence is for the child.
For the art style of the film, the team decided to use a paper-origami texture with warm lighting. This signifies the world inside the mind of the child, showing the innocence of the child itself.
Although the film demonstrates qualities that signify it is based around domestic violence, there are a few errors in the piece that could distract the viewer from reaching this conclusion. If we were to make modifications to the film, we would definitely fix the major on-screen bugs. The glitchy flower in the film needs to be fixed so that the audience does not get distracted from drawing conclusions about what the film is really about. We would also try to texture the tips of the rabbit's ears black to show the mental and physical scarring that domestic violence leaves on children. The last modification would be muting the sound in the credits scene. The music played over the credits is the same music played at the start of the film, which might confuse the audience about the intention of the film, as it could suggest that domestic violence is not as bad as it is made out to be.
gnomalab · 8 years ago
Video
vimeo
GeoLocating PointClouds from photos using OSM at Mapzen from Patricio Gonzalez Vivo on Vimeo.
The photos were taken with a regular camera (red cones) and an iPhone 5 (green cones), then processed with VisualSFM to generate a bundler.rd.out file and a dense reconstruction.
Everything was loaded into a C++ program (built with openFrameworks) that geo-locates the cameras and geometry using OpenStreetMap data.
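The post doesn't include code, but the heart of such geo-location is projecting latitude/longitude into a local metric frame shared by the OSM geometry and the camera cones. Here is a rough Python sketch of one way to do that conversion, using a simple equirectangular approximation around an assumed scene origin (the coordinates below are hypothetical, not taken from the video):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def latlon_to_local_meters(lat, lon, origin_lat, origin_lon):
    # Equirectangular approximation: fine at city scale, where the
    # scene spans at most a few hundred meters around the origin.
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat))
    y = EARTH_RADIUS_M * d_lat
    return x, y

# Hypothetical scene origin and two camera GPS fixes:
origin = (40.7033, -74.0170)
for lat, lon in [(40.7041, -74.0158), (40.7036, -74.0175)]:
    print(latlon_to_local_meters(lat, lon, *origin))
```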
50percentgray · 10 years ago
Video
vimeo
This is a photogrammetric reconstruction of 'The Brain', in the collection of the British Library. It is extremely fragile and hence not on public display.
According to Kathleen Doyle, Curator of Illuminated Manuscripts -
"The manuscript is a copy of the Decretals of Innocent IV, a canon law text. It was damaged in the fire in Ashburnham House in 1731 where the Old Royal Library and the Cotton Library were kept, but unlike most of the other manuscripts damaged in the fire, was not restored but was left in its burnt and shrivelled condition as a specimen of the former condition of the many of the other manuscripts before restoration. It is written on parchment, and has a few illuminated initials. It was in a monastic collection, probably at St Albans."
To find out more about the British Library’s collections visit bl.uk/
The model was generated using VisualSFM and MeshLab, textured in Maya with Oleg Alexander's texture projection tool [olegalexander.com] and rendered in Mantra.
fabkzo · 11 years ago
Video
youtube
VisualSFM quick test
drabber · 7 years ago
Photo
My Instagram Post, 19 March 2018: Spent some time over the weekend taking pics of an older #sculpture of mine, testing out a process for making 3D models. #visualSFM #3dscan #photogrammetry
anc230blogsdangkhoale · 8 years ago
Text
Week 12
Pipeline for the whole photogrammetry method (for rocks and trees)
Before you do anything, make sure you have VisualSFM, CMVS/PMVS and MeshLab installed on your computer.
Taking Photos
1. Go outside and find objects that do not have harsh shadows or shiny surfaces.
2. Circle around the object and take photos from as many positions as possible.
3. Make sure each picture overlaps a fair amount with the previous one.
4. Repeat the loop from different heights (one high angle, one low angle).
5. Find other objects and repeat steps 1-4.
6. Once you have taken the desired number of photos, group them into separate folders, one per object (a helper script for this is sketched below).
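Step 6 can get tedious after a long shoot. Here is a minimal Python sketch for splitting one camera dump into per-object folders, under the assumption that you photograph one object per session, so a long pause between shots marks the start of a new object (the folder names and time gap are arbitrary):

```python
import os
import shutil
from pathlib import Path

def group_photos_by_time(photo_dir, out_dir, gap_seconds=300):
    # Sort the shots by file modification time; a gap longer than
    # gap_seconds between two shots starts a new object folder.
    photos = sorted(Path(photo_dir).glob("*.jpg"), key=os.path.getmtime)
    group = 0
    last_time = None
    for photo in photos:
        t = os.path.getmtime(photo)
        if last_time is None or t - last_time > gap_seconds:
            group += 1  # new object
        last_time = t
        dest = Path(out_dir) / f"object_{group:02d}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(photo, dest / photo.name)

# Example (hypothetical paths):
# group_photos_by_time("camera_dump", "photogrammetry_sets")
```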
Importing Photos to generate point cloud
1. Open up VisualSFM.
2. Click on File > Open Multiple Images.
3. Once the images of a particular object are loaded, press the 'Compute Missing Matches' button (four arrows pointing outwards).
4. SfM > Reconstruct Sparse.
5. SfM > Reconstruct Dense (this creates a project folder so that the result can be used in MeshLab).
6. Once the process is done, head over to MeshLab (or drive the whole reconstruction from the command line, as sketched below).
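For repeat runs, VisualSFM can also be driven without the GUI. A rough Python sketch, assuming the VisualSFM executable is on your PATH and that the sfm+pmvs batch switch documented on the VisualSFM site is available in your build (treat the exact flags as an assumption):

```python
import subprocess
from pathlib import Path

def reconstruct(image_dir, output_nvm="result.nvm"):
    # Sparse SfM followed by CMVS/PMVS dense reconstruction in one
    # batch call, roughly equivalent to the GUI steps above.
    cmd = ["VisualSFM", "sfm+pmvs", str(Path(image_dir)), output_nvm]
    subprocess.run(cmd, check=True)

# Example (hypothetical folder from the grouping script above):
# reconstruct("photogrammetry_sets/object_01")
```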
Cleaning Up Models in MeshLab
1. Open up MeshLab.
2. File > Open Project.
3. Find the .nvm file.
4. File > Import Mesh.
5. Find the .ply file.
6. Find the 'Select Vertexes' button to highlight areas of the model you want to delete.
7. Delete the highlighted areas using the delete-selection button (a hollow triangle that has been crossed out).
8. Filters > Remeshing, Simplification and Reconstruction > Screened Poisson Surface Reconstruction (this step fills in the gaps in the model).
9. Set Reconstruction Depth to 12 and press Apply.
10. Hide the original model and the .ply file in the Layers menu on the right-hand side by clicking the eye icons, then select the Poisson layer.
11. Delete unwanted areas of the model using steps 6 and 7.
12. Filters > Remeshing, Simplification and Reconstruction > Simplification: Quadric Edge Collapse Decimation.
13. Reduce the target number of faces so that it is below 1,000,000, then press Apply.
14. Filters > Cleaning and Repairing > Remove Faces from Non Manifold Edges (deletes extra faces).
15. Filters > Texture > Parameterization + texturing from registered rasters (this creates the texture for the object file).
16. Change the texture size to 2048, change the texture name, and then press Apply.
17. File > Export Mesh (save the file type as .obj). A scripted alternative to the Poisson and decimation steps is sketched below.
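For batches of scans, the Poisson reconstruction and decimation steps can be approximated in Python with the pymeshlab module instead of the GUI. This is only a sketch: the filter names below are assumptions that have changed between pymeshlab releases, so check the filter list for your version, and the raster-based texturing step still needs the GUI.

```python
import pymeshlab  # pip install pymeshlab

def clean_dense_cloud(ply_path, out_path="model_clean.ply",
                      depth=12, target_faces=900_000):
    ms = pymeshlab.MeshSet()
    ms.load_new_mesh(ply_path)
    # Screened Poisson surface reconstruction (fills the gaps in the model);
    # filter name assumed from recent pymeshlab releases.
    ms.apply_filter("generate_surface_reconstruction_screened_poisson",
                    depth=depth)
    # Quadric edge collapse decimation to get under the face budget.
    ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                    targetfacenum=target_faces)
    ms.save_current_mesh(out_path)

# Example (hypothetical path):
# clean_dense_cloud("object_01/dense.ply")
```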
Bringing the Mesh into Autodesk Maya
This step is just importing the exported .obj file and using the Hypershade to apply the texture back onto the object; a minimal script version is sketched below.
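A rough maya.cmds sketch of that import-and-retexture step, to run in Maya's Script Editor (Python tab). The file paths and node names are hypothetical, and the import assumes Maya's OBJ translator plug-in is loaded:

```python
import maya.cmds as cmds

obj_path = "C:/photogrammetry/object_01/model.obj"       # hypothetical path
tex_path = "C:/photogrammetry/object_01/model_tex.png"   # texture exported by MeshLab

# Import the mesh exported from MeshLab.
cmds.file(obj_path, i=True, type="OBJ", ignoreVersion=True)

# Build a simple shader and wire the photogrammetry texture into it,
# which is what the Hypershade step does by hand.
shader = cmds.shadingNode("lambert", asShader=True, name="photogram_mat")
file_node = cmds.shadingNode("file", asTexture=True, name="photogram_tex")
cmds.setAttr(file_node + ".fileTextureName", tex_path, type="string")
cmds.connectAttr(file_node + ".outColor", shader + ".color", force=True)

# Assign the shader to the imported mesh (select the mesh first in the viewport).
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name="photogram_matSG")
cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)
cmds.sets(cmds.ls(selection=True), edit=True, forceElement=sg)
```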
sdepazos · 10 years ago
Photo
Photogrammetry Workflow
VisualSFM, MeshLab, Nuke, etc
anc230blogsdangkhoale · 8 years ago
Text
Week 10
This week was the start of render tests, and I was able to render my shots for the animation over the week. While rendering the test I noticed that there was an Arnold watermark on the rendered shot. After discussing this with Nathan, I found out that I needed to render my shots as an image sequence rather than through batch render.
During class time, I had another go at creating 3D models of heads using the photogrammetry app SCANN3D on my phone. I decided to make head models of Sarah and Simon, but both ended up with parts of the head missing. I got really close when I managed to reconstruct the side of Simon's head. I found that while circling around and taking photos of the head, I tended to move closer to and further from the subject, which may have caused the errors in the head models. I tried to fix this by putting a masking-tape frame on my phone screen so that the head would stay exactly the same size in every photo. After testing this method, I found that it did not work. I scanned Simon's head twice, once with the tape frame (picture 3) and once without it (picture 4), and the one with the tape frame ended up looking worse than the one without. I think I may need to reconsider creating a head model as one of my three models.
Screenshots of the results can be seen below.
(Four screenshots of the head-scan results, including Simon's head with the tape frame (picture 3) and without it (picture 4).)
For independent studies, I decided to go out and take pictures again. This time I made sure I was taking high and low angles at a moderate pace. The place I picked to take photos was a park. The weather that day was very sunny, and although there were a lot of objects in the park, most of them were in direct sunlight. I knew that objects lit directly by the sun would have harsh shadows, which definitely limited the number of objects from the park I could turn into 3D models. The only objects I was able to take good pictures of were the ones under trees, as none of them had harsh shadows.
Once I got home, I grouped the objects into separate folders to prepare for making the point clouds for each object in VisualSFM next week.
anc230blogsdangkhoale · 8 years ago
Text
Week 6
For independent studies, I wanted to look at the other software options available for photogrammetry on personal computers. From a quick Google search I found programs such as Agisoft PhotoScan, VisualSFM, MeshLab, SCANN3D and Autodesk ReMake. Agisoft PhotoScan and Autodesk ReMake only offered trial versions, whereas VisualSFM and MeshLab were free to use. In the end I chose VisualSFM and MeshLab, as they looked less complicated to use based on their user interfaces.
sdepazos · 10 years ago
Link
Fantastic guide to 3D scanning with just a DSLR reflex camera using photogrammetry techniques!
fabkzo · 12 years ago
Text
Structure from motion with VisualSFM
Thanks to Changchang Wu, we can visualize all the steps of making dense .ply point clouds, using your graphics card's GPU.
Depending on your configuration you can load many high-resolution photos (up to 3200 px), versus 1200 px with Bundler.
Prepare your 16 GB of RAM, an 8-core processor and a big recent GPU (NVIDIA, for CUDA), plus many hours, to make ("easy") 3D scans.
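If your source photos exceed that limit, a small Python sketch (assuming Pillow is installed; the paths are hypothetical) can downscale a whole set to 3200 px on the long edge before loading it into VisualSFM:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

MAX_EDGE = 3200  # the resolution limit mentioned above

def downscale_set(src_dir, dst_dir):
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for photo in Path(src_dir).glob("*.jpg"):
        img = Image.open(photo)
        # thumbnail() resizes in place, keeps the aspect ratio,
        # and never upscales smaller images.
        img.thumbnail((MAX_EDGE, MAX_EDGE))
        img.save(Path(dst_dir) / photo.name, quality=95)

# downscale_set("raw_photos", "photos_3200px")
```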
Attention: you'll need to learn a bit of photography technique, otherwise your results will be poor without a good understanding of the subject.
Note: free for non-commercial use.
links:
http://ccwu.me/vsfm/
http://wedidstuff.heavyimage.com/index.php/2013/07/12/open-source-photogrammetry-workflow/