Building USD for macOS
Recently I've been diving into USD, Pixar's "Universal Scene Description" file format. It's got a lot of neat features and is seeing increased industry adoption, so I wanted to spend some time kicking the tires on it.
But it turns out it's a bit of a bear to get up and running on a Mac. (If you are on Windows or Linux, you can skip all this pain and just download some sweet pre-built tools from NVIDIA.)
If you're still with me, you are probably on a Mac. Never fret! Here are the steps I've used to get it up and running, and hopefully they'll work for you too.
Clone the github repo: https://github.com/PixarAnimationStudios/USD
If you don’t already have it installed, install Homebrew: https://brew.sh/
brew install python3
brew install qt
brew install cmake
If you don’t already have it installed, install Xcode & the command line tools (from mac app store)
Ensure that your Homebrew binary folder (e.g. /usr/local/bin) is in your command line PATH. Running 'python3' should find the python3 you just installed. If it's not being found, add the folder to your PATH in .zshrc (export PATH=$PATH:/path/to/homebrew/bin)
pip3 install PySide6
pip3 install PyOpenGL
pip3 install numpy
cd into the base folder of your USD repo clone if you haven’t already
Run 'python3 build_scripts/build_usd.py /where/you/want/USD/installed'
And that should do it! The USD build script will tell you what paths to add to PATH and PYTHONPATH, and after that you should be off to the races.
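Using the install path from the command above, the exports will look something like the following (add them to your .zshrc, and use the exact paths the script prints at the end of the build):

    export PATH=$PATH:/where/you/want/USD/installed/bin
    export PYTHONPATH=$PYTHONPATH:/where/you/want/USD/installed/lib/python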
Building Project North Star: Day 2
When We Last Left Our Hero...
Yesterday I posted about all of the prep work: 3D printing and the initial construction steps. We ended up with a mostly-complete optics assembly and attached the hinge arms to the sides of the optics assembly.
At the end of day 1, the headset looked like this:

On day 2, I did the following:
Assembled the headgear arms
Attached the ratcheting rear strap
Shaped and attached the forehead strap
Wrapped the forehead strap with foam
Assembled the full headgear
Attached the headgear to the optics assembly
Attached the lenses
Headgear Arm Assembly
In addition to joining the front and rear of the headgear, the arms of the headgear have a built-in spring-loaded brake to keep the optics in place. Assembly was fairly straightforward once I sorted all the pieces for the left and right sides, although it's a little tricky to get the first couple of screws in while holding tension on the spring.

Also, I had to disassemble the first arm after putting it all together because I forgot to attach the little anti-skid rubber piece to the brake. Oops! The cable guides are a really nice feature but seem very fragile; I broke one at the very end of the night trying to fit my USB cable into it. In the short term I might design an alternative piece that's sturdier and gives good attachment points to ziptie cables (especially if I end up wrapping them all).

Rear Strap
The rear strap is an adjustable ratchet strap that I think is from a welding mask? It comes prebuilt so you just attach it to the headgear arms. I had accidentally screwed the connecting pieces together when I built the arms, so I had to take those apart and then put it back together with the rear strap attached.

Forehead Strap
So the forehead strap needs to be a curved piece, but the plastic part prints flat. The idea is to curve it to fit comfortably on your forehead. One of the recommended techniques is to use a mannequin head but since I don't have one of those... I used my own. I guess this makes me a dummy?

I printed this strap in PLA+ instead of PETG because I knew I needed to shape it, which let me get it malleable at a lower temperature (70 degrees Celsius). For the first few seconds I shaped it by bending it with my fingers, and then as it started to cool I did the final shaping by wrapping it around my forehead. It worked out in the end, but I definitely recommend another approach if you have one. Once it was shaped, it attached to the other side of the headgear arms with buttons similar to the ones used for the rear strap.
Forehead Foam
Before we can attach the forehead main structure to the forehead strap, we need to wrap the strap in foam. The foam and cover material were all included in the kit so this was easy in theory, but it took me a bit to figure out how it was all supposed to go together. In the end I had to search the North Star discord, where I found some instructions (namely - attach the fabric to the bottom hooks of the forehead strap, wrap the fabric around the back of the strap, insert the foam piece, and close it off by attaching the fabric to the top hooks of the forehead strap).
The only other thing I had to do was slightly trim around the buttons that connect the forehead strap to the headgear arms. They are the part that 'clicks' into the forehead main structure, and I wasn't able to get them seated with the fabric in the way.

Final Headgear Assembly
At this point, all that was left to do was to attach the forehead main structure. This clicks onto the forehead strap, and then you're done with the headgear!

Attach Headgear to Optics
With the headgear built, attaching the optics is as simple as sliding the hinge arms through the headgear arm brakes. At this point it's really starting to feel like a headset!
Now that I could actually wear the device, I spent some time adjusting the fit, finding any rough areas on the 3D printed parts that might rub against the skin, and cleaning them up so they were smooth.
Attach Lenses
Up until now, I had been keeping the reflector lenses in the original packaging because I didn't want anything to happen to them during construction. But with all of the major work done it was time to finally put them in place.
I really expected them to screw/bolt in, but they have a series of tabs and simply click into place! I was a bit worried that this wouldn't be very sturdy, but they seem to be holding just fine.
And look at that, it's our very own North Star headset!

Despite rumors to the contrary, I promise I’m not Judge Dredd
What's Next?
At this point I need to finish attaching the various sensors, but my Structure Core shipment got delayed so that may not happen until this weekend. I've also ordered some 10ft long USB cables for the Structure Core and the Leap Motion Controller. Right now my plan is to run those cables plus the USB power cable for the display driver board down the left side with a wire wrap, and the DisplayPort cable down the right side. Once they meet in the back I'll combine them all into a single wrap down to the end of the bundle.
Once all the sensors are wired up, I need to run a calibration to get the headset properly configured. This also requires a custom calibration stand with a pair of stereo cameras. Hopefully I'll be able to print the stand tomorrow, and if my parts arrive soon I can do calibration over the weekend.
Building Project North Star: Day 1
What's North Star?
If you've never seen it before, North Star is an augmented reality headset originally designed by the folks at Leap Motion. Here's a short video of it in action:
[YouTube video]
Awesome, right? I've always wanted to build one, and now that the folks over at Smart Prototyping offer a kit I finally decided to give it a try. I ordered version 'A' of the kit, which includes all of the electrical and mechanical components, but leaves you to print the plastic pieces yourself.
(Also - Special thanks to Noah Zerkin for being a driving force behind these kits existing! I wouldn't be building a North Star without you making the parts so easy to come by)
Actually - Days -2 to 0...
So, for the last few days I've been 3D printing parts. A lot of parts. I suspect it'll be roughly 40 hours of printing to make them all, spread over 4 days. For the most part the printing has been pretty straightforward, although in retrospect I could have found better orientations for some of the pieces to cut down on the necessary support structures. There's been a lot of less-than-ideal support cleanup.
Ok now really Day 1
Today I had enough parts to actually start assembling the headset and made pretty good progress! Here's a list of all the things that I did today, and I'll talk a bit about each in more detail below:
Placed all heat set inserts
Assembled the electronics tray and mounted the driver board
Attached the electronics tray to the optics assembly
Prepared the displays
Attached the displays to trays and mounted in the optics assembly
Connected the displays to the driver board and did an initial test
Mounted the Leap Motion Controller on the optics assembly
Added the structure core mount to the optics assembly
Built the slide mounts and attached them to the optics assembly
For instructions I'm not really following any particular guide, but rather looking at the v3 assembly drawing on GitHub: https://github.com/leapmotion/ProjectNorthStar/blob/master/Mechanical/Assm%20Drawing%20North%20Star%20Release%203.pdf
Heat Set Inserts
This was one of the two steps that were anxiety-inducing, simply because I had never used heat set inserts before and I was worried about screwing something up. Happily, these turned out to be fairly easy to use. I used a small vise to hold the parts I was adding the inserts to, and then used a soldering iron to get them up to temperature while pushing the inserts into place.

For pieces where the hole for the insert went all the way through the plastic, I used a small piece of wood as a 'backstop' to prevent myself from pushing too far, which seemed to work well. The only other thing I did was use a small pair of needle nose pliers to hold the piece in place while removing the soldering iron, just in case it got stuck and tried to pull back out. I don't think I actually had that happen at all though.

Electronics Tray Assembly
This was pretty straightforward: it's just two plastic parts held together with a pair of screws. The driver board is then mounted with another 3 screws and you're done! It was nice to have an 'easy' piece to get started with.


Display Preparation
This was the most anxiety-inducing step of the entire build so far, because it required peeling the display ribbon cables off the back of the displays so I could route them to the correct location on the driver board. It ended up going smoothly; I just very carefully peeled a small bit at a time, and eventually got to the point where I was pulling the cable and adhesive from the back of the display, instead of the cable from the adhesive.

Mounting Displays
Now that the displays were prepared, I mounted each of them to a display tray. There are two types of trays - one angled for 75cm focus, and one angled for 25cm. Since I'm most interested in working with stuff close up I opted for the 25cm trays. I placed the heat set inserts in, and then slid a display into each of the 25cm trays. I then carefully mounted each display tray to the optics assembly, making sure that the ribbon cables were roughly where they needed to be. It took a few attempts to find a good setup where I felt like the ribbon cables wouldn't get crushed, but eventually I got it sorted.


Display Test
With the displays in and the driver board in ... I had to see if it'd all light up. So I connected the ribbon cables to the driver board as well as USB power and the mini DP cable for a signal and wired it all up to my PC. And sure enough, we have video! Probably the highlight of Day 1.


Routing the cables is going to be a major challenge. When it's all done I'll have 3x USB cables and 1x DisplayPort cable coming off the front of the headset, but that will have to wait until we're closer to completion.
Mounting the Leap Motion
This was literally dropping the Leap Motion into a predefined hole on the optics assembly. Done! I decided to use the shorter included USB cable, assuming I'll need to run a 3m extension off the back like I am for all the other parts of the setup.
The provided cable is a pretty snug fit, so I might have to shave things down a bit, or see if I can find an alternate cable to use to route through the headset.

Structure Core Mount
The Structure Core will go above the Leap Motion and provide world localization (and meshing?). There's another mount that can use a pair of Intel RealSense cameras for the same effect, but it's nice that the Core manages it all in a single device.
The mounting for this was just 4 screws into the optics assembly. Note that you don't need the standard halo piece if you're using the alternate mount! It takes a good while to print so skip it if you know you aren't using it.
(You can see the SC mount in the previous image above the Leap Motion controller)
Building Slide Mounts
The Smart Prototyping kit came with pre-tapped aluminum pieces, so this was just screwing a few parts together, and then attaching them to the optics assembly. I don't know if it was worth attaching these now, but if nothing else they're useful as a kickstand right now.


What's Next?
That was a busy day 1! If the parts are ready tomorrow I'll try to build the headgear assembly that makes the device wearable. If I can get that all put together that will be the vast bulk of the construction phase of this project. I also still need to print out the calibration stand parts and get it ready for device calibration once construction is complete. But that'll happen after all the headgear pieces are finished.
Unity Hub 1.0 - A Great Start
Unity Hub hit version 1.0 last week, and if you're not using it to manage all your Unity versions I highly recommend downloading it ASAP.
What's Hub, you ask? It's a centralized place to manage all your Unity installs and Unity projects. If you're like me and have a long-running project plus several prototypes spread across various versions, it's a huge help in making sure that you have all of the right versions installed and don't accidentally open a project in the wrong editor version. It's a real quality of life improvement for people who are extensive users of Unity.
There’s always room for improvement though, so here are some things I think would make future versions of the Hub even better:
Option to do an in-place upgrade for minor versions - Right now each version will always install to its own location. So 2018.2.5 gets a full install, 2018.2.6 gets a full install, etc. While this is something I want on occasion, most of the time I just want a single install of "2018.2" that I can update to the latest bits when a new release occurs. It'd be nice for the 'installs' tab to show an upgrade button next to active installs when a point release occurs.
Link or popup window for version release notes - It’s great to see that there is a new version, but it’d be even better to easily see what changed. Bonus points if it’s in a new custom window that lets us filter the release notes by affected platform.
Technical previews tab - Currently the hub shows ‘official releases’ and ‘beta releases’. If you want a technical preview (e.g. the Magic Leap tech preview builds), you have to go to the website to find it, download it, and then add it to the list of installs in hub. For folks who do a lot of work with cutting edge tech it’s a hassle, and it’d be nice if those were readily available in the hub.
Older releases tab - Sometimes you need to install an older version, and the hub doesn’t currently give you any way to do that. A tab that exposes recent older versions with a link to the web archive for anything that doesn’t make the cut would be a nice way to streamline the workflow.
Sort headers for project list - It'd be nice to be able to quickly switch the sort order between most recently opened, alphabetical, and editor version for all the projects in my projects tab.
Show icon next to name in project tab - Admittedly this is minor, but I’d like to see the project icon (if configured) next to the project name in the projects tab. In a long list it’d be faster for me to find stuff by looking for the icon.
Notifications for updates to installed versions - If Hub is going to minimize to the system tray on Windows, it might as well use that to notify me that my active versions have an upgrade! Even just a '!' next to the version as an identifier of any new releases would be a nice reminder to check the releases list.
A Pleasant Stroll - UE4 Voxel Project Devblog 3
tl;dr - Added foliage, learned about Hierarchical Instanced Static Meshes, started adding building construction with a long way left to go, still thinking about scope and where to take the project overall
Welcome to my forest, have a nice walk around:
[YouTube video]
Foliage, Hierarchical Instanced Static Meshes, and Bugs
Despite not having a lot of time to work this week, I made some good progress. After thinking about 'what's next' for a while, I wanted to add in foliage; I wasn't sure how hard it would be, but it seemed like a fairly self-contained task.
My initial plan was to spawn a pair of Hierarchical Instanced Static Mesh (HISM) components along with each terrain chunk, generate random instances when the chunk is spawned, and then reuse the HISM components when terrain chunks are swapped out.
Easy, right? Well it was, until I hit weird rendering errors. In packaged builds I couldn’t figure out how to get HISM’s to clear their rendering buffers. I tried calling ClearInstances(), storing instance indexes and removing them manually, switching between sync and async tree rebuilds, forcing rebuilds after clear, etc. Despite all of that I was never able to get clearing HISM data to work properly so I could reuse the component.
The engineer in me wanted to (and still wants to, if I'm honest) sit down with the UE4 source and debug a test case until I could find and fix the issue, but the pragmatist in me said "couldn't you just destroy the components and create new instances instead of recycling?"
15 minutes later everything was working perfectly.
I’m probably going to take some time to dig into the HISM issue anyways because it will be a good opportunity to debug/test a UE4 engine issue in a fairly specialized use case. But I’m happy that the foliage stuff is working and the project can keep moving without being dependent on that debug process.
Small addition, but I also made the grass disappear around an area when the terrain is modified - this makes carving through the ground feel better because you're not stuck with grass floating in mid-air after sculpting.
Building a Home
After I got the foliage in I wanted to start working on cabin/base building. Here’s a quick WIP video of that as it stands today:
[YouTube video]
The building isn't as far along as I'd like at this point; I keep iterating on the core components (parts, snap locations and orientation, placement and removal) trying to find a system that I'm happy with. I'm hoping with a bit more work I'll be able to build a "complete" cabin with floor, walls, roof, and door. Eventually I'll need to find a better collection of parts to build with too; these parts are alright for testing but feel very flat compared to the rest of the environment.
Two changes I need to make that are related to building - 1) remove overlapping foliage on
Sculpting
Finally, I also made a bunch of improvements to terrain sculpting. I had a bug where if I increased the strength of the sculpting tool to be greater than 1, I would get these weird cycling issues where a cell would go from almost empty to completely full again...
... aaaand if you've done enough coding in your life you're probably saying "Ryan, you muppet, the density is an unsigned byte and it's wrapping around." You are correct. Fixed, and it works great now. I also updated the density change calculation to be a function of the distance from the tool's center point; it makes the cuts line up much better and gives a cleaner result overall.
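The fix, as a minimal sketch (function and parameter names are mine, not the project's actual code):

    // Clamp the new density into the valid byte range before storing it,
    // so tool strengths greater than 1 can't wrap a uint8 from nearly
    // empty back around to completely full.
    uint8 ApplyDensityChange(uint8 CurrentDensity, float Delta)
    {
        const float NewDensity = CurrentDensity + Delta * 255.0f;
        return static_cast<uint8>(FMath::Clamp(NewDensity, 0.0f, 255.0f));
    }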
UE4 Stuff I’ve Learned
Hierarchical Instanced Static Meshes
Learned a lot about them while implementing foliage - they’re great! But they can definitely be finicky in a procedural environment. I wouldn’t recommend trying to reuse the components at this point, just destroy the old stuff and spawn new ones. Also - make sure your meshes actually have LODs!
If you want players to be able to interact with instances (e.g. chop down trees) you'll probably want an extra data structure that tracks tree state, paired with the instance index that UE4 returns when you add a new instance. That'll let you remove the instance and drop in a damaged/destroyed element.
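A minimal sketch of that pairing (the map, names, and health value are my own, not engine API):

    // Track per-tree gameplay state keyed by the index AddInstance() returns.
    // Caveat: removing an instance can reshuffle the remaining indices, so
    // update the map whenever instances are removed.
    TMap<int32, float> TreeHealthByInstance;

    const int32 NewIndex = TreeHISM->AddInstance(InstanceTransform);
    TreeHealthByInstance.Add(NewIndex, 100.0f);

    // When a tree is chopped down: remove the instance, forget its state,
    // and spawn the damaged/destroyed version in its place.
    TreeHISM->RemoveInstance(ChoppedIndex);
    TreeHealthByInstance.Remove(ChoppedIndex);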
Child Actor Components
For building I’m setting up all of the snap points with a child actor called, well, SnapPoint. At first I was frustrated because it didn’t seem like you could edit the child actor settings in the properties panel, but I eventually found them hiding in the ‘child actor component’ section in the ‘child actor template’ field (it’s a dropdown if you toggle the arrow).
Blueprint Helper Libraries
As I find Blueprint patterns that I use over and over, I'm trying to refactor them out into helper libraries - it keeps the blueprints clean and makes them faster to edit. I have a 'MathHelpers' library that collects my common math operations, and an 'ActorHelpers' for things I do frequently in my various actor blueprints.
Obviously this is for functions that have applicability across a variety of blueprints - otherwise they’d be better off just as a method within a given blueprint.
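If you haven't made one before, a helper library is just a UBlueprintFunctionLibrary subclass full of static UFUNCTIONs. A minimal sketch (the remap helper is my own example, not from the project):

    #include "Kismet/BlueprintFunctionLibrary.h"
    #include "MathHelpers.generated.h"

    UCLASS()
    class UMathHelpers : public UBlueprintFunctionLibrary
    {
        GENERATED_BODY()

    public:
        // Remap a value from one range to another.
        UFUNCTION(BlueprintPure, Category = "Math")
        static float RemapRange(float Value, float InMin, float InMax, float OutMin, float OutMax)
        {
            return OutMin + (Value - InMin) / (InMax - InMin) * (OutMax - OutMin);
        }
    };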
What’s Next?
First and foremost I want to get a first pass of cabin building to a happy place. So: functioning/sensible snap points, a wider variety of building parts, some interior decorations, and figuring out how the building controls map to a controller so I can test them without mouse/kb.
From there I want to think a bit about generating cabins in the wilderness along with the terrain - even if it was just a single ‘prefab’ for now it would be nice to be able to come across a cabin or a watchtower.
It’s a small thing but cutting down trees and adding wood to the player’s inventory for building (along with destroying the tree visual in game) would make building start to feel more like a game loop instead of just an adhoc/creative mode.
On the generation/rendering side I want to investigate moving away from fixed size chunks and allowing distant chunks to cover larger areas with less detail (clipmaps, basically)? This would require a good bit of rework to the stuff I have so far but it would drastically improve view distance.
Along the same lines for generation I want to start adding in some higher level variation - I love walking through the forest as it is but it feels very samey. I’m not sure what I’m going to do here, need to experiment.
As always if you have questions or something in particular you want to know about - feel free to hit me up on twitter. Thanks for reading!
Gettin’ Smooth - UE4 Voxel Project Devblog 2
tl;dr version - moved from cubes to a smooth mesh, added a day/night cycle, lots of small tweaks/fixes, lots of new problems
Two videos to start you off this week! First here’s some general tooling around:
[YouTube video]
And here’s a short video with an accelerated day/night cycle so you can see it in action:
[YouTube video]
What’s New
Almost immediately after I posted my first devblog I made the decision to jump in and try my hand at switching from cubes to a smooth mesh. I like the way it looks and I want to do base building with separate mesh components instead of cubes, so it fits the gameplay better as well. I dropped in a library called PolyVox just to see how it would work and I pretty quickly ended up with something that looks like this:

Which is pretty neat! But not very smooth. And after playing around for a bit I realized it’s because I was treating my voxel cells as single values - either a cell was filled or it wasn’t. But for smooth meshes you need to have a range of values that describe how ‘filled’ a given cell is. So I switched to having the topmost cells calculate a fill percentage and all of a sudden:

Hey, that’s looking smooth!
And, uh, awful. An unexpected consequence of switching from cubes to smooth is that creating interesting shapes out of noise gets way harder to do well. So I've spent a ton of time poking at and tweaking the voxel generation algorithm. After a while I had:

Which is nice, but samey and too pointy up on the peaks. If you go back to the video at the start of this post you can see I'm trying to blend some of these shapes with smoother/rolling simplex noise. I'm not happy with where it's at, but I made some decent headway. I'm pretty sure this is something I'll continue to tweak for as long as I work on the project.
After I got the noise to an ok state I started working on getting some texture into the world. I'm not actually sure I want to keep realistic textures, but I thought it was worth trying to see how it looked. It looks good!

From there I spent a lot of time poking at visual stuff like adding exponential height fog (thanks for the recommendation, Andreas!), playing with the auto-exposure settings for adjusting to light/dark zones, and eventually ended up with something that looks like this:

And y’know, I’m pretty happy with that.
The last thing I did was add the day/night cycle that I showed in the video at the start. It's almost entirely stock UE4 parts that I tell to update on a regular cycle (10 times a second right now, I think?). It's probably horrendously inefficient, but the project is currently running at 300fps on my machine so I'm not too worried about it. It's just nice to have something dynamic in there after staring at static terrains for two weeks!
UE4 Stuff I’ve Learned
Actor Root Components
I had this bug where once I started recycling procedural mesh components the entire world would shift/freak out, and for the life of me I couldn't figure out why. After some digging I realized that when you add components to an actor with CreateDefaultSubobject(), if you haven't previously set a root component it will pick the first object you make and set it as the root! So all of my meshes were being made children of the first mesh, and when that mesh got moved in a recycle all of its children moved with it. I fixed this by explicitly adding a USceneComponent as the root before creating my ProceduralMeshComponents.
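In code form, the fix looks something like this (the class name and pool size are stand-ins):

    // Explicitly create a root first so later components don't get silently
    // parented to the first mesh that happens to be created.
    AVoxelWorld::AVoxelWorld()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

        for (int32 i = 0; i < 8; ++i) // pool size is arbitrary for the sketch
        {
            UProceduralMeshComponent* Chunk = CreateDefaultSubobject<UProceduralMeshComponent>(
                *FString::Printf(TEXT("Chunk%d"), i));
            Chunk->SetupAttachment(RootComponent);
        }
    }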
WorldCoordinate3Way
I had gotten about halfway through adding my own triplanar texturing material when someone asked me ‘why aren’t you using the WorldCoordinate3Way node?’
It was because I had no idea it existed.
So yeah, if you're looking to do basic triplanar texturing on a mesh in UE4, it's super easy to use and gets you up and running, and it exposes basic tuning values like scale/blend factors/etc. I might still replace it as I go (I want to figure out how to use volume textures or texture arrays to change material types), but this was great for a quick coat of paint.
PostProcessVolume’s Exposure Settings
The default auto exposure settings in UE4 are very strong and not really what I wanted. In a dark area it brightens the scene so much that there’s no need for additional lighting! After doing a bit of digging I found the Exposure settings on the post process volume and by playing with Min Brightness I’m able to keep a decent level of darkness in caves and other enclosed/shadowed areas. The downside is that it makes world shadows a bit darker than I’d like. There seems to be a balance between exposure, ambient lighting, etc that I haven’t quite figured out. But this was a good first step.
Per Platform Checking
I'm still running the project on mobile now and again just to check performance, and it's at the point where I'd like mobile-only builds to have an on-screen HUD with buttons to sculpt/jump/etc. There didn't seem to be a great way to do this from Blueprint; the 'solution' I found was to name-check against the running platform, but I really wish this was an enum type or something.
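For reference, the C++ equivalent of that name check (the platform strings here are assumptions; verify them for your targets):

    #include "Kismet/GameplayStatics.h"

    // GetPlatformName returns strings like "Android", "IOS", "Windows".
    const FString Platform = UGameplayStatics::GetPlatformName();
    const bool bShowTouchHUD = (Platform == TEXT("Android") || Platform == TEXT("IOS"));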
What’s Next?
On the voxel/rendering side of things I really want to dig into the transvoxel algorithm for level of detail so I can start using my meshes more efficiently to increase draw distance. But I’m guessing this will be a bit of a sinkhole in terms of time spent, so I’ll probably call the draw distance ‘good enough’ for now and move on to other things.
Right now I'm thinking I should spend a bit more time on the gameplay side and add inventory and the basics of base building/construction. That's a big task too! But it will give me a lot more to do in the world, which will be nice. Right now there's no motivation to go anywhere.
For the generator I still want to look at spawning objects based on the generator results. Some just for decoration (grass, trees) and some for gameplay (resources). Also I need to figure out how I want to deal with water at some point because right now I have no idea.
As always if you have questions or something in particular you want to know about - feel free to hit me up on twitter. Thanks for reading!
Hello (Procedural) World! - UE4 Voxel Project Devblog 1
tl;dr version: I’ve always wanted to play with 3d procedural spaces, and I’ve always wanted to learn UE4. So I’m doing both by making a voxel project in UE4!
For years now I’ve wanted a good excuse to dive into two different things: a deep dive into building things in UE4, and generating 3d procedural worlds from scratch. Recently some time opened up in my schedule to work on some new things so I decided to combine those desires and make a small voxel world game in UE4.
As of right now this is not a full game project, simply a technology testbed where I can learn more about working with UE4, voxel data, and procedural 3d spaces. That said, I'll be implementing various game'ish systems over time since that goes hand in hand with learning the engine.
So, after the first bit of work what does this all look like in action? Take a look!
[YouTube video]
Here’s a list of the bits I’ve implemented so far that make that video possible:
Using noise to generate a seamless/infinite world (including simple caves) and generating meshes
Multithreaded loading chunks of the world on demand
Basic player interaction to create/destroy voxels
A shader that textures voxels based on their material type
Also for fun I took the voxel generation and embedded it in an augmented reality application so I can build voxel objects in the real world. :)
UE4 Stuff I’ve Learned So Far
ProceduralMeshComponent
The ProceduralMeshComponent is doing the heavy lifting for all of my mesh rendering and world physics state, and it couldn't be easier to use. It has a function called CreateMeshSection that you pass vertices, triangle indexes, normals, UVs, etc. to, and it creates the mesh and collider for you.
One thing I found while testing is that a single ProceduralMeshComponent allows for multiple meshes - each CreateMeshSection call takes a "section index", and if you specify a new index it will add another submesh to the component that you can give a separate material. However, there seems to be an increased processing cost each time you add a submesh that I need to investigate. The processing was sufficiently large with the number of chunks I'm making (~512 chunks) that I ended up switching to an array of ProceduralMeshComponents where each one only handles a single chunk.
Long term I think this will be a better route anyways, since then I can use section indexes to handle materials that need different rendering options (e.g. water or stuff with animated tiles).
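For reference, a call looks roughly like this (the arrays are assumed to be filled out by the chunk generator):

    #include "ProceduralMeshComponent.h"

    // One chunk's mesh data becomes one mesh section (section 0 here).
    ChunkMesh->CreateMeshSection(
        0,            // section index; new indexes add submeshes
        Vertices,     // TArray<FVector>
        Triangles,    // TArray<int32> of indexes into Vertices
        Normals,      // TArray<FVector>
        UV0,          // TArray<FVector2D>
        VertexColors, // TArray<FColor>
        Tangents,     // TArray<FProcMeshTangent>
        true);        // also build collision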
FNonAbandonableTask
As I started generating caves I hit stutters again, since generating 3d noise is computationally expensive. What I ideally wanted was for chunk generation to be kicked over to another thread, with meshes updated on the main thread when the generation is finished. FNonAbandonableTask has been perfect for this so far - for each chunk I create a struct to hold all of the generation results and pass it to a new GenerateChunk task, which does the work, fills out the struct, and marks it as complete. I was really happy with how easy this was to get going; between reading up on it and implementing it I think I only spent 2 hours? Works great. Worth noting that these tasks are *not* immediately killed when a game stops, so if they rely on memory/references that get destroyed on game stop they can bring down your editor. The joys of threaded programming. :)
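Here's the rough shape of the task as a sketch - FChunkResults and the other names are mine, but the FNonAbandonableTask boilerplate is the standard pattern:

    #include "Async/AsyncWork.h"

    class FGenerateChunkTask : public FNonAbandonableTask
    {
        friend class FAutoDeleteAsyncTask<FGenerateChunkTask>;

        FChunkResults* Results; // owned by the game thread, filled out here

        FGenerateChunkTask(FChunkResults* InResults) : Results(InResults) {}

        void DoWork()
        {
            // ...expensive 3d noise generation writes into Results...
            Results->bComplete = true; // polled by the game thread
        }

        FORCEINLINE TStatId GetStatId() const
        {
            RETURN_QUICK_DECLARE_CYCLE_STAT(FGenerateChunkTask, STATGROUP_ThreadPoolAsyncTasks);
        }
    };

    // Fire and forget; the task deletes itself after DoWork() returns.
    (new FAutoDeleteAsyncTask<FGenerateChunkTask>(&ChunkResults))->StartBackgroundTask();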
FCriticalSection
One issue I ran into with the threading experiment was that my sparse voxel volume data structure wasn't threadsafe, so when I kicked chunk generation into the task graph all the different threads started stomping all over each other. Not good! I reworked the tasks to use an FCriticalSection to lock access to the volume for writes. The expensive part of the computation (noise generation) can still happen freely since each voxel is computed independently. So a single task generates all of the data for a chunk, grabs the lock, updates the volume, and then releases the lock. I was worried about how a locking data structure would work with the task system, but so far it's been smooth sailing.
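The locking pattern, sketched out (the volume API is a stand-in for my sparse volume):

    #include "Misc/ScopeLock.h"

    FCriticalSection VolumeLock;

    void WriteChunkToVolume(const FChunkResults& Results)
    {
        // Noise generation already happened lock-free; only the shared
        // volume write is guarded.
        FScopeLock Lock(&VolumeLock); // released automatically at scope exit
        Volume.WriteChunk(Results);   // stand-in for the sparse volume update
    }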
Misc Stuff
Lots of little things about creating materials for the voxels, e.g. using world space + the block material to calculate UV’s for the block faces
Putting input handling into the player controller instead of the pawn - being able to swap in a new pawn type is handy!
Using SCOPE_CYCLE_COUNTER to track the performance of various code blocks is super handy for tracking down what's causing stutters in your code (see the sketch after this list)
Along those same lines: the profiler in the session frontend is great for digging into why you had a spike
UGameplayStatics has a ton of useful functions
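The SCOPE_CYCLE_COUNTER usage mentioned above, as a quick sketch (the stat names are mine):

    // Declare a stat once, then time any scope with it; results show up in
    // `stat game` and the session frontend profiler.
    DECLARE_CYCLE_STAT(TEXT("GenerateChunk"), STAT_GenerateChunk, STATGROUP_Game);

    void GenerateChunk()
    {
        SCOPE_CYCLE_COUNTER(STAT_GenerateChunk); // everything below is counted
        // ...generation work...
    }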
Misc Augmented Reality/Mobile Stuff
If you're on Android, don't enable Vulkan - it hard crashes on launch as of UE 4.19
Ambient occlusion doesn’t work on mobile!
I still haven’t figured out how to get a directional light to cast shadows for my voxels
Getting close to the surface where you’re editing blocks causes the AR system to lose tracking -- not enough feature points?
Use the “launch” feature for testing, packaging/deployment is pretty slow otherwise
Voxel Stuff I’ve Learned
Generation
There is no magic single noise formula to get interesting results, use different noise functions for different aspects and combine them all for the result you want. Right now I’m using 3 sets of noise and will probably need a lot more before I’m done
3d noise is expensive. If there’s something you can do with 2d noise instead for a similar result, do it
Perturbed cellular noise is great for all sorts of stuff
Simplex noise is pretty boring to look at but great as modifier/argument to more noise stuff
Caves are really hard to do well when you’re generating each voxel independently
You can lose an entire night’s worth of work time just tweaking your noise parameters and testing the result
Always have an easy way to see the seed of your current world so you can repro bugs as you find them
Always showing player coordinates in dev mode is a really handy tool so you know how to get back to an area that had a bug
Rendering
Turn off all the static lighting stuff, you can’t use it anyways
For cubes, ambient occlusion adds a nice effect at the seams. But don’t overdo it otherwise it looks like your world is being taken over by black mold :(
Using world space coords to generate uvs for blocks is great - just make sure to inset everything slightly so you don’t bleed into neighboring tiles in your tile atlas
When you create/destroy a block on the edge of a chunk make sure you tell the neighboring chunks to update as well, otherwise you can end up with discontinuous areas
What’s Next?
Now that the basics are working I’m still deciding what I want to do next. On the UE4/gameplay side I’m thinking I might have block destruction spawn materials that players can pick up and use to build new structures. It’d be a good lead-in to eventually adding something like a crafting system.
For the generator, I’d really like to figure out a better way to assign material types to voxels (and add new material types for increased diversity). Also I’m not happy with the generated caves at all right now, in particular with how they present themselves on the surface.
On the voxel side of things I’m tempted to give up on the minecraft/blocky look entirely and try to move to a smooth world mesh like you see in Astroneer and No Man’s Sky. I need to read up more on how those systems work before I dive into it though.
Other things would be reading up on UE4's static instance mesh system to start adding foliage, and changing chunk generation to be based on player line of sight instead of a naive 'load all chunks in radius x around the player'. I'm also starting to think about how to handle chunks that need multiple material types (e.g. animated lava/water).
Well, that’s all for now I think, hopefully the next dev blog will be shorter! This one was a bit of a brain dump since I waited so long to write it. As always if you have questions or something in particular you want to know about - feel free to hit me up on twitter. Thanks for reading!
Unity GDC Keynote Braindump
Dump of all my notes from the Unity keynote (if you have 2 hours to kill you can watch the whole thing here: https://www.youtube.com/watch?v=cmRSkHl-Gv0 ). The real info starts at about the 10 minute mark, and it's very watchable at 1.25x speed.
My Summary
2018 feels like the biggest core engine update in a while, maybe since I started using Unity: the new ECS system, the Scriptable Render Pipeline, the shader graph, GPU-accelerated lightmapping, vector art support, the inclusion of ProBuilder, and Nested Prefabs (and presumably an improved asset serialization system to support said nested prefabs). Those are all amazing and should make day to day work with Unity infinitely more pleasant, and I'm so happy to see them arriving.
It’s hard to care about all the fancy bells and whistles in the high definition pipeline since realistically I’ll never have the team or the funding to produce the required assets. It’s great that they’ll talk about stuff like photogrammetry but that only matters if you can make your bespoke assets to the same level of quality. I’m sure there are some small teams with amazing artists that will be able to fully leverage this but it’s just not my wheelhouse.
All the talk about ‘filmic’ effects and storyboarding scenes also misses me entirely. It’s all very pretty and interesting but is very far from what I do. I’d love to see some more focus on interactive environments and play in the demos in the future.
A lot of the performance talk is really nice to hear and I’m happy to see it as a focus. I’m hoping that people are also considering performance from the perspective of “run as efficiently as possible.” Thermals and battery drain are a major concern for a huge set of devices (e.g. nintendo switch, standalone vr headsets, magic leap, mobile devices).
Blah stuff: using machine learning to try and show the player the right microtransaction promotions, and focusing on playable ads when talking about minimal runtimes and startup times. I get that there's an entire industry around this stuff, but it feels like material for side sessions; it's hardly inspiring.
Curious stuff: I don't think the VR editor was mentioned at all? I was curious to see the current state of things there. No mention of Unity Hub either; I was hoping to hear it'd become the default launcher in 2018.
Raw Notes ----------------------
Machine Learning Update
Using ML in game development
ML-Agent imitation learning - learns behavior and play from real player interaction
Example of a racing game using ML to learn strategies for cpu racers
Agent maintains course after 30s of training
Agent plays well after 5min of training
Q: Can agents blend between training sets? E.g. a beginner set of data and an expert set of data?
Performance
Huge explosion of devices to support at different perf levels
Scaling features is hard for small teams
LiveTune optimizes assets and settings at runtime to give player best experience on their device
Q: How tweakable is it? Can you focus on reducing perf to optimize battery life/thermals?
"IAP Promo" - tracks player behavior to promote best purchases to player based on their behavior. Abusing player data for fun and profit!
Engineering Update
3 expected releases 2018.1, 2, and 3
2018.1 "spring" (April)
New artist tools
New rendering pipeline (SRP?)
Lots of new device support in AR/VR/MR/XR in 2018.1
Post-Processing stack no longer beta
New post-processing volumes for blending/changing settings
Decals added!
Probuilder
2018.2 "summer"
Real-time ray tracing GPU lightmapper
Vector graphics importer
2d character anim tools
New asset bundle tools
2018.3 "fall"
Nested Prefabs!
Long-Term Support versions
Always based on x.3 versions
24mo patch support on LTS builds
Scriptable Rendering Pipeline
One core architecture customizable with C#
Templates for various feature levels/styles
Only pay for what you use in the renderer
Default templates: High-Def "all features", and "Lightweight" for lower end devices
HD for targeting xbox one, ps4, pc, Lightweight for targeting remainder
Q: Can games that run on different platforms (e.g. PS4, PC, ios) switch their rendering pipeline based on platform without a ton of extra work/asset changes?
HD Pipeline has area lights - area lights w/o shadow cost roughly 2x normal spotlight
New rendering debug tools to investigate lighting contributions, normals, etc
Rendering debug modes run on device for on-device debugging
Can create new debug modes simply
HD pipeline has lots of new options for subsurface scattering, transluency, reflections, etc
Big focus on 'filmic' quality
Book of the Dead demo runs at 30fps on PS4 Pro as HD pipeline example
Artist Workflow
Talking photogrammetry (which is interesting but skeptical it has real value for small teams)
Shader Graph for visual editing of shaders. Supports all SRP pipelines
Layered material support (think photoshop layers but for materials)
Real-Time GPU lightmapper to dramatically cut down on iteration time
More cinemachine stuff, support for storyboard references
More colorgrading stuff, overlaying colorgrading targets and using waveform viewer to try and match scene to that
Unity Evolution
All in-editor content change timing goal: 500ms or less
Packages allow per-feature upgrading
Q: How do dependencies work between packages? Can you see/view them?
Aiming for performant by default (talking about ECS?)
C# job system intended to simplify multithreading for people
Burst compiler helps auto MT sections of code?
Unity wants to be the best game engine for online multiplayer. (As someone currently making an online game in unity... they have their work cut out here)
Ideally want 'one way of writing code' to support things like online by default
Goal of converting from existing monobehavior to new ECS in 30 minutes?
Web/Ads junk
New core runtime for web is 72kb
Asset optimization for lower end environments
Playable ads, kill me
Playable ad demo is built in unity but targets html/js canvas for rendering
Load times of sub 3s on device
Code Improvements With Unity 2017.1 & C# 6
One of my favorite features of Unity 2017.1 is the (experimental) update of the supported C# language version to C# 6. While there isn't any single feature that will drastically change the way I program, there are a few features that can simplify your code, and I thought I'd share some examples.
First off you need to enable the correct language version by switching your scripting runtime to “Experimental (.NET 4.6 Equivalent).” This will require an editor restart, and then you’ll be ready to go!
String Interpolation
String interpolation is a simple syntax addition to make creating strings from data a bit easier. Previously if you wanted to create a string that contained some variable data you’d end up doing something like:
With string interpolation you can change this to:
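    // The $ prefix embeds the values directly in the string:
    string status = $"Player {playerName} has {health} health";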
Or you could even simplify it further to:
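    // Full expressions work inline too, so no temporary variables are needed:
    Debug.Log($"Player {playerName} has {health + bonusHealth} health");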
Null Conditional
Sometimes when you need to pull data out of an object hierarchy it can end up looking something like this:
This is a made up example to belabor the point, but you get the idea: checking for null at each step in the chain and exiting early, to prevent hitting a null reference exception. With C# 6's null conditional operator, you can instead simply do this:
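    // The whole chain collapses to a single line:
    string prefabName2 = config?.enemyData?.prefab?.name;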
The '?' after each step in the access chain says "if this is null, exit the chain early and set prefabName2 to null; otherwise keep going."
Index Initializers
So for a while we’ve been able to initialize a list with pre-set values, like so:
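    // Collection initializer on a List (illustrative values):
    var levels = new List<string> { "Forest", "Desert", "Cave" };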
which is great, right? This syntax also works for dictionaries:
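    // The same collection initializer syntax on a Dictionary:
    var levelIndexes = new Dictionary<string, int>
    {
        { "Forest", 0 },
        { "Desert", 1 },
        { "Cave", 2 }
    };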
This works great, and should work for any class that implements IEnumerable and has an appropriate Add() method for adding new elements.
C# 6 adds a new option called an Index Initializer. Instead of relying on IEnumerable and Add(), it uses indexing behind the scenes to set values. Our dictionary would now look like this:
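    // Index initializer version of the same dictionary:
    var levelIndexes = new Dictionary<string, int>
    {
        ["Forest"] = 0,
        ["Desert"] = 1,
        ["Cave"] = 2
    };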
This syntax should work for anything that has a custom indexer. For simpler cases like a generic Dictionary, you’re probably fine just using whichever syntax you prefer.
Hey, what about async/await?
That’s a fair question, but I haven’t had enough time to play with them within the context of unity to say much about them. Once I get some more time I’ll make a new post!
Games Completed in 2017
Last year I decided to try and focus on finishing more games that I start, and I did okay! You can see that list here.
Starting a new list for the new year, let’s see how I do!
January
Final Fantasy XV (PS4)
Momodora III (PC)
Mighty Gunvolt (PC)
February
Yakuza 0 (PS4)
March
Nothing :(
April
The Legend of Zelda: Breath of the Wild (Switch)
May
NieR: Automata (PC)
GNOG (PS4)
* Any game marked with an asterisk is a game that doesn’t have a traditional ‘end.’ If I include it on this list it’s because I played it enough that I feel like I saw all there is to see, and feel comfortable setting it aside.
Configuring UE4 for Daydream VR
After I got Daydream working with Unity, I wanted to get a similar scene working in UE4 so I could start to compare performance, workflow, etc. Be warned that this is not nearly as out-of-the-box as the Unity version. I'd consider packaging up a template project with all these settings, so let me know if there's interest.
This is not intended to be a tutorial - rather if you’re familiar with UE4 this should give you the checklist of things you need to change in a new blank project to get a build up and running on device. That said, if any of the steps are unclear I’d love feedback!
Steps to get running:
This all assumes running UE4.14, which is the first version with Daydream support
Install the Android tools from the Engine\Extras\AndroidWorks folder
Go to wherever it installed the android SDK (e.g. C:\NVPack\android-sdk-windows) and run "SDK Manager.exe"
All of the tools for Android 7.1.1 (API 25) should be installed already, but also select and install the "SDK Platform" for Android 7.0 (API 24)
Create a new blank project targeting mobile/scalable/no starter content
Enable Google VR & Google VR Motion Controller plugins. Restart editor [1]
Open project settings
In Platforms->Android, click 'Configure Now' for APKPackaging and Google Play Services (next 8 steps are in Platforms->Android subsections)
In APKPackaging, Set an Android Package name (e.g. com.mycompany.daydreamdemo)
In APKPackaging, Set Minimum/Target SDK versions to 24
In APKPackaging, Set Enable Fullscreen Immersive to true
In Build, Enable Support arm64 and disable Support armv7 [2]
In Build, Enable OpenGL ES 3.1 if you want [3]
In Advanced APKPackaging, enable Configure for deployment to Daydream
In Advanced APKPackaging, set deployment mode to Daydream
In Advanced APKPackaging, enable sustained-performance mode
In Platforms->Android SDK, set SDK and NDK API level to android-24
In Engine->Input clear the Default Touch Interface option so it reads 'None'
In Engine->Rendering, Enable Forward Shading
In Engine->Rendering, set Anti-Aliasing Method to MSAA
Create a simple test map and in project settings set it to your default map
Package for Android ASTC [4]
Deploy to Device
Run on device!
Bonus Steps:
Open up Window->Developer Tools->Device Profiles
Create a device profile for Android_Adreno5xx with profile type "Android" and parent "Android_High". Open your new profile (it will be at the bottom of the list)
Add the rendering console variable r.MobileOnChipMSAA and set it to 1
Add the rendering console variable r.MobileContentScaleFactor and set it to 1 [5]
Close the profile window, and in the main Device Profiles window click the 'Save as Default' button at the top (see the ini sketch after this list)
Make a new default material for your project - I use a simple checkerboard that's fully rough. Replace the material on that default cube in your scene with it!
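For reference, the device profile from the steps above lives in DeviceProfiles.ini, and the generated entry should look roughly like this (format written from memory, so verify against what the editor writes out):

    [Android_Adreno5xx DeviceProfile]
    DeviceType=Android
    BaseProfileName=Android_High
    +CVars=r.MobileOnChipMSAA=1
    +CVars=r.MobileContentScaleFactor=1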
Notes:
[1]: The docs say these should be enabled by default, but they aren't
[2]: Enabling both should be an option but there's a bug where both+googlevr = broken build
[3]: I don't have confirmation from Epic yet, but I suspect that Vulkan + GoogleVR is busted. Or Vulkan + arm64 is busted. Not sure yet. Some combo of the settings is causing Vulkan to die a horrible death.
[4]: You could also use ATC here. I haven't tested enough to see if there's a clear benefit; I chose ASTC because long term I believe that is the base format that Vulkan supports.
[5]: You can play with this value, it changes the size of the render buffer. 1 = 720p, 1.5 = 1080p, 2 = 1440p, 0 = native res. Native res will tank performance!
References:
UE4 Getting Started for Google VR/Daydream: https://docs.unrealengine.com/latest/INT/Platforms/GoogleVR/Daydream/
Google VR Unreal Get Started: https://developers.google.com/vr/unreal/get-started
Google VR Daydream in Unreal: https://developers.google.com/vr/unreal/daydream-in-unreal
Google VR Arm Model/Controller in Unreal: https://developers.google.com/vr/unreal/arm-model
Unity settings for Daydream VR
I got my Daydream headset last week and I’ve had a bunch of fun playing with it - and with a bit of down time this weekend I decided to prototype out a little fishing game.
Pretty quickly I started running into sub-60 fps frame rates, and it took me a while to find the right mix of settings to get performance back up. Note: I did not change anything about my actual scene to get up to 60 fps; all of the work was trying different settings to get the game to run more efficiently.
Update: Added more tips from @HoldenLink - thanks!
Quality Settings:
Pixel Light Count: 1
Antialiasing: 2x MSAA
Shadows: Disabled
VSync: Don’t Sync
Player Settings:
Disable 32 bit display buffer [Not consistent, try both and see what gives you a better result]
Enable Multithreaded Rendering
Enable static/dynamic batching
Graphics Jobs (?) [seems to be better/worse depending on scene]
Daydream - Use Sustained Performance enabled
Misc Settings:
In script, set Application.targetFrameRate = 60
Set QualitySettings.vSyncCount = 0 in script; setting it via the GUI isn't reliable (see the snippet after this list)
Disabled Precomputed Realtime GI
Enabled Baked GI
Camera: If cpu limited try switching camera sorting to NoDistanceSort
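The two script-side settings from the list above, as a minimal sketch (the component name is my own; attach it to any startup object):

    using UnityEngine;

    public class DaydreamPerfSettings : MonoBehaviour
    {
        void Awake()
        {
            Application.targetFrameRate = 60; // avoids the multithreaded rendering stalls noted below
            QualitySettings.vSyncCount = 0;   // the GUI setting isn't reliable
        }
    }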
Notes:
I had shadows enabled previously but they looked *terrible* in game. Worth further experimentation to see if there’s a happy medium of perf/looks since they definitely help ground people in the scene
Disabling 32 bit display buffer can cause some banding but it’s one of the biggest upsides for saving on memory bandwidth/heat
Enabling multithreaded rendering makes it much harder to figure out what your perf bottleneck is - once I enabled this I needed to set targetFrameRate to 60 otherwise I’d hit a situation where I’d hit a long stall every 4th frame or so. Long term I’m guessing this will be less of an issue, but it’s helpful for now.
Postmortem - making a quick jam game with Unreal Engine
Yesterday was the end of the first ever "Simple Jam", where participants were supposed to make small games. The explicit rules were "no more than 5 rules and no more than 5 assets", but those were flexible.
I struggled for a while over what to make, and after starting and discarding a few ideas I ended up settling on making a little kart racer game where you race around a track as fast as you can. (It's called Dash Karts, you can download it here: https://itch.io/jam/simple/rate/73383) Given the small scope, it also seemed like a great project for diving deeper into Unreal to see if I could get more comfortable with the engine. This writeup contains my notes from using the engine, covering the bits and pieces I used along with any oddities I ran into.
Overall Impression
Things went great! For the time I was able to spend (around 20 hours) I’m very happy with how the game ended up. There was definitely a learning curve - I made heavy use of the following UE4 tools:
Splines & the SplineMeshComponent
Terrain
Blueprints
UMG
I only wrote about 20 lines of C++ during the project, and I’ll talk about that a bit more when I talk about blueprints.
Splines/SplineMeshComponent
I’m not an artist - if you looked at the page for Dash Karts, you probably already realized this. I have a basic level of competency with tools like Blender but making more complex shapes would take a massive amount of time and probably eat all of my project time just trying to get that done.
The spline tools in UE4 let me build a basic segment of road and then apply it along a spline to build the road. Not only did this save me the pain of trying to make something in Blender, it also made the process of designing the track incredibly interactive, since I could do a lap, tweak a point or two, and then test again - all without ever leaving the editor.
I ran into three problems while doing this that I had to work around. The first was that by default the road wasn’t getting collision enabled, it sounds like there’s a bug there and I worked around it by creating a new blueprint node to manually set the collision state on a spline mesh component.
The second problem was that by default the collision was a bit bumpy at the points where two spline mesh components (think: individual track segments) would join. To work around this I waited until the layout of the track was finalized and used the “export to fbx” option in Unreal. Then I cleaned up a couple of points on the track and reimported it all as a static mesh. This seemed to solve the bulk of the physics issues.
The third problem was due to my solution for #2 - as a giant mesh I couldn’t figure out how to get lightmass baking to make good looking lightmaps for the track. There would be weird black splotches at different points and despite playing with things like lightmap resolution I couldn’t figure out how to resolve the problem. So for the jam I just disabled baked lighting and set everything to dynamic. Longer term I’d like to figure this out so I can re-enable baking.
Terrain
Unreal's terrain editor is fantastic, and I know I only scratched the surface of what it can do. Once I had my basic track working I needed a world to go around it! So I made a basic landscape object that was big enough to surround the entire track and just started modifying the shape to fit. I think the biggest learning curve was figuring out how to put together a simple landscape shader that supports painting layers, but the Unreal docs really helped with a step by step example.
I ran into two issues with terrain that I didn't solve in the time I had, but I plan on investigating after the fact. The first was that I couldn't really figure out how to do sheer cliff faces with the tools. This might just be a limitation of the terrain system in general, and the correct answer is to model those elements separately.
The second issue was finding a good way to make the terrain flush with the road in all the spots where they should be touching. If you play the game you’ll see multiple spots where the terrain isn’t quite flush (don’t drive your kart into those spots and get stuck!). I’m guessing there’s a tool to help with this and I simply didn’t find it in time.
Blueprints
About 99% of the game is done in blueprints. It was great to be able to quickly sketch out and iterate on ideas by throwing together a new blueprint, and quickly modify/and retest elements.
As blueprints would start getting bigger (the kart blueprint and the track builder blueprint in particular) I made heavy use of functions and variables to keep the graphs clean. Just like you would in code.
If I keep working on Dash Karts (and I’d like to), finding the balance of blueprint vs C++ seems like it’ll be an area that needs further investigation. My gut reaction right now is that the workflow would be something like:
Sketch out new idea in blueprint until it works
If the idea is something that we will build on/expand on with custom versions - move the core blueprint into C++ and then extend that for custom versions (see the sketch after this list)
If the idea is a one-off, leave it in blueprint unless there’s a performance need to move it into C++.
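As a hypothetical sketch of what the second step might look like (class and function names are made up, not actual Dash Karts code):

```cpp
#include "GameFramework/Actor.h"
#include "PickupBase.generated.h" // assumes this lives in PickupBase.h

// Core behavior lives in C++; each variant is a blueprint subclass.
UCLASS(Blueprintable)
class APickupBase : public AActor
{
    GENERATED_BODY()

public:
    // Shared logic every pickup needs.
    UFUNCTION(BlueprintCallable, Category = "Pickup")
    void Collect(AActor* Collector)
    {
        // ...common bookkeeping (scoring, despawning, etc.) goes here...
        OnCollected(Collector);
    }

protected:
    // Implemented per-variant in each blueprint subclass.
    UFUNCTION(BlueprintImplementableEvent, Category = "Pickup")
    void OnCollected(AActor* Collector);
};
```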
The one concern I have as a project grows is making sure there’s a clear delineation between what goes in BP and what goes in C++ - mainly so it’s immediately clear where you should be looking for the implementation of a given aspect of the game. Reducing the mental effort to find things in large projects is important!
So what did I end up programming? I made two new blueprint nodes in C++ because the things I needed to call weren’t readily available from BP. The first was a “Find Between” node: you pass it two vectors and it returns a rotator that will rotate vector A to vector B. The second was a node to properly set the collision on my spline mesh components.
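Neither is complicated - a minimal version as a blueprint function library might look like this (UDashKartLibrary is an illustrative name, not the project’s actual code):

```cpp
#include "Kismet/BlueprintFunctionLibrary.h"
#include "Components/SplineMeshComponent.h"
#include "DashKartLibrary.generated.h" // assumes this lives in DashKartLibrary.h

UCLASS()
class UDashKartLibrary : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Returns the rotator that rotates vector A onto vector B.
    UFUNCTION(BlueprintPure, Category = "Math")
    static FRotator FindBetween(FVector A, FVector B)
    {
        return FQuat::FindBetween(A, B).Rotator();
    }

    // Manually applies a collision state to a spline mesh component, working
    // around collision not being enabled by default.
    UFUNCTION(BlueprintCallable, Category = "Collision")
    static void SetSplineMeshCollision(USplineMeshComponent* SplineMesh, bool bEnabled)
    {
        if (SplineMesh)
        {
            SplineMesh->SetCollisionEnabled(bEnabled
                ? ECollisionEnabled::QueryAndPhysics
                : ECollisionEnabled::NoCollision);
        }
    }
};
```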
UMG
UMG is the latest UI tooling in Unreal Engine, and despite not leveraging it heavily I really enjoyed working with it. For a simple UI, adding the elements and wiring them up to the game state was very easy - I think I spent about 30 minutes total reading up on UMG and then implementing the first pass of the game’s UI. Everything ended up working as expected, though there was one sticking point that took me a bit to figure out:
When showing the UI, my initial reaction was to set the ‘HUD’ element on the GameMode, but it seems that’s from a previous iteration of the tools. Instead I had to create an instance of my HUD widget and add it to the viewport. When I wanted the player to interact with the UI I also had to set the input mode and cursor visibility, which is fine but wasn’t immediately obvious. I might have just missed this in the docs since it was closing in on midnight and I was rushing.
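For reference, the working approach boils down to something like this sketch (HudWidgetClass is whatever UMG widget class you’ve built):

```cpp
#include "Blueprint/UserWidget.h"
#include "GameFramework/PlayerController.h"

void ShowHud(APlayerController* PC, TSubclassOf<UUserWidget> HudWidgetClass)
{
    // Instead of the legacy GameMode 'HUD' slot, create the widget directly
    // and add it to the viewport.
    UUserWidget* Hud = CreateWidget<UUserWidget>(PC, HudWidgetClass);
    if (!Hud)
    {
        return;
    }
    Hud->AddToViewport();

    // For UI the player clicks on, also route input to the widget and show the cursor.
    FInputModeUIOnly InputMode;
    InputMode.SetWidgetToFocus(Hud->TakeWidget());
    PC->SetInputMode(InputMode);
    PC->bShowMouseCursor = true;
}
```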
Miscellaneous
Material instances are great. I set up a common material that I used for all my world objects, and then just created material instances to override the colors/textures.
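(The same override idea also works at runtime with dynamic material instances - a quick sketch, where “BodyColor” is whatever parameter your base material exposes:)

```cpp
#include "Materials/MaterialInstanceDynamic.h"

// Create a per-object instance of a shared base material and override one parameter.
UMaterialInstanceDynamic* KartMaterial = UMaterialInstanceDynamic::Create(BaseMaterial, this);
KartMaterial->SetVectorParameterValue(TEXT("BodyColor"), FLinearColor::Red);
MeshComponent->SetMaterial(0, KartMaterial);
```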
I love that UE4 has specialized editors for different game elements instead of trying to jam everything into the scene view. Different data should be treated differently!
You can effectively “nest” blueprints by using the ChildActor component. As a Unity user, this is 110% as miraculous and amazing as it sounds.
Context-sensitive search in blueprints is a great way to learn the system. E.g. when I was defining the kart physics I’d often pull off a wire and type ‘physics’ into the search to see what my options were for getting/setting different values.
The collision tools in the mesh asset editor are great for a quick first pass of collision on new objects. You can also just use the render mesh itself by setting the collision complexity option to always use complex collision.
In development builds of your game you can leverage all the debug drawing routines you normally have in the editor! It’s insanely useful to be able to turn on collision drawing when you’re getting stuck somewhere, to track down the culprit.
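For example, a quick sketch of the sort of thing you can do (the actor and variable names are illustrative):

```cpp
#include "DrawDebugHelpers.h"

// Draw a one-frame wireframe box around the kart's collision bounds to see
// what it's actually hitting.
FVector Origin, Extent;
Kart->GetActorBounds(/*bOnlyCollidingComponents=*/true, Origin, Extent);
DrawDebugBox(GetWorld(), Origin, Extent, FColor::Green);
```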
One thing that can be confusing is dealing with settings that can be overridden in multiple places. Post-processing settings come to mind: some can be set as project defaults, some on the camera component, and some in post-process volumes. It’s not bad that they can be set in multiple places - I can think of use cases for all of them - but when you’re learning the basics, figuring out where to change something can be overwhelming.
This is equally true for GameMode/GameState/GameInstance - it took me a while to dig through and figure out the differences between them and how best to use each. In the end, for a small single-player game it doesn’t matter much; the differences become far more important when you start making a multiplayer game. Still, it’s another thing to learn as you’re picking up the engine.
The Unreal Engine video tutorial collection is massive and amazing. Very impressed with how much content is available to dig through when you want to learn something about the engine.
Saving the most important bit for last: I had a ton of fun making this game and working with UE4! I made a best effort to work with the engine and use the tools it provided instead of just trying to jump in and do things “my way.” I still have a massive amount to learn about UE4 (AI/behavior trees, navigation, animation, networking, more terrain stuff, etc.), but this was a great project for solidifying some of the basics.
1 note
Photo
Rough WIP day/night tile sets & props for Monsters & Monocles’ new forest zone. Next up is rain/fog :)
89 notes
Text
Networking Update (2016/03/01)
First real-world testing!
So today Craig and I fired up the latest build and spent some time playtesting with the new networking code. This is the first time the code had been tested off my local network, and to be honest, after spending the last two months on it I was a bit nervous that it would all explode and I’d be left sorting through the rubble trying to figure out what went horribly wrong.
So, how’d it go?
Great! Session initiation, game initiation, and gameplay all worked pretty much as expected. I logged around 6 smaller bugs to fix that I hadn’t caught previously in my local network testing.
It’s a huge relief to see the code actually up and running at this point. I know there’s work left to be done on it all but the core is there and is working. Here’s hoping I can knock out these bugs in short order and report a full game clear soon. :)
1 note
Text
Daily Networking Update (2016/02/29)
Leap Day, where none of my intermittent bugs exist!
So I spent a lot of today trying to reproduce bugs I’d seen previously so that I could finally fix them.
It did not go well.
I probably logged about 4 hours of game time today but I wasn’t able to reproduce any of the bugs that were left on my task list. This is frustrating because I’m not at all convinced they’re *gone*; they’re just not presenting themselves with how I’m currently testing.
So, new plan: tomorrow I’ll play some rounds with folks who aren’t on my local subnet and hopefully generate a new list of bugs to squash. With luck, that will be the last batch of big work between us and an Early Access launch.
2 notes
Text
Daily Networking Update (2016/02/25)
Closing in on the last few bugs! (of this batch...)
Today I fixed several bugs that were causing rendering artifacts on client machines - they weren’t networking-related per se, but were introduced while changing code paths during the integration of the new networking code. The remaining issues were all caused by improper access checking around some RPC calls, which have now been fixed up.
I also fixed two item-duplication bugs: one where dropping a gun accidentally spawned multiple guns, and one where not all clients properly registered the destruction of a gun pickup when a player used it.
Oh, and I fixed a bug where picked-up relics weren’t being equipped by the player; after digging in for a while, that turned out to be another security-model bug where the code was assuming the older, more relaxed model.
At this point the remaining bugs are ones I haven’t been able to reproduce, so tomorrow will probably be a lot of me firing up instances and trying to make them happen. If I can’t, it’ll be a good time to get some other players on board and see what happens when we put more miles between the nodes. Starting to feel closer to done!
1 note