darkstarcore · 24 days ago
Text
“I’m the greatest programmer ever” to “stupidest coding bug known to man” pipeline
nihilocrat · 8 years ago
Text
Mod Support in Unity
I wrote this for Ben Porter’s fun #NotGDC “event” on Twitter. I’m taking a year off from GDC but got ALL JELLY reading THE TWEETS about people saying smart things at GDC. If you’re wondering how I have any authority to talk about this subject, look no further than the Sky Rogue Steam Workshop!
Let's get this out of the way: adding mod support to your Unity game is a fucking chore.
In most games, with most engines, to modify game files you need to do the following:
Find the archive(s) containing game assets (this might not even be necessary)
Edit the assets
Replace the pre-existing archive(s), or if the game supports it, tell it to load yours instead of the main ones
This is not really possible in Unity, unfortunately. Why is that?
The tools to do this are scarce; here are ones I've found:
https://7daystodie.com/forums/showthread.php?22675-Unity-Assets-Bundle-Extractor
http://zenhax.com/viewtopic.php?t=36
The assets they extract are heavily optimized for particular platforms, so they're often not easily editable
So in the end, you need to add modding support yourself. Bizarrely, there is nothing on the asset store directly related to doing this for you. I SHOULD PROBABLY DO SOMETHING ABOUT THAT, SHOULDN’T I?
There are currently two dominant ways of adding this support to your game. Like with everything, each has pros and cons.
Roll your own asset management system (load assets directly from disk)
Pros:
It works exactly how you write it
Unless you really make a mess of it, it should be the easiest for modders to use. You might not even have to write documentation!
Cons:
You have to write it all yourself
This means writing asset importers yourself: mesh, texture, sound, script, everything
You have to write your own editor, or hope your modders are okay fiddling around with text files
Modders will be restricted by the boundaries you give them. Expanding those boundaries will always require more work on your part.
Use Asset Bundles
Pros:
You don't have to write your own importers!
You don't have to write your own editor!
Cons:
Your modders have to learn how to use Unity; from personal experience many are intimidated by it
You need to either write your own "glue" code or share practically all of your code
Sometimes they just don't work and I still haven’t figured out why
I can count the number of games doing this on one hand. Google will probably not help you when you need it.
ROLLING YOUR OWN
If you want to go this path, you should start doing it the moment you start making your game.
Unity has a few tools for importing assets from disk; they don't really seem intended for this but they work just fine.
Import PNG textures with Texture2D.LoadImage (https://docs.unity3d.com/ScriptReference/Texture2D.LoadImage.html)
Import .ogg audio with WWW.audioClip (https://docs.unity3d.com/ScriptReference/WWW-audioClip.html)
Beyond that, it's not hard to find an .OBJ importer for Unity; there are several to choose from. Some are slow. Some don't seem to work with all .OBJs.
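To make that concrete, here's a minimal sketch of those two built-in routes: decoding a PNG from a mod folder with Texture2D.LoadImage and pulling an .ogg in through WWW.audioClip. The folder name and file names are placeholders, not anything Sky Rogue actually uses:

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Minimal sketch: load mod textures and sounds straight from a folder on disk.
// "Mods/MyMod" and the file names are made-up examples.
public class ModAssetLoader : MonoBehaviour
{
    public string modFolder = "Mods/MyMod";

    // PNG (or JPG) bytes -> Texture2D
    public Texture2D LoadTexture(string fileName)
    {
        byte[] bytes = File.ReadAllBytes(Path.Combine(modFolder, fileName));
        Texture2D tex = new Texture2D(2, 2);   // dimensions get replaced by LoadImage
        tex.LoadImage(bytes);                  // decodes the image data into the texture
        return tex;
    }

    // .ogg -> AudioClip, using the WWW class with a file:// URL
    public IEnumerator LoadClip(string fileName, System.Action<AudioClip> onLoaded)
    {
        WWW www = new WWW("file://" + Path.Combine(modFolder, fileName));
        yield return www;                      // wait until the file has loaded
        onLoaded(www.audioClip);
    }
}
```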
You can use JSON, XML, or YAML for "gameplay" assets; several libraries exist for C#. Expect to only be able to serialize basic types like int, string, and lists out of the box, and you might have trouble saving and loading highly nested objects.
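For the data side, a "gameplay" asset could be a plain JSON file describing, say, a weapon, loaded here with Unity's built-in JsonUtility (just one of the options; the fields are invented for illustration):

```csharp
using System.IO;
using UnityEngine;

// Example of a data-driven "gameplay" asset: a weapon definition stored as JSON on disk.
// JsonUtility is just one option; note it only handles public fields of fairly simple types,
// which matches the limitations mentioned above. The fields are made-up examples.
[System.Serializable]
public class WeaponDef
{
    public string id;
    public int damage;
    public float fireRate;
    public string[] tags;
}

public static class WeaponLoader
{
    public static WeaponDef Load(string path)
    {
        return JsonUtility.FromJson<WeaponDef>(File.ReadAllText(path));
    }
}
```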
With these you can get started on your mod support. To reduce the friction between mod assets and Unity itself, you should probably make your own parallel GameObject / Component system that saves basic transform data and hierarchies, along with the components on an object and their values. You'll need to lean heavily on C#'s reflection abilities for this. If you really want to go "whole hog", implement your own game assets with this system too: not only does that let modders make total conversions of the game, it also forces you to constantly test the system and expand it as you need more functionality from it, simultaneously adding features for your modders.
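As a rough illustration (not how Sky Rogue actually does it; every type name here is invented), reflection lets you capture an object's transform and the public fields of its components into plain serializable data, ready to be written out in whatever format you picked above:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using UnityEngine;

// Rough sketch of a "parallel" object format: capture a GameObject's transform plus the
// public fields of each component as strings, ready to be saved as JSON/XML/YAML.
// ModComponentData / ModObjectData are hypothetical names, not a real Unity API.
[Serializable]
public class ModComponentData
{
    public string type;
    public List<string> fieldNames = new List<string>();
    public List<string> fieldValues = new List<string>();
}

[Serializable]
public class ModObjectData
{
    public Vector3 position;
    public Vector3 eulerAngles;
    public Vector3 scale;
    public List<ModComponentData> components = new List<ModComponentData>();
}

public static class ModObjectCapture
{
    public static ModObjectData Capture(GameObject go)
    {
        var data = new ModObjectData
        {
            position = go.transform.localPosition,
            eulerAngles = go.transform.localEulerAngles,
            scale = go.transform.localScale
        };

        foreach (MonoBehaviour comp in go.GetComponents<MonoBehaviour>())
        {
            var compData = new ModComponentData { type = comp.GetType().Name };
            foreach (FieldInfo field in comp.GetType().GetFields(BindingFlags.Public | BindingFlags.Instance))
            {
                object value = field.GetValue(comp);
                compData.fieldNames.Add(field.Name);
                compData.fieldValues.Add(value != null ? value.ToString() : "");
            }
            data.components.Add(compData);
        }
        return data;
    }
}
```

Loading is just the reverse: AddComponent by type name, parse the strings back into field values, and recurse over children if you want full hierarchies.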
ASSET BUNDLES
You’ll want to download the Asset Bundle Manager from the Asset Store (https://www.assetstore.unity3d.com/en/#!/content/45836) before you get started. Unfortunately, it was released before I started adding my own mod support and is intended to be used with a newer asset bundle API which I haven’t yet upgraded to, so I can’t give any specific advice or tutorials for it.
You get a LOT of things for free following this path, because your modders are using Unity itself to make the mods, and the bundles they export are supported by Unity out of the box. I’ve found it easier for modders if you distribute your SDK as a Unity project directory rather than as a .unitypackage: something they can open in Unity and get started with right away. The extra complexity of Unity itself means you’ll need to actually write documentation, with screenshots, to make it easier for more people to mod your game.
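For the game-side half, loading whatever bundles a modder dropped into a folder can be as simple as this sketch. The folder layout, the .unity3d extension, and the "modprefab" asset name are all assumptions, and it uses the plain synchronous AssetBundle API rather than the Asset Bundle Manager mentioned above:

```csharp
using System.IO;
using UnityEngine;

// Sketch of the game side: load every bundle found in a "Mods" folder next to the game
// and instantiate a known prefab from each one. Folder layout, file extension, and the
// "modprefab" asset name are all assumptions for illustration.
public class ModBundleLoader : MonoBehaviour
{
    void Start()
    {
        string modsDir = Path.Combine(Application.dataPath, "../Mods");
        if (!Directory.Exists(modsDir)) return;

        foreach (string bundlePath in Directory.GetFiles(modsDir, "*.unity3d"))
        {
            AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
            if (bundle == null) continue;   // sometimes they just don't load; see above

            GameObject prefab = bundle.LoadAsset<GameObject>("modprefab");
            if (prefab != null) Instantiate(prefab);
        }
    }
}
```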
However, you’ve got to create and distribute an “SDK” for your modders to load in Unity to create these bundles. You've got to share your scripts and potentially the source files of other game assets. I would like to point out that this is probably not as bad as it sounds; I doubt anyone would clone your entire game and sell it but I also don't want to be proven wrong. You can build all of your scripts into a .DLL, but without extra obfuscation, a decompiled C# .DLL is practically the same as the source code, without comments and whitespace.
There's a middle ground, but it makes your support a bit more complicated: Write a bunch of "empty" versions of your game scripts, just the ones which modders need to use to create their assets. By "empty" I mean they lack all of the functions and just have the public variables which get serialized by Unity and appear in the inspector. You'll need to write some glue code to convert the "empty" versions of components into real ones when you load in the bundles; use C#’s reflection to read from one component and write to the other.
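Here's a sketch of that glue code using a pair of invented components (ModShipStub, the gutted SDK version with only the inspector-visible fields, and Ship, the real in-game one), matching public fields by name via reflection:

```csharp
using System.Reflection;
using UnityEngine;

// Invented example components: the "empty" SDK stub and the real in-game version.
// (In a real project each MonoBehaviour would live in its own file.)
public class ModShipStub : MonoBehaviour { public float maxSpeed; public int hardpoints; }
public class Ship : MonoBehaviour { public float maxSpeed; public int hardpoints; /* ...all the real logic lives here... */ }

// Glue code: when a bundle object comes in, copy the stub's serialized values onto a
// freshly added real component, then throw the stub away.
public static class StubConverter
{
    public static void Convert(GameObject loadedFromBundle)
    {
        ModShipStub stub = loadedFromBundle.GetComponent<ModShipStub>();
        if (stub == null) return;

        Ship real = loadedFromBundle.AddComponent<Ship>();
        foreach (FieldInfo stubField in typeof(ModShipStub).GetFields(BindingFlags.Public | BindingFlags.Instance))
        {
            FieldInfo realField = typeof(Ship).GetField(stubField.Name, BindingFlags.Public | BindingFlags.Instance);
            if (realField != null && realField.FieldType == stubField.FieldType)
                realField.SetValue(real, stubField.GetValue(stub));
        }
        Object.Destroy(stub);   // the stub has done its job
    }
}
```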
If you want to support users writing their own C# scripts, you’ll need to do a little extra work, but thankfully Unity has documented that:
https://docs.unity3d.com/Manual/scriptsinassetbundles.html
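Here's a rough sketch of the approach that page describes (shipping modder code as a pre-compiled assembly stored in a TextAsset inside the bundle, then loading it back with reflection); the "modscripts" and "ModBehaviour" names are placeholders:

```csharp
using System;
using System.Reflection;
using UnityEngine;

// Rough sketch of loading modder-written code shipped inside an asset bundle as a
// pre-compiled assembly stored in a TextAsset. Asset and type names are made up.
public static class ModScriptLoader
{
    public static void LoadAndAttach(AssetBundle bundle, GameObject target)
    {
        TextAsset dllAsset = bundle.LoadAsset<TextAsset>("modscripts");
        if (dllAsset == null) return;

        Assembly modAssembly = Assembly.Load(dllAsset.bytes);   // load the raw .dll bytes
        Type behaviourType = modAssembly.GetType("ModBehaviour");
        if (behaviourType != null)
            target.AddComponent(behaviourType);                  // attach the modder's component
    }
}
```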
However, to actually let modders do things that operate with the scripts built into your game, you’ll need to either include those scripts in your SDK or create an API to do that; no small feat. I don’t have any personal experience doing this yet, but Cities: Skylines has such an API and you may be able to figure out what they’re doing by taking a look at their documentation:
http://www.skylineswiki.com/Modding_API
This is still a topic I’m actively developing for, and I’ve been comparing notes with other devs the whole time, so this isn’t comprehensive at all, but I hope it gives you a good starting point. As a bonus, here are some reasons why you might want to add modding support at all:
Mods add a lot of value to your game. If you have a heavily system-driven, simulationist, or sandbox game, mods practically create several sub-games for you. I bought the original Half-Life specifically so I could play the Counter-Strike mod (before it was a standalone game); this could become true for your game as well.
Mods grow your community. They give players a path to becoming heavily invested in the game and to adding a little bit of themselves to it. More investment means players talk about the game more; then people get invested in the mods themselves and talk about those. People talking about your game is what makes up your community.
Mods let you add licensed content to your game, by proxy, and not get sued. The Arwing from Star Fox looks pretty fitting in Sky Rogue. So does the Swordfish II from Cowboy Bebop. I couldn’t add either of these myself to the base game, and I’m really glad other people have. It sounds trite, but I’ve heard of players specifically buying the game because of these particular mods.
You can just dip your toe in and make some minor commitments for gains which can mean a lot: Guacamelee lets you customize the player avatar but nothing else.  The Long Dark crowdsources its localization by implementing each language as a community-created mod.
Modders will surprise you. They will make things you didn’t know were possible in your game. They will break your design and make stuff that’s cooler than you would have made because you’re in too deep. A bomber that can drop 100 bombs at once is broken as hell if you make it. If a modder makes it, it’s silly and fun and awesome.
faizrashis1995 · 5 years ago
Text
What’s After the MEAN Stack?
Introduction
We reach for software stacks to simplify the endless sea of choices. The MEAN stack is one such simplification that worked very well in its time. Though the MEAN stack was great for the last generation, we need more; in particular, more scalability. The components of the MEAN stack haven’t aged well, and our appetite for cloud-native infrastructure requires a more mature approach. We need an updated, cloud-native stack that can scale boundlessly to deliver the superior experiences our users expect.
 Stacks
When we look at software, we can easily get overwhelmed by the complexity of architectures or the variety of choices. Should I base my system on Python?  Or is Go a better choice? Should I use the same tools as last time? Or should I experiment with the latest hipster toolchain? These questions and more stymie both seasoned and newbie developers and architects.
 Some patterns emerged early on that help developers quickly provision a web property to get started with known-good tools. One way to do this is to gather technologies that work well together in “stacks.” A “stack” is not a prescriptive validation metric, but rather a guideline for choosing and integrating components of a web property. The stack often identifies the OS, the database, the web server, and the server-side programming language.
 In the earliest days, the famous stacks were the “LAMP-stack” and the “Microsoft-stack”. The LAMP stack represents Linux, Apache, MySQL, and PHP or Python. LAMP is an acronym of these product names. All the components of the LAMP stack are open source (though some of the technologies have commercial versions), so one can use them completely for free. The only direct cost to the developer is the time to build the experiment.
 The “Microsoft stack” includes Windows Server, SQL Server, IIS (Internet Information Services), and ASP (90s) or ASP.NET (2000s+). All these products are tested and sold together.
 Stacks such as these help us get started quickly. They liberate us from decision fatigue, so we can focus instead on the dreams of our start-up, or the business problems before us, or the delivery needs of internal and external stakeholders. We choose a stack, such as LAMP or the Microsoft stack, to save time.
 In each of these two example legacy stacks, we’re producing web properties. So no matter what programming language we choose, the end result of a browser’s web request is HTML, JavaScript, and CSS delivered to the browser. HTML provides the content, CSS makes it pretty, and in the early days, JavaScript was the quick form-validation experience. On the server, we use the programming language to combine HTML templates with business data to produce rendered HTML delivered to the browser.
 We can think of this much like mail merge: take a Word document with replaceable fields like first and last name, add an excel file with columns for each field, and the engine produces a file for each row in the sheet.
 As browsers evolved and JavaScript engines were tuned, JavaScript became powerful enough to make real-time, thick-client interfaces in the browser. Early examples of this kind of web application are Facebook and Google Maps.
 These immersive experiences don’t require navigating to a fresh page on every button click. Instead, we could dynamically update the app as other users created content, or when the user clicks buttons in the browser. With these new capabilities, a new stack was born: the MEAN stack.
 What is the MEAN Stack?
The MEAN stack was the first stack to acknowledge the browser-based thick client. Applications built on the MEAN stack primarily have user experience elements built in JavaScript and running continuously in the browser. We can navigate the experiences by opening and closing items, or by swiping or drilling into things. The old full-page refresh is gone.
 The MEAN stack includes MongoDB, Express.js, Angular.js, and Node.js. MEAN is the acronym of these products. The back-end application uses MongoDB to store its data as binary-encoded JavaScript Object Notation (JSON) documents. Node.js is the JavaScript runtime environment, allowing you to do backend, as well as frontend, programming in JavaScript. Express.js is the back-end web application framework running on top of Node.js. And Angular.js is the front-end web application framework, running your JavaScript code in the user’s browser. This allows your application UI to be fully dynamic.
 Unlike previous stacks, both the programming language and operating system aren’t specified, and for the first time, both the server framework and browser-based client framework are specified.
 In the MEAN stack, MongoDB is the data store. MongoDB is a NoSQL database, making a stark departure from the SQL-based systems in previous stacks. With a document database, there are no joins, no schema, no ACID compliance, and no transactions. What document databases offer is the ability to store data as JSON, which easily serializes from the business objects already used in the application. We no longer have to dissect the JSON objects into third normal form to persist the data, nor collect and rehydrate the objects from disparate tables to reproduce the view.
 The MEAN stack webserver is Node.js, a thin wrapper around Chrome’s V8 JavaScript engine that adds TCP sockets and file I/O. Unlike previous generations’ web servers, Node.js was designed in the age of multi-core processors and millions of requests. As a result, Node.js is asynchronous to a fault, easily handling intense, I/O-bound workloads. The programming API is a simple wrapper around a TCP socket.
 In the MEAN stack, JavaScript is the name of the game. Express.js is the server-side framework offering an MVC-like experience in JavaScript. Angular (now known as Angular.js or Angular 1) allows for simple data binding to HTML snippets. With JavaScript both on the server and on the client, there is less context switching when building features. Though the specific features of Express.js’s and Angular.js’s frameworks are quite different, one can be productive in each with little cross-training, and there are some ways to share code between the systems.
 The MEAN stack rallied a web generation of start-ups and hobbyists. Since all the products are free and open-source, one can get started for only the cost of one’s time. Since everything is based in JavaScript, there are fewer concepts to learn before one is productive. When the MEAN stack was introduced, these thick-client browser apps were fresh and new, and the back-end system was fast enough, for new applications, that database durability and database performance seemed less of a concern.
 The Fall of the MEAN Stack
The MEAN stack was good for its time, but a lot has happened since. Here’s an overly brief history of the fall of the MEAN stack, one component at a time.
 Mongo got a real bad rap for data durability. In one Mongo meme, it was suggested that Mongo might implement the PLEASE keyword to improve the likelihood that data would be persisted correctly and durably. (A quick squint, and you can imagine the XKCD comic about “sudo make me a sandwich.”) Mongo also lacks native SQL support, making data retrieval slower and less efficient.
 Express is aging, but it is still the de facto standard for Node web apps and APIs. Many of the modern frameworks — both MVC-based and Sinatra-inspired — still build on top of Express. Express could do well to move from callbacks to promises, and better handle async and await, but sadly, Express 5 alpha hasn’t moved in more than a year.
 Angular.js (1.x) was rewritten from scratch as Angular (2+). Arguably, the two products are so dissimilar that they should have been named differently. In the confusion as the Angular reboot was taking shape, there was a very unfortunate presentation at an Angular conference.
 The talk was meant to be funny, but it was not taken that way. It showed headstones for many of the core Angular.js concepts, and sought to highlight how the presenters were designing a much easier system in the new Angular.
 Sadly, this message landed really wrong. Much like the backlash to the Visual Basic .NET plans that the community derisively termed “Visual Fred,” the Angular community was outraged. The core tenets they trusted every day for building highly interactive and profitable apps were being thrown away, and the new system wouldn’t be ready for a long time. Much of the community moved on to React, and now Angular is struggling to stay relevant. Arguably, Angular’s failure here was the biggest factor in React’s success — much more so than any React initiative or feature.
 Nowadays many languages’ frameworks have caught up to the lean, multi-core experience pioneered in Node and Express. ASP.NET Core brings a similarly light-weight experience, and was built on top of libuv, the OS-agnostic socket framework, the same way Node was. Flask has brought light-weight web apps to Python. Ruby on Rails is one way to get started quickly. Spring Boot brought similar microservices concepts to Java. These back-end frameworks aren’t JavaScript, so there is more context switching, but their performance is no longer a barrier, and strongly-typed languages are becoming more in vogue.
 As a further deterioration of the MEAN stack, there are now frameworks named “mean,” including mean.io, meanjs.org, and others. These products seek to capitalize on the popularity of the “MEAN” term. Sometimes they offer more options on top of the original MEAN products, sometimes scaffolding to get started faster, and sometimes they merely look to cash in on the SEO value of the term.
 With MEAN losing its edge, many other stacks and methodologies have emerged.
 The JAM Stack
The JAM stack is the next evolution of the MEAN stack. The JAM stack includes JavaScript, APIs, and Markup. In this stack, the back-end isn’t specified: neither the web server, the back-end language, nor the database.
 In the JAM stack, we use JavaScript to build a thick client in the browser; it calls APIs and mashes the data with Markup — likely the same HTML templates we would have built in the MEAN stack. The JavaScript frameworks have evolved as well. The new top contenders are React, Vue.js, and Angular, with additional players in Svelte, Aurelia, Ember, Meteor, and many others.
 The frameworks have mostly standardized on common concepts like the virtual DOM, one-way data binding, and web components. Each framework then combines these concepts with the opinions and styles of its author.
 The JAM stack focuses exclusively on the thick-client browser environment, merely giving a nod to the APIs, as if magic happens behind there. This has given rise to backend-as-a-service products like Firebase, and API innovations beyond REST including gRPC and GraphQL. But, just as legacy stacks ignored the browser thick-client, the JAM stack marginalizes the backend, to our detriment.
 Maturing Application Architecture
As the web and the cloud have matured, as system architects, we have also matured in our thoughts of how to design web properties.
 As technology has progressed, we’ve gotten much better at building highly scalable systems. Microservices offer a much different application model where simple pieces are arranged into a mesh. Containers offer ephemeral hardware that’s easy to spin up and replace, leading to utility computing.
 As consumers and business users of systems, we almost take for granted that a system will be always on and infinitely scalable. We don’t even consider the complexity of geo-replication of data or latency of trans-continental communication. If we need to wait more than a second or two, we move onto the next product or the next task.
 With these maturing tastes, we now take for granted that an application can handle near infinite load without degradation to users, and that features can be upgraded and replaced without downtime. Imagine the absurdity if Google Maps went down every day at 10 pm so they could upgrade the system, or if Facebook went down if a million people or more posted at the same time.
 We now take for granted that our applications can scale, and the naive LAMP and MEAN stacks are no longer relevant.
 Characteristics of the Modern Stack
What does the modern stack look like? What are the elements of a modern system? I propose that a modern system is cloud-native, utility-billed, infinitely scalable, and low-latency; it uses machine learning to surface relevant information; it stores and processes disparate data types and sources; and it delivers personalized results to each user. Let’s dig into these concepts.
 A modern system allows boundless scale. As a business user, I can’t handle if my system gets slow when we add more users. If the site goes viral, it needs to continue serving requests, and if the site is seasonally slow, we need to turn down the spend to match revenue. Utility billing and cloud-native scale offers this opportunity. Mounds of hardware are available for us to scale into immediately upon request. If we design stateless, distributed systems, additional load doesn’t produce latency issues.
 A modern system processes disparate data types and sources. Our systems produce logs of unstructured system behavior and failures. Events from sensors and user activity flood in as huge amounts of time-series events. Users produce transactions by placing orders or requesting services. And the product catalog or news feed is a library of documents that must be rendered completely and quickly. As users and stakeholders consume the system’s features, they don’t want or need to know how this data is stored or processed. They need only see that it’s available, searchable, and consumable.
 A modern system produces relevant information. In the world of big data, and even bigger compute capacity, it’s our task to give users relevant information from all sources. Machine learning models can identify trends in data, suggesting related activities or purchases, delivering relevant, real-time results to users. Just as easily, these models can detect outlier activities that suggest fraud. As we gain trust in the insights gained from these real-time analytics, we can empower the machines to make decisions that deliver real business value to our organization.
 MemSQL is the Modern Stack’s Database
Whether you choose to build your web properties in Java or C#, in Python or Go, in Ruby or JavaScript, you need a data store that can elastically and boundlessly scale with your application. One that solves the problems that Mongo ran into – that scales effortlessly, and that meets ACID guarantees for data durability.
 We also need a database that supports the SQL standard for data retrieval. This brings two benefits: a SQL database “plays well with others,” supporting the vast number of tools out there that interface to SQL, as well as the vast number of developers and sophisticated end users who know SQL code. The decades of work that have gone into honing the efficiency of SQL implementations is also worth tapping into.
 These requirements have called forth a new class of databases, which go by a variety of names; we will use the term NewSQL here. A NewSQL database is distributed, like Mongo, but meets ACID guarantees, providing durability, along with support for SQL. CockroachDB and Google Spanner are examples of NewSQL databases.
 We believe that MemSQL brings the best SQL, distributed, and cloud-native story to the table. At the core of MemSQL is the distributed database. In the database’s control plane is a master node and other aggregator nodes responsible for splitting the query across leaf nodes, and combining the results into deterministic data sets. ACID-compliant transactions ensure each update is durably committed to the data partitions, and available for subsequent requests. In-memory skiplists speed up seeking and querying data, and completely avoid data locks.
 MemSQL Helios delivers the same boundless scale engine as a managed service in the cloud. No longer do you need to provision additional hardware or carve out VMs. Merely drag a slider up or down to ensure the capacity you need is available.
 MemSQL is able to ingest data from Kafka streams, from S3 buckets of data stored in JSON, CSV, and other formats, and deliver the data into place without interrupting real-time analytical queries. Native transforms allow shelling out into any process to transform or augment the data, such as calling into a Spark ML model.
 MemSQL stores relational data, stores document data in JSON columns, and provides time-series windowing functions. It allows for super-fast in-memory rowstore tables snapshotted to disk, as well as disk-based columnstore data that is heavily cached in memory.
 As we craft the modern app stack, include MemSQL as your durable, boundless cloud-native data store of choice.
 Conclusion
Stacks have allowed us to simplify the sea of choices to a few packages known to work well together. The MEAN stack was one such toolchain that allowed developers to focus less on infrastructure choices and more on developing business value.
 Sadly, the MEAN stack hasn’t aged well. We’ve moved on to the JAM stack, but this ignores the back-end completely.
 As our tastes have matured, we assume more from our infrastructure. We need a cloud-native stack that can scale boundlessly, because our users expect us to deliver superior experiences. Try MemSQL for free today, or contact us for a personalized demo.
[Source] https://www.memsql.com/blog/whats-after-the-mean-stack/
62 Hours Mean Stack Developer Training includes MongoDB, JavaScript, AngularJS, Node JS, and live project development. Demo Mean Stack training available.