Wizcorp is a Tokyo-based game production company focused on providing complete development services.
Wizcorp now part of Keywords Studios
Ref: https://www.londonstockexchange.com/exchange/news/market-news/market-news-detail/KWS/14047491.html
On April 18, 2019, Keywords Studios acquired Wizcorp Inc.
This acquisition is the culmination of thorough, strategic discussions during which Wizcorp's management team got to know and interact with key members of Keywords Studios. We are extremely happy to be joining such a knowledgeable and experienced team, and we believe this partnership will turn Wizcorp and Keywords into a powerhouse in Japan.
Keywords Studios provides a variety of technical services to the global gaming and entertainment industry. Their service lines include Art, Audio, Localization, Localization QA, Functionality QA, Engineering and Player Support. The company currently has 50 studios in 20 countries on 4 continents and employs over 5,000 people. Their services are used by the top 25 gaming companies and by 8 of the top 10 mobile game publishers by revenue.
This acquisition is a match made in heaven: Wizcorp will give Keywords a new level of access to the Japanese market, while Keywords will enable Wizcorp to outsource portions of larger projects while remaining competitive.
More importantly, we are joining a group of like-minded studios and individuals who value their customers as much as we do. We believe this cultural kinship will help us feel right at home within Keywords Studios.
We would like to thank Chris Kennedy, Managing Director for Asia, for working with us throughout the acquisition and giving us an introduction to the world of Keywords Studios.
This is the beginning of a new adventure for us, and we are looking forward to working with everyone!
Tokyo Game Show 2018
For game studios and publishers, not just in Japan but worldwide, the Tokyo Game Show is perhaps the industry's grandest showcase, generating the most hype for new titles, new hardware, new companies, and new artists.

A four-day spectacle, the event begins on Thursday and doesn't end until Sunday, with the first two days reserved for patrons with business or press connections to the gaming industry. This year, the event ran from September 20th to 23rd. The general public can attend too, but be warned: show up early. If you don't queue up very early in the morning for tickets, you probably won't get a chance to play anything by the time noon comes around.
For a representative of a game studio or publisher, though, the Tokyo Game Show offers more than just a look at the newest releases. It's an opportunity to connect with other studios and, in particular, with potential talent we'd like to hire or work with in the future.
Personally, I came with three objectives:
1. Check out the latest developments in AR (in particular, libraries for detecting and analyzing hand and finger motion)
2. Follow up with students I have kept in contact with, and have them demonstrate their work and projects to me. In particular, there was one demo from Tokyo Net Wave using hand recognition that I was quite impressed with.
3. And in a non-work role, on the Saturday general admission day, visit the design schools' exhibits with my daughter (herself a student of computer illustration and animation) to look over the portfolios and student work on display -- usually done with Maya and Photoshop.
In addition to the big sections of the Makuhari Messe halls devoted to works from other countries (especially Southeast Asia, Eastern Europe, and South America) and to indie and amateur game development, there is a large hall dedicated to educational institutions, from full-size universities to specialized vocational schools. These institutions offer education not just in game programming and engineering, but also in the fine arts and design necessary for any high-quality work.
It's a treat perusing the students' portfolios and works on display, and catching up with the career guidance representatives at each school.
In addition to standard recruiting, Wizcorp actively works with many of these schools: we attend their career fairs and their students attend our “Wizard Academy” internship program. Alumni of the Wizard Academy have ended up becoming wizards here in our building in Higashi-Nihonbashi, and we expect many more to join us.
These schools in Japan are not limited to Japanese students, nor are they limited to pure programmers. Here at Wizcorp we are international, with a mix of Japanese and non-Japanese staff: engineers, computer animators, and artists.
The Tokyo Game Show is a place to see not just the latest AAA games and the hardware for the next console. It’s a place where budding stars and those with dreams of making their own games come to find where to learn those skills, and possibly meet the company that will enable their dreams.
If you’re interested in learning more about the Wizard Academy, please check out our web page.
Wizard Academy Summer 2018 at Wizcorp
Every year Wizcorp holds an internship program called Wizard Academy, for young programmers who want to learn game development and practice new skills. The format is similar to an intensive training camp, where participants attend several lectures and practice by making an HTML5 sample game.

This year, Wizard Academy Summer 2018 ran from August 20th to 31st, and we welcomed six motivated interns at our headquarters. The theme was to make a shooting game, with a strong emphasis on the creative part of the project.
We prepared a two-week curriculum: 2 days of lectures on game development and game design concepts, as well as on the tools participants would use to make the game, followed by 8 days of actual game development.
The tools were Visual Studio Code, Git, JavaScript & TypeScript, and Phaser.
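To give a feel for the kind of starting point the interns worked from, here is a minimal sketch of a Phaser scene written in TypeScript. This assumes Phaser 3; the asset path and sizes are illustrative and not taken from the actual intern projects.

import Phaser from 'phaser';

// A tiny shooting-game skeleton: a ship that moves left and right with the arrow keys.
class ShooterScene extends Phaser.Scene {
  private ship!: Phaser.GameObjects.Image;
  private left!: Phaser.Input.Keyboard.Key;
  private right!: Phaser.Input.Keyboard.Key;

  preload() {
    this.load.image('ship', 'assets/ship.png'); // placeholder sprite
  }

  create() {
    this.ship = this.add.image(400, 550, 'ship');
    this.left = this.input.keyboard.addKey(Phaser.Input.Keyboard.KeyCodes.LEFT);
    this.right = this.input.keyboard.addKey(Phaser.Input.Keyboard.KeyCodes.RIGHT);
  }

  update() {
    if (this.left.isDown) this.ship.x -= 4;
    if (this.right.isDown) this.ship.x += 4;
  }
}

new Phaser.Game({ type: Phaser.AUTO, width: 800, height: 600, scene: ShooterScene });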
Our six participants were split into two groups and worked as teams, learning not only from our professional IT staff but also from each other. There was no competition between the two teams; the atmosphere was always hard-working and friendly.

Here are the amazing results from these 2 weeks of hard work from our interns:
Game title: Good Witch by Co-op creators team
Play here: https://wizcorp.github.io/wizard-academy-2018-co-op-creators/
Game title: Hungry Cat by Flying Cats team
Play here: https://wizcorp.github.io/wizard-academy-2018-flying-cats/
Like last year, we held a closing ceremony so the interns could present their games and celebrate their achievements!
We would like to warmly thank all our team members and participants for making this Wizard Academy Summer 2018 internship program a great experience! お疲れ様です! (Thank you for your hard work!)

Feel free to contact us if you are interested in joining our next Wizard Academy in 2019!
The Wizcorp Team
#game development#html5#game#JavaScript#Git#programming#training#internship#wizard academy#summer 2018#wizcorp
[Fix inside] Why Windows Explorer goes funky in folders with TypeScript files
If you're doing TypeScript development on Windows, you may have noticed that some folders hang the file explorer, as shown below.
This can be annoying: if you have a slower machine and there are a lot of files in a given folder, the Explorer may hang for several seconds when you visit it for the first time.
So why does Windows start showing the folder as usual, block for a second on every file, and then decide to switch the view to thumbnails?
A little explanation on how the Explorer works internally is in order here.
The file explorer has several so-called "kinds" of folders, which configure how files are shown by default. For each of these kinds, you can override which layout you want. By default since Windows 7, general items use the "Details" view as shown above, while Pictures and Videos use large thumbnails, and Music uses a variation of the "Details" layout with ID tags.
The kind of folder is auto-determined based on its contents. For optimization purposes, Windows tries to delay determining the appropriate kind for a folder until you open it for the first time.
So, in our situation, Windows, after some struggle, determines that this folder contains videos.
Which is technically true: .ts files, before being TypeScript files, are MPEG transport stream files, used for digital video and supported natively by Windows. Our TypeScript files look like video files to Windows, but they are not, so when the video handler tries to parse them it hangs for a while, causing the delay. The folder nevertheless gets classified as a video folder because it contains a majority of those files.
The solution then?
If this annoys you, here is a completely safe fix. Open the registry editor, navigate to Computer\HKEY_CLASSES_ROOT\.ts (tip: once regedit.exe is open, press Ctrl+L to focus the address bar, paste the path and press Enter), double-click the PerceivedType value and replace video with text.
This will stop the madness for future folders. You may then want to go ahead and set a different default program for those files, such as Sublime Text. You can also dig further and replace the Content Type and (Default) values with text/plain and txtfile respectively, for example. Look at the .txt key just below to see how a simple text file is configured on Windows.
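For reference, the same changes can be captured in a small .reg file and merged in one go. This is only a sketch of the values described above (back up the original key first, as mentioned below):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.ts]
"PerceivedType"="text"
"Content Type"="text/plain"
@="txtfile"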
Avoid touching anything else without prior research. In case you have messed things up, you can get a backup of the original, unmodified .ts key here. Download it, rename it to add the .reg extension, and double-click it to merge it back.
Note: in case some folders remain shown as videos, you can reset the Explorer's thumbnail cache by running Disk Cleanup (My Computer -> right-click on <My Drive (C:)> -> Disk Cleanup -> make sure Thumbnails is ticked and click OK).
10 years

Ten years ago today, Wizcorp was born with a vision of applying cutting-edge software technologies to create innovative user experiences and products. We have come a long way since then, and faced many challenges, but we wouldn't have become who we are without the support of our customers and our passionate members.
We would like to thank everyone who has been a part of our journey, and we look forward to your continued support. Let's make the next 10 years together as exciting as the previous 10, if not more so!
Unite 2018 report
Introduction
A few Wizcorp engineers participated in Unite Tokyo 2018 in order to learn more about the future of Unity and how to use it for our future projects. Unite is a 3-day event held by Unity in several major cities, including Seoul, San Francisco and Tokyo. It takes the form of talks given by Unity employees from around the globe, who give insight into existing or upcoming technologies and teach people about them. You can find more information about Unite here.
In retrospect, here is a summary of what we learned or found exciting, and what could be useful for the future of Wizcorp.
First day highlights
The presentation on ProBuilder was very interesting. It showed how to quickly build levels, in a way similar to Tomb Raider for example: you can use blocks and slopes, snap them to a grid, quickly add prefabs, and test everything without leaving the editor, speeding up the development process tremendously.
They also gave a presentation on ShaderGraph. You may already be aware of it, but in case you're not, it's worth checking out.
They talked about the lightweight pipeline, which provides a new modular architecture for Unity, with the goal of getting it to run on smaller devices. In our case, that means we could get a web build as small as 72 kilobytes! If it delivers as expected (end of 2018), it may seriously challenge the need to stick to web technologies.
They showed a playable web ad that loads and plays within one second over wifi. It then drives the player to the App Store. They think that this is a better way to advertise your game.
They have a new tool set for the automotive industry, allowing you to make very good-looking simulations with models of real cars.
They are running Unity Hack Week events around the globe. Check them out if you are not aware of them.
They introduced the Burst compiler, which aims to take advantage of multi-core processors and generates code with math and vector floating-point units in mind, optimizing for the target hardware and providing substantial runtime performance improvements.
They presented improvements in the field of AR, for example a game that plays out on a sheet of paper you hold in your hand.
Anime-style rendering
They presented the processes they use in Unity to get as close as possible to anime-style rendering, and the result was very interesting. Nothing is rocket science, though; it mostly consists of effects you would use in other games, such as full-screen distortion, blur, bloom, compositing on an HDR buffer, cloud shading, a weather system built on fog, skybox color configuration and fiddling with the character lighting volume.
Optimization of mobile games by Bandai Namco
In Idolmaster, a typical stage scene has only 15k polygons, and a character has a little more than that. For performance, they fit the entire stage texture into a single 1024x1024 texture.
For post-processing, they use DoF, bloom, blur and flare, with 1280x720 as the reference resolution (with MSAA).
The project started as an experiment in April 2016, began official development in January 2017, and was released on June 29th of the same year.
They mentioned taking care to minimize draw calls and SetPass calls.
They use texture atlases with indexed vertex buffers to reduce memory usage and improve performance.
They used the Snapdragon Profiler to optimize for the target platforms, following an iterative approach: try, improve, try again, and stop when it's good enough.
One of the big challenges was handling live stages with 13 characters at once (lots of polygons and data).
Unity profiling and performance improvements
This presentation was given by someone who audits commercial games and helps teams improve performance or fix bugs.
http://github.com/MarkUnity/AssetAuditor
Mipmaps add 33% to texture size; avoid them where they are not needed.
Enabling read/write in a texture asset always adds 50% to the texture size since it needs to remain in main memory. Same for meshes.
Vertex compression (in player settings) just uses half precision floating points for vertices.
Play with animation compression settings.
ETC Crunch textures are decrunched on the CPU, so be careful about the additional load.
Beware of animation culling: when offscreen, culled animations are not processed (as if disabled), and with non-deterministic animations this means that when one is enabled again, everything that would have happened while it was disabled has to be computed, which may create a huge CPU spike (this can also happen when disabling and then re-enabling an object).
Presentation of Little Champions
Looks like a nice game.
It was started on Unity 5.x and then ported to Unity 2017.x.
They do their own custom physics processes, by using WaitForFixedUpdate from within FixedUpdate. The OnTriggerXXX and OnCollisionXXX handlers are called afterwards.
They have a very nice level editor for iPad that they used during development. They say it was the key to creating good puzzle levels: test them quickly, fix, and try again, all on the device the game will actually run on.
Machine learning
A very interesting presentation that showed how to teach a computer to play a simple Wipeout clone. It was probably the simplest setup you could get (you only steer left or right, and watch out for walls using 8 ray casts).
I enthusiastically suggest reading about machine learning yourself, since there isn't room in this small article for a full explanation of the concepts covered there. The presenter was excellent, though.
Some concepts:
There are two training methods: reinforcement learning (learning through rewards, trial and error, and super-fast simulation, so that the agent becomes "mathematically optimal" at the task) and imitation learning (learning through demonstrations, as humans do, without rewards, requiring real-time interaction).
You can also use cooperative agents (one brain -- the teacher, and two agents -- like players, or hands -- playing together towards a given goal).
Learning environment: Agent <- Brain <- Academy <- Tensorflow (for training AIs).
Timeline
Timeline is a plugin for Unity that is designed to create animations that manipulate the entire scene based on time, a bit like Adobe Premiere™.
It consists of tracks, with clips that animate properties (a bit like the default animation system). It's very similar but adds a lot of features aimed more towards creating movies (typically for cut scenes). For example, animations can blend into each other.
The demo he showed us was very interesting: he used Timeline to create an entire RTS game.
Every section was scripted (enemy reactions, cut scenes, etc.), and based on conditions the playhead would move to and execute the appropriate section of scripted gameplay.
He also showed a visual-novel-like system (which waits for input before proceeding).
He also showed a space shooter. The movement and patterns of bullets and enemies, then waves and full levels, were made into tracks, and those tracks were combined at the appropriate hierarchical level.
Ideas of use for Timeline: rhythm game, endless runner, …
On a personal note, I like his approach: he gave himself one week to try creating a game using this technology as much as possible, so he could see what it's worth.
What was interesting (and hard to summarize in a few lines here, but I recommend checking it out) is that he sometimes uses Timeline to dictate the gameplay and sometimes the other way around. Used wisely, it can be a great game design tool for quickly building a prototype.
Timeline is able to instantiate objects, read scriptable objects and is very extensible.
It can also be used by programmers or game designers to quickly create the "scaffolding" of a scene and hand it to the artists and designers, instead of having them guess how long each clip should take, etc.
Another interesting feature of Timeline is the ability to start or resume at any point very easily. Very handy in the case of the space shooter to test difficulty and level transitions for instance.
He suggested downloading "Default Playables" from the Asset Store to get started with Timeline.
Cygames: optimization for mid-range devices
Features they used
Sun shaft
Lens flare (using Unity's collision feature to determine occlusion; it was a challenge to set colliders properly on all appropriate objects, including, for example, the fingers of a hand)
Tilt shift (not very convincing, just using the depth information to blur in post processing)
Toon rendering
They rewrote the lighting pipeline entirely and packed various maps (like the normal map) into the environment maps.
They presented when ETC2 is preferable to ETC: it basically reduces color banding, but it takes more time to compress at the same quality and is not supported on older devices, which is why they chose not to use it until recently.
Other than that, they mentioned various techniques they used on the server side to ensure a good framerate and responsiveness. They also mentioned that they dedicated a machine with a 500 GB hard drive just to the Unity Cache Server.
Progressive lightmapper
The presentation was about their progress on the new lightmapper engine, which was already shown in a video some time ago (link below). This time, the presenter applied it to a small game he was making with a sort of toon-shaded environment, showing the effect of the different parameters and the power of the new lighting engine.
A video: https://www.youtube.com/watch?v=cRFwzf4BHvA
This has to be enabled in the Player Settings (instead of the Enlighten engine).
The big news is that lighting now shows up directly in the editor (instead of having to start the game, wait for Unity to bake it, etc.).
The scene is initially displayed without lighting, and little by little, as results become available, textures are updated with baked light information. You can continue to work in the meantime.
The Prioritize View option bakes what's visible in the camera viewport first (good for productivity, and it works just as you'd expect).
He explained some of the parameters that come into play when selecting the best trade-off between quality and bake speed:
Direct samples: rays cast from a texel (a pixel on the lightmap texture) to all the lights; if a ray reaches a light the texel is lit, if it is blocked it is not.
Indirect samples: rays that bounce (e.g. emitted from the ground, bouncing off an object, then off the skybox).
Bounces: 1 should be enough for very open scenes; otherwise you might need more (indoors, etc.).
Filtering smooths out the result of the bake, giving a somewhat cartoonish look.
They added the A-Trous blur method (preserves edges and AO).
Be careful with UV charts, which control how Unity divides objects (based on their normals, so each face of a cube would be in a different UV chart, for example); lighting stops at the end of a chart, creating a hard edge. More UV charts = a more "faceted" render (like low-poly). Note that with a very large number of UV charts, the object looks round again, because the filtering blurs everything.
Mixed modes: normally, lights are either realtime or baked; mixed lighting combines both.
There are 3 modes: Subtractive (shadows are subtracted using a single color; can look out of place), Shadowmask (shadows are baked into separate lightmaps so they can be recolored; still fast and flexible), and the most expensive one, where everything is done dynamically (useful for a sunlight cycle, for example). Distance Shadowmask is a variant that uses dynamic shadows only for objects close to the camera and baked lightmaps otherwise.
The new C# Job system
https://unity3d.com/unity/features/job-system-ECS ← available from Unity 2018.1, along with the new .NET 4.x.
They are slowly bringing entity/component concepts into Unity.
Eventually they'll phase out the GameObject, which is too central and forces too much work to be single-threaded.
They explained why they made this choice:
Let's take a list of GameObjects, each with a Transform, a Collider and a RigidBody. Those components are laid out in memory sequentially, object by object. A Transform is actually a lot of properties, so accessing only a few of the properties of the Transform across many objects (like particles) is inefficient in terms of cache accesses.
With the entity/component system, you declare which members you are accessing, and the memory layout can be optimized for that. It can also be multi-threaded properly. All of this is combined with the new Burst compiler, which generates more performant code for the target hardware.
Entities don't appear in the hierarchy the way GameObjects do.
In his demo, he managed to display 80,000 snowflakes in the editor instead of 13,000.
Here is some example code:
// Namespaces as of the 2018 ECS preview packages used in the demo.
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;
using Unity.Mathematics;
using Unity.Rendering;
using Unity.Transforms;
using UnityEngine;

public struct SnowflakeData : IComponentData
{
    public float FallSpeedValue;
    public float RotationSpeedValue;
}

public class SnowflakeSystem : JobComponentSystem
{
    private struct SnowMoveJob : IJobProcessComponentData<Position, Rotation, SnowflakeData>
    {
        public float DeltaTime;

        public void Execute(ref Position pos, ref Rotation rot, ref SnowflakeData data)
        {
            pos.Value.y -= data.FallSpeedValue * DeltaTime;
            // axisAngle comes from the Unity.Mathematics preview package used in the demo.
            rot.Value = math.mul(
                math.normalize(rot.Value),
                math.axisAngle(math.up(), data.RotationSpeedValue * DeltaTime));
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        var job = new SnowMoveJob { DeltaTime = Time.deltaTime };
        return job.Schedule(this, 64, inputDeps);
    }
}

public class SnowflakeManager : MonoBehaviour
{
    public int FlakesToSpawn = 1000;
    public static EntityArchetype SnowFlakeArch;

    // Presenter's assets and UI variables (assigned in the original demo).
    public Material SnowflakeMat;
    public UnityEngine.UI.Text EntityDisplayText;
    private int numberOfSnowflakes;

    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
    public static void Initialize()
    {
        var entityManager = World.Active.GetOrCreateManager<EntityManager>();
        SnowFlakeArch = entityManager.CreateArchetype(
            typeof(Position),
            typeof(Rotation),
            typeof(MeshInstanceRenderer),
            typeof(TransformMatrix));
    }

    void Start()
    {
        SpawnSnow();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            SpawnSnow();
        }
    }

    void SpawnSnow()
    {
        var entityManager = World.Active.GetOrCreateManager<EntityManager>();

        // Temporary allocation, so that we can dispose of it right after spawning.
        NativeArray<Entity> snowFlakes = new NativeArray<Entity>(FlakesToSpawn, Allocator.Temp);
        entityManager.CreateEntity(SnowFlakeArch, snowFlakes);

        for (int i = 0; i < FlakesToSpawn; i++)
        {
            // RandomPosition and RandomFallSpeed are helpers written by the presenter.
            entityManager.SetComponentData(snowFlakes[i], new Position { Value = RandomPosition() });
            entityManager.SetSharedComponentData(snowFlakes[i], new MeshInstanceRenderer { material = SnowflakeMat, ... });
            entityManager.AddComponentData(snowFlakes[i], new SnowflakeData
            {
                FallSpeedValue = RandomFallSpeed(),
                RotationSpeedValue = RandomFallSpeed()
            });
        }

        // Dispose of the temporary array.
        snowFlakes.Dispose();

        // Update the UI counter (variables made by the presenter).
        numberOfSnowflakes += FlakesToSpawn;
        EntityDisplayText.text = numberOfSnowflakes.ToString();
    }
}
Conclusion
We hope you enjoyed reading this little summary of some of the presentations we attended.
As a general note, I would say that Unite is an event aimed at hardcore Unity fans. There is some time between sessions for networking with Unity engineers (who come from all around the world), and not many beginners attend. It can be a good opportunity to extend professional connections with (very serious) people from the Unity ecosystem, though it is not great for recruiting, for instance. But you have to go for it and make it happen: by default, the program will just have you follow sessions one after another, with a value probably similar to watching the official recordings on YouTube a few weeks or months from now. I'm a firm believer that socializing beats watching videos from home, so you won't get me saying it's a waste of time; but if you are going to send people there, it's best if they are proactive and passionate about Unity themselves. If they just use it at work, I feel the value is rather small, and I would even dare say that this is a bit of a failure on Unity's part, as it can be hard to see who the event is targeting.
There is also the Unite party, which you have to book well in advance and which may improve the networking value, but none of us could attend.
CTO Talks: Hiring Edition
https://ctotalktokyo.doorkeeper.jp/ is a gathering of Tokyo-based CTOs from different companies, organized by Paul McMahon from Doorkeeper. Last Tuesday was the first time I participated, and I have to say I was pleasantly surprised by everyone's caliber: many good questions, even better answers, and ultimately great debates.
Here are some of my notes from this last meeting.
Becoming CTO
We started with self-introductions.
A point of interest to me was that those who are CTOs were in large part hired executives, meaning they were neither founders nor founding members. It was also interesting to notice that many did not feel they could rely on good source material to familiarize themselves with how to undertake this role.
Screening Developer Candidates
The main topic of the evening was recruiting. While not every participant was formally a CTO, all were in a position of technical leadership within their respective organization.
The types of organizations were varied, from relatively small startups such as Adgo to larger companies such as Indeed (some did not fail to notice the irony here).
While the official title for the evening was about screening developers, it seemed clear to me that most face an even more challenging issue: talent acquisition, and simply getting people to apply. While some still hire more or less organically, others rely (often with little success) on recruiters.
Everyone had their own process, involving some amount of pair programming or overall skill testing. Robert Schuman from Zehitomo probably had the most systematic approach, with a full quantitative evaluation matrix aimed at reducing biases. Others, such as Sergio Arcos, allowed a bit more room for gut feeling.
Interview duration appeared to be a dividing factor amongst participants; some have full-day screenings, others spread a few meetings over time.
Everyone did agree that culture fit is more important than skill. Many of the smaller organizations also tend to emphasize building a close and personal relationship with candidates once a certain point is reached, inviting them for more informal coffee/dinner/drinks.
Interestingly, most organizations, regardless of size, were more than willing to get their staff members extensively involved in the screening and evaluation process. While I think it's great to have staff members evaluate newcomers, I wondered a bit how some of the mid-size organizations could afford the time to do so at the scale being described.
Stay tuned!
Overall, this was a lot of good fun. Brian de Heus from Adgo (and an ex-Wizcorp staff member) mentioned that it would be great if we could put some resources online to help other CTOs and tech leaders with the challenges we commonly share. I thought this was a great idea. Many of our struggles are of the same kind, and I think it would be fantastic if we could make it easier for the next generation of technology executives to grow into their positions more smoothly.
Megadata: Smart messaging for games
See https://github.com/Wizcorp/megadata#acknowledgements for acknowledgements regarding the logo
Making real-time networked web games can be somewhat challenging:
What serialization/deserialization format should I use?
What network layer should I use? Websockets, or HTTP/2 and SSE?
How can I share message definitions between my client and my server so as to speed up the development process?
Our answer to these questions is Megadata. Megadata is a TypeScript library which will help you define message types, and process message instances at runtime.
Our goals with Megadata are:
Transport agnostic: it should be simple to integrate Megadata with any network stack (HTTP/2 with SSE, Websockets, or even TCP/UDP using Electron as a game client wrapper)
Isomorphic: all of Megadata's code can run both in a browser (using Webpack) and in Node.js
Extendable: developers should be able to create their own serialization formats, and even use multiple ones within the same project
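To illustrate the underlying idea of sharing message definitions between client and server, here is a rough TypeScript sketch of the concept. This is not Megadata's actual API (see the repository for real usage); it only shows why a single shared module for message types speeds things up.

// shared/messages.ts -- illustrative only
export enum MessageType {
  PlayerMove,
  PlayerAttack,
}

export interface PlayerMove {
  type: MessageType.PlayerMove;
  x: number;
  y: number;
}

export interface PlayerAttack {
  type: MessageType.PlayerAttack;
  targetId: string;
}

export type Message = PlayerMove | PlayerAttack;

// Both the client and the server can narrow on `type` at runtime,
// regardless of which transport delivered the message.
export function handleMessage(message: Message) {
  switch (message.type) {
    case MessageType.PlayerMove:
      console.log(`move to ${message.x}, ${message.y}`);
      break;
    case MessageType.PlayerAttack:
      console.log(`attack ${message.targetId}`);
      break;
  }
}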
Feel free to join us on Gitter or open an issue if you have any questions!
Wizcorp at Node Fest 2017
Every year, the Node.js community of Japan organizes a big 2-day conference called Node Fest. Wizcorp has a big stake in the technology, and of course we always attend Node Fest. This year, we decided to do a little extra, though: we signed up to be an official sponsor of Node Fest 2017.
Node.js has given us a lot, and while we have given back in code contributions and evangelizing, this is the first time we proudly decided to invest hard cash into the community. A strong and healthy Node community is of course good for all of us, and it's nice to be able to show it.
Personally, however, I did not attend the event to represent Wizcorp, since I am also a member of the Node Fest organization. Last year, I interpreted talks from international speakers live on stage. This year, we did so on Slack. On top of that, my role involved more interaction with speakers and visitors, which was very rewarding.
There were speakers from all over the world, and the talks from the main auditorium have been shared on YouTube, so even if you did not attend the event you still get to enjoy a selection of the great talks that were held.

Besides talks, there were also some great workshops around some specific technologies, an open discussion with Node.js collaborators about what people would like to see improve in the Node.js ecosystem, and a 101 hands-on session where people got to contribute to the Node.js project directly, with mentoring from Node collaborators present at Node Fest.
I would like to express my immense admiration and gratitude to the Node Fest organization as a whole, and lead organizer Yosuke Furukawa in particular for throwing such a positive, fun and educational event. Based on my own impressions and on feedback I received, I think it was a very successful event!
Some other material about Node Fest 2017 you may enjoy:
Our Facebook page with event photos
Yosuke Furukawa’s blog
Gihyo.jp event report






Wizcorp at TGS2017
Tokyo Game Show 2017 was held on September 21st-22nd (business days) and 23rd-24th (public days), and more than 250,000 people in total attended this famous event, well known not only in Japan but abroad as well. We at Wizcorp were at the Avex booth to introduce the new game Kakegurui - Cheating Allowed (賭けグルイ - チーティングアロード), inspired by the anime that aired in Japan from July 1st to September 23rd this year.
Kakegurui is originally a manga about gambling, written by Homura Kawamoto and illustrated by Tōru Naomura, and published in Square Enix's manga magazine Gangan Joker. The manga started in 2014 and is still ongoing, with 8 volumes so far.
Kakegurui manga and anime have a very unique setting and many original characters, which have led to the series’ popularity. The upcoming game, developed by Wizcorp, will strive to live up to the show by presenting fun and interesting features that all players can enjoy. The game is still in development but the team worked hard to prepare a nice demo for the TGS.

Visitors and fans trying the Kakegurui game demo at the Avex booth.
Two of Kakegurui's voice actresses, Mariya Ise (character: Midari Ikishima) and Yūki Wakai (character: Itsuki Sumeragi), were also at the Avex booth on Saturday the 23rd to host the Kakegurui corner. They organized a small event to introduce the game by actually playing the first gambling game featured in the show, "Vote Rock-Paper-Scissors" (投票じゃんけん). After a talk by our CEO (and Kakegurui's game director) Guillaume Hansali, Mariya Ise and Yūki Wakai enthusiastically played the game, involving the audience throughout the event.

Yūki Wakai and Mariya Ise playing “Vote Rock-Paper-Scissors” against each other in front of the public.
Overall the event was a great success and a wonderful opportunity for the team to meet fans of Kakegurui. For more information about the game, its upcoming release and events, please sign up on the pre-registration site: http://kakegurui-anime.com/game/


Screenshots of the Kakegurui game demo.
#kakegurui#賭けグルイ#tokyo game show#TGS2017#yumeko jabami#kirari momobami#mariya ise#yuki wakai#game#game development#wizcorp
Wizard Academy Summer 2017 at Wizcorp

Wizcorp has organized the Wizard Academy since the spring of 2016. It is a training camp to learn and practice new skills, guided by IT professionals, in a working environment.
The Wizard Academy was created for anyone interested in discovering game development and learning new technologies. Participants gain experience and knowledge in programming (both client-side and server-side), and since Wizcorp is a very diverse and international company, they can also practice their English.
This year the Wizard Academy Summer 2017 took place from July 31st to August 18th.

The training camp consists of several lectures and practice sessions. Participants learn various technologies (MAGE, Docker, Git, PixiJS, JavaScript, etc) that are used in game development, and practice their new knowledge by making a very simple game. The Wizard Academy welcomes a group of (maximum) 5 people who are trained by professional developers.
The game: Wizard Strike
This year, participants worked on a “Monster Strike” type game that we called Wizard Strike. It is a multiplayer HTML5 game that up to four people can play at once.

Wizcorp has developed MAGE, which is used in the game industry. For more details about MAGE, please visit https://www.wizcorp.jp/mage/
After 3 weeks of training, a closing ceremony was held to celebrate the participants' achievements, in a very fun and relaxed atmosphere with pizza and beer!

The next Wizard Academy will be in the spring of 2018, please don’t hesitate to contact us if you are interested in participating!
#game development#training#programming#HTML5#Git#Docker#JavaScript#PixiJS#Wizard Academy#Summer 2017#Wizcorp#MAGE
Being a foreign developer in Japan
After studying Japanese in Tokyo for 10 months, I found Wizcorp, then a company of three non-Japanese people, a year after it was founded. That was about 6 years ago, and so far it's been quite a ride. Wizcorp has grown into a very diverse group of people from all over the world, and today we can happily say that the list of nationalities includes Japanese as well. Still, we mainly find ourselves conversing in English. That's one of the things that make Wizcorp such an easy place to work as a foreigner in Japan, but at the same time it means our Japanese language skills may develop more slowly because of it.
Some of us are lucky to interact a lot with Japanese customers, and are able to pick up new vocabulary through that. Wizcorp also offers all interested employees Japanese classes. On top of that, some of our Japanese staff have taken matters into their own hands and do study sessions based on Japanese comedy sketches, anime, song lyrics and movies. On top of the practice, it's just a really fun and social activity. So in a way, there is plenty of opportunity to learn Japanese.
A few years ago, one of our staff members started a word list of common technical jargon he picked up while working with customers. This list has been a great source of information, but I noticed there was a need for it outside our company as well: foreigners working in Japanese companies have been asking me for access. The latest example of this was after I was asked to do live interpretation for NodeFest 2016.
I had never done interpretation work before and secretly wished I had studied our word list a bit harder. Luckily, I survived interpreting the three talks I was asked to do, and I hope it was of some benefit to the guests. Before one of the talks, I was approached by a guest and mentioned the existence of our word list. I made him a promise, and I decided to open-source the list on GitHub.
I shared the URL on Twitter and the response so far has been amazing. Within 24 hours we received our first pull requests with additions. The power of open source never disappoints. People are contributing in all kinds of ways, and it's amazing to see a real hunger being satisfied by this initiative.
I would like to take this opportunity to share with our readers the URL to this document, so perhaps you too can learn a few (or a lot of) new Japanese words. Or perhaps you too can provide some great contributions. See you on GitHub!
Japanese Lingo for Developers: https://github.com/Wizcorp/japanese-dev-lingo
The next step for Node.js
Evolution schedule
On October 18, Node.js 6 stopped being dubbed the latest "stable" version and became the official new Long Term Support (LTS) version of Node. As you may remember, Node 4 has been the LTS version for a while now, and will remain so until April 2017. After that, it will move into a maintenance period for 18 months.
Up until now, older versions of Node have still been receiving regular updates whenever the latest bugs in OpenSSL (a popular cryptography library used by many web servers) get patched. But if you're still using a pre-4 version of Node, it really is time to move on and start your upgrade process.
What you get when moving up to version 6
Besides the aforementioned security reasons, you should expect a free performance bump, as well as:
A much improved debugger (demo video) that ties into Chrome, enhancing the development experience tremendously.
A safer Buffer memory allocation API (see the sketch after this list).
Many productivity and performance enhancing JavaScript features (ES6).
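On the Buffer point specifically, here is a quick sketch of the safer allocation API that ships with Node 6 (the old constructor still works, but the new functions make intent explicit):

// Old style: may hand you uninitialized memory and is now discouraged.
const risky = new Buffer(16);

// Safer API available in Node 6:
const zeroed = Buffer.alloc(16);          // zero-filled
const fast = Buffer.allocUnsafe(16);      // uninitialized, fill it yourself
const fromString = Buffer.from('hello');  // copies the given content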
For a more comprehensive idea of what you get by upgrading, please read the Node 6.9.0 release notes.
So what's next?
As Node 6 has become LTS, Node 7 (the new "stable") is peeking around the corner, and you can expect a release within a week or so. Should you be excited? Let's find out.
If you're still using Node 4, obviously you get all the benefits of Node 6. On top of that, V8, the JavaScript virtual machine at the heart of Chrome and Node, has been upgraded from 5.1 to 5.4 (matching Chrome 51 and 54 respectively). This is easily one of the most exciting parts of the upgrade to Node 7. Let's start with the easy bits before moving on to the exciting part.
All ES6 and ES7 features can be found in Node 7, except for ES6 modules (under investigation by both the JavaScript and Node technical committees) and tail calls (under revision by the JavaScript technical committee).
Performance improvements, of course!
Reduced memory usage
Reduced garbage collection interruptions
Improved startup time
There is also a post-ES7 feature coming to Node 7, albeit behind a flag: the async and await keywords. This is an incredibly exciting feature which should once and for all kill "callback hell" and all the solutions that have attempted to deal with it. That does not make async/await just another workaround, though; it is a very elegant way to turn the already present Promise API into a much friendlier feature.
In a nutshell, for all I/O APIs designed to take advantage of it (so expect a migration period for libraries), it turns this:
readFile('foo.txt', function (error, result) {
  if (error) {
    console.error(error);
  } else {
    console.log('success:', result);
  }
});
into this:
try {
  console.log('success:', await readFile('foo.txt'));
} catch (error) {
  console.error(error);
}
This reads as naturally as most other programming languages, while maintaining the strengths of Node.js and its evented I/O model. While this may seem like a small shift, I predict it will have a huge impact on how we write code in Node.js. We can finally use throw and try/catch/finally the way they were intended. That means error handling will become an order of magnitude easier than before. We will be able to manage the "uncaught exception" problem much better, since errors can be thrown from one function and caught 10 steps down the call stack. This is something that required a lot of asynchronous plumbing before, and will now require none at all. On top of that, our code becomes a lot shorter and more readable.
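Until libraries expose Promise-returning APIs natively, you can wrap an existing callback-style function yourself. A minimal sketch that would make the readFile used above awaitable:

const fs = require('fs');

// Wrap the callback-style fs.readFile in a Promise so it can be awaited.
function readFile(path) {
  return new Promise(function (resolve, reject) {
    fs.readFile(path, 'utf8', function (error, result) {
      if (error) {
        reject(error);
      } else {
        resolve(result);
      }
    });
  });
}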
In summary
(This is the part your manager probably wants to read)
Because our code will be shorter:
it will on average contain fewer bugs;
it will take less time to write;
it will be easier to comprehend.
Because of improved error handling:
our code will be more robust in the case of bugs;
errors will be much easier to control;
the process will be much less likely to crash due to the dreaded uncaught exception.
I for one am really looking forward to this Node release, as it may just be one of the biggest steps in the recent evolution of server-side JavaScript. Exciting times!
It pays to be a winner
How focusing on quality instead of revenue and cost is your best bet
Quality of work
Over the years, Wizcorp has developed a unique culture: one that is fresh and international, but most importantly focused on progress, process, and ultimately end-product quality.
There are multiple reasons why such a vibrant culture was able to flourish, but two of them stand out most in my view:
Because it is who we are. We are a team of passionate individuals, who love what they do and strive to always do it better.
Because the market needs it. Lack of quality and maintainability in the software development industry is endemic; this is especially true in game development and innovation businesses (startups, R&D based projects, and so on).
This last point has always left me perplexed, personally. There are plenty of good engineers out there, both in Japan and internationally. There are plenty of capable managers as well. Funds to engage in projects are generally available within organisations.
And yet, time and time again, Wizcorp is asked to engage in project recovery, that is, the process of getting projects out of a troublesome situation. While the reasons projects end up in the situation they are in when they come to us vary from case to case, one thing is almost universally constant: the quality of the code - its readability, maintainability and suitability to the business it is meant to serve - is a major cause of the problem.
Why is the quality of most software we encounter so poor? Why does Wizcorp need to provide project recovery services such as the A-Team, one of our most in-demand services in the last year?
Open-source: a case study
This question becomes mind-boggling once it is put in contrast with the world of open-source software: the open-source code we are likely to encounter and work with - codebases where little to no funds are available to officially maintain the code - is often better, by an order of magnitude, than the privately owned code we encounter in the industry.
Some open-source projects are funded through non-profit foundations, but the amounts are ridiculously small when compared to the scope of their organisational interests. Take the Linux Foundation (including its Collaborative Projects): they manage over 30 million USD a year. With that, they fund the development of:
Linux, the most widely used operating system on servers, mobile and embedded devices, etc.
Xen, one of the most widely used open-source hypervisors, used by AWS and many other providers
Node.js, the most successful platform for server-side JavaScript programming
As well as many other smaller projects. Even considering the code contributions made by engineers from various companies (as part of their jobs, and therefore sponsored), the budget-to-business-impact ratio remains ridiculously small in comparison.
Money is not the issue
And keep in mind that these projects are some of the best funded. Most open-source projects that software or game development companies use have much, much less funding; many have no funding at all. And yet, in most cases, the quality of those projects remains higher than what we often encounter in proprietary code bases.
If you work in a company with developers on staff, go to the nearest web developer you know (at your company, or somewhere else) and ask them what they are using as a development framework (in PHP, Ruby, Python, etc.). Then ask to see the latest project they have built using this framework. Then ask a second engineer to look at both code bases (the proprietary code base and the framework's) and assess the quality of each. In most cases, you will find the following:
The framework is open-source, and has little to no funding backing it
The framework's code is of higher quality than the project's code.
Of course, there are more and more open-source projects backed by corporate interests these days - whether because it is part of the business model or because it helps make operations smoother. In these cases, we see the quality level increase even further; not only that, but most of these companies gain significant strategic advantages thanks to this funding.
So the problem doesn't appear to be money. And obviously, most businesses cannot simply start open-sourcing what constitutes their core business (for all sorts of reasons, ranging from financial to security).
What is it, then?
Finding good providers
According to John Seddon, creator of the Vanguard Method, the issue is that stakeholders tend to focus on costs (and revenues) instead of focusing on the service. The Vanguard Method, in short, is the Toyota Production System (TPS) translated for service organisations. We do not use this methodology specifically at Wizcorp, but we do believe its premises and tenets are spot on.
But what does “driving cost out” mean in practice? More specifically, what does it mean when you are shopping for a software development company?
1. Mission statement
Know exactly what the current situation is and what the problem to solve or the opportunity at hand is. It should then be possible to describe, in one or two short sentences, what change needs to happen and what outcome is to be attained.
Equally important is to make sure that the people around you within your organisation agree with your analysis of the situation and understand the mission statement as clearly as you have laid it down. Everyone involved with the tasks at hand - colleagues, managers, contractors, etc. - will be much better equipped to do whatever they need to do if they can relate their tasks to the end goal.
2. Requirements
Identify what actions are required to reach the objective - and only what is required. Refrain from thinking about resources just yet; allow planning to happen without constraints. Instead, focus on identifying what systems are required, what features and subfeatures they will require, and how each system will interoperate with the others.
3. Research potential providers
At this point you should be ready to talk to the people who will be getting the job done; in this case, we will focus on external providers. We suggest that you use the following guidelines:
Try to meet in their offices, not yours. It means more travelling, but it will give you a sense of what kind of business they are.
Meetings should be no shorter than an hour, but no longer than an hour and a half
Within the first 5-10 minutes (shorter being often better), explain your situation, what are you trying to accomplish, and what you want them to take care of
Ask them if they have any questions
At this point, a good provider (and especially one you have never worked with) should normally be asking additional questions to clarify the purpose of what needs to be done. If it's a game, what market segment is it aiming for? What will be the novelty element? What should be the key element that will entice end users?
Next, a good provider will normally ask if you have any specification documents; if they do not, you probably do not want to work with them. Come prepared: if you have a set of documents (SMEAC, Ishikawa diagram, or any other kind), make sure to have printed copies, and distribute them only once asked.
Then, ask them how they would do it. What technologies would they use, and why? Do not let the technology impress you; a good provider must be able to give simple-to-understand answers to complicated problems. If they cannot, then they likely do not master the tools of their own trade, and you probably should not use their services.
Finally, 10 minutes before the end of the meeting, ask the provider how many resources they think it would take to accomplish what your organisation wants. You are not looking for a definitive answer, but for a rough estimate. A good provider should be able to give a ballpark figure on the spot, but should also have enough experience to know that some research and discussion will need to happen within their own organisation before a more elaborate answer can be given.
You will want to go through this process with three to five providers, ideally providers you already have heard of or that have come recommended.
4. Trust over Cost
Peter F. Drucker once said:
Doing the right thing is more important than doing the thing right.
Following a process similar to the one described so far should go a long way to help you understand what the right thing to do is. Now the question that remains is, who can you trust to do the right thing? Which service provider did you find the most convincing? Which one seemed to care about what you do, and why you do it? Which one provided you with material you could understand - even if the subject at hand is not one you are familiar with?
Once your analysis is done, take the remaining providers (you should have two, maybe three left), and ask for an official quote.
Do not choose the cheapest by default. Software development is a service, not a product. You will need support. You will have unforeseen circumstances. You might need to ask for some last-minute changes. Do you want your provider to start complaining about the contract terms at such times? Or worse, to say nothing and lower the quality to a point where they might at least save enough time to break even?
In other words, pay fair, not cheap. If two service providers give you dramatically different price points, ask them where your money would be going. Strive to understand not just why one is more expensive, but also why one can be so cheap. If they are cheap because they are smart about process, then great: you are lucky and will be getting a great deal. If they are cheap for some "obscure", black-boxed reason, or because they don't feel comfortable billing their worked time (a common practice in so-called "black companies" in Japan), then you probably should think twice about whether you want to do business with them.
If a provider is that much more expensive, try to see whether it is because of specialisation. In some cases, we have seen businesses grant contracts to service providers that were way, way over-specialised for their needs. In other cases, the service provider was charging a huge amount for administrative reasons. Try to understand whether what you are getting actually serves your organisation. If it does not, then it's waste; ask the provider if they can do anything about it that would affect the price.
Overall, it is often better to negotiate cost - or payment modalities - with a service provider that is more expensive than to go with a cheaper one. If anything, through this process, you should get to know your provider, learn to trust them and have them trust you.
The cost of failure
Obviously, this process works well - ideally. But what if your timeline cannot be met by your provider of choice? What if you do not have the budget? What other choice is there, then, but to go with a cheaper offer?
You should then consider not doing your project, postponing it, or reducing its scope. Because going for an untrustworthy but cheaper provider will always cost you more, for less (in many cases, much less) quality.
Essentially, focusing on cost - that is, making cost reduction the first priority - drives bad decision making. As a service provider that also specialises in project recovery, we often meet customers who, because they focused on cost rather than on tasks and objectives, ended up hiring a team barely capable of handling the tasks at hand; even worse, because of the lack of clear objectives, those teams often don't even know what they are building, or why, for any given feature they were assigned to develop.
This year alone, we have seen a significant number of customers go through the following pipeline:
Customer meets with us; we send a quote
Customer finds a cheaper service provider; they opt to use said service provider
Three to six months pass
Customer comes back to Wizcorp and requests the A-Team service
For those not familiar with this service, the A-Team is an emergency response service we offer to customers who have projects with technical difficulties requiring immediate attention.
Project recovery is a long, arduous process, one that comes not only at an additional cost for your organisation, but also one that will likely affect your business operations, delaying release of content or feature updates to your game or application.
And the root issue for why the project has gone wrong is almost always one of the same two reasons: lack of proper planning or focus on cost. Ironically, it turns out that the very focus on cost is what drives many of these projects to end up costing a lot more - or at least, a lot more than initially anticipated.
Play to Win
This is also one of the reasons why open-source projects tend to be of better quality than their paid counterparts: by shifting the focus from cost to quality - or, as John Seddon says, by driving cost out of the equation by managing value instead of managing cost - projects can be built at a reasonable price.
So focus on finding trustworthy service providers, not cheap service providers. To do this, you will need to:
Identify what needs to be done, and why
Identify what needs to be built, and what features should it have
Thoroughly screen providers; make sure you understand them, and that they understand you
This way, you will have genuine control over your costs, and that control is more likely to result in an overall cheaper product than attacking cost head-on. Most importantly, by focusing on features and quality, genuine value can be produced - which means you increase your likelihood of profit by providing something people will actually want.
In short, don’t play to avoid losing. Play to win.
0 notes
Text
The evolution of HTML5 and its impact on Game Development
What is HTML5?
HTML stands for HyperText Markup Language; it is used to place elements in a web page and is now in its 5th version. Besides the fact that almost no one knows what a markup language is, HTML5 has become the umbrella term for all the technologies that run in a web browser and therefore interact with the HTML language.
Among these technologies are the JavaScript programming language and the CSS styling language. While HTML/CSS/JavaScript can be seen as the triumvirate of HTML5, other technologies keep being added to satisfy the evolving needs of web browsers.
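To make the relationship between the three concrete, here is a minimal, hypothetical page that uses all of them together: HTML to place the elements, CSS to style them, and JavaScript to add behaviour. The element ids and the message text are invented for the example.

    <!DOCTYPE html>
    <html>
      <head>
        <style>
          /* CSS: style the button */
          #greet { padding: 8px 16px; font-size: 16px; }
        </style>
      </head>
      <body>
        <!-- HTML: place the elements on the page -->
        <button id="greet">Say hello</button>
        <p id="output"></p>
        <script>
          // JavaScript: react to user input and update the page
          document.getElementById('greet').addEventListener('click', function () {
            document.getElementById('output').textContent = 'Hello from HTML5!';
          });
        </script>
      </body>
    </html>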
What is pushing HTML5 forward?
Over the past decade, HTML5 applications have been emerging. Google has been a pioneer in the domain with applications like Google Mail, Google Maps, Google Docs and Google Spreadsheets. The main advantage of a web application is that the user does not need to install anything, and the application is very likely to be compatible with their hardware and operating system (the browser takes care of that), which removes a lot of pain for the user.
To be competitive, these HTML applications need to be performant and to have access to the same functionality as native applications. That is one reason why a tech giant like Google pushes to improve and develop web technologies. Apple, and more recently Microsoft, are also increasing their efforts to enhance the web experience.
The main reason HTML5 is attracting increased interest from those three companies is that more and more people browse the web regularly on their mobile phones. Web navigation has to be intuitive, fast, responsive and risk-free, because it reflects on the perceived quality of the smartphone itself. A fast browsing experience is now a selling point in a $350 billion market (the combined smartphone and tablet market as of 2015).
With the introduction of the iPhone, Apple pushed HTML5 forward, and Microsoft, having recently joined the battle for mobile market share with its Surface tablets and Windows phones, now sees the benefit of having a browser more efficient than the outdated Internet Explorer. Its new browser, Edge, shipping with Windows 10, is meant to offer a user experience competitive with Chrome and Safari (see also a performance review of Edge).
How is HTML5 evolving?
As mentioned above, HTML5 seems to be driven by mobile technology: the gap in capabilities between mobile native and mobile HTML is getting smaller, and the pace at which HTML5 is catching up seems to be increasing as well. The improvements come in several forms (you can skip to the next section if you are not interested in technical details):
JavaScript compilers are getting more efficient as compiled JavaScript code is now running at a speed of the same order of magnitude as native code (as opposed to one order of magnitude slower 5 years ago).
Built-in debugging and profiling tools are gaining in friendliness and precision.
CSS engines are more rigorous in their implementations of the W3C standards and CSS behaviour is becoming more homogeneous across all the browsers.
The JavaScript language is becoming richer as implementations of the new ECMAScript specifications bring, among other things, new data structures, new string and math functions, an API to easily chain asynchronous code (named Promise), destructuring assignment, and simpler, more convenient syntax for object instantiation and for defining function parameters (see the short example after this list).
WebGL, an API for hardware accelerated rendering, is finally supported by all the major browsers (Safari finally embraced it with the release of iOS 8). Some nice WebGL demos are accessible here.
Garbage collectors are getting smarter (here are some very recent Chrome improvements with a WebGL benchmark demonstration).
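As a rough illustration (a sketch not tied to any particular engine or game, with names like spawnEnemy invented for the example), here is what a few of these additions look like in practice: a Promise to chain asynchronous steps, default parameter values, and destructuring assignment.

    // Promise: chain asynchronous steps without deeply nested callbacks
    function wait(ms) {
      return new Promise(function (resolve) {
        setTimeout(resolve, ms);
      });
    }

    // Default parameter values in a destructured options object
    function spawnEnemy({ x = 0, y = 0, hp = 100 } = {}) {
      return { x: x, y: y, hp: hp };
    }

    wait(500).then(function () {
      var enemy = spawnEnemy({ x: 32, y: 64 });
      var { x, y } = enemy; // destructuring assignment
      console.log('Enemy spawned at', x, y, 'with', enemy.hp, 'hp');
    });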
Games are among the most demanding applications in terms of resources. Until a few years ago, web technology was ill-suited for game development as the HTML5 engines were too slow to render animated graphics on screen due to heavy operations being performed on the CPU and the GPU being underpowered. Flash was the obvious solution for developing games for the Web.
HTML5 for Games
Thanks to the optimisations made to JavaScript engines and the availability of hardware-accelerated graphics in all the major browsers, HTML5 now enables high graphical performance, close to what native can offer and largely surpassing Flash. Flash has lost its role as a development tool and the Flash player is becoming obsolete as browsers drop support for it. All the small improvements made to HTML have slowly been killing Flash as a game engine (although it will probably remain a powerful tool for authoring animations for some time).
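To give a sense of what driving those graphics looks like, here is a minimal sketch of a browser render loop (plain canvas 2D for brevity; a WebGL renderer follows the same requestAnimationFrame pattern, and the element id "game" is an assumption made for the example):

    // Minimal browser game loop sketch; assumes a <canvas id="game"> on the page
    var canvas = document.getElementById('game');
    var ctx = canvas.getContext('2d');
    var x = 0;
    var last = performance.now();

    function update(dt) {
      // Move a square across the screen at 100 pixels per second
      x = (x + 100 * dt) % canvas.width;
    }

    function render() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillRect(x, 20, 32, 32);
    }

    function frame(now) {
      update((now - last) / 1000);
      last = now;
      render();
      requestAnimationFrame(frame); // the browser schedules the next frame
    }
    requestAnimationFrame(frame);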
Meanwhile, as an alternative to Flash, HTML5 has become a possible export target for Unity and the Unreal Engine. This demonstrates that the Web is now ready for more ambitious games, and that developers now have the means to build them.
HTML5 Development at Wizcorp
While Unity and the Unreal Engine seem to be obvious engine choices for complex 3D games, they are not necessarily best suited for 2D game development.
At Wizcorp, we do not use complete game engines to produce HTML5 games, but rather a combination of components chosen to meet our needs on a per-project basis. Web browsers themselves have become real development tools, allowing developers to profile and debug code and to manipulate the content of a page at runtime. JavaScript developers benefit from a large community of open-source components available on GitHub. The web itself has become a giant development platform on which components can be developed, tested, shared and contributed to.
In addition, since JavaScript is the language shared between HTML and Node.js (a server-side development environment), components developed for the client side can be used on the server side, and vice-versa. This can lead to project emulation, improved productivity and greater flexibility. For instance, for one of our games, we use the same canvas and WebGL rendering component on both the HTML client and a Node.js server, for two different purposes: game rendering on the client, and avatar creation on the server.
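As a small, hypothetical sketch of that kind of sharing (the module name and scoring rule are invented for the example), the same file can be required by a Node.js server and bundled into the browser client:

    // score.js - hypothetical module shared between client and server
    function computeScore(hits, combo) {
      // One scoring rule everywhere, so client prediction matches the server
      return hits * 10 * Math.max(1, combo);
    }

    // CommonJS export so Node.js (and browser bundlers) can require() it
    if (typeof module !== 'undefined' && module.exports) {
      module.exports = { computeScore: computeScore };
    }

    // On the server: var computeScore = require('./score').computeScore;
    // On the client: load score.js through a bundler or a <script> tag
    // and call computeScore() directly.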
Among the browser components we either created or contribute to are:
WUI, a DOM library for UI implementation
TINA, a tweening library for animations
PIXI, a WebGL rendering library (check out the benchmark demo); a short setup sketch follows this list
Audio Manager, a sound player
Tomes, a component to add event emission on object modification
Constrained, a constraint solver that we use for responsive UI implementation
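For readers unfamiliar with PIXI, the sketch below shows a typical setup. It reflects the PIXI API of that era and is only an illustration; method names may differ in newer releases, and hero.png is a placeholder asset.

    // Rough PIXI setup sketch (v3-era API; an illustration, not project code)
    var renderer = PIXI.autoDetectRenderer(800, 600); // WebGL if available, canvas fallback otherwise
    document.body.appendChild(renderer.view);

    var stage = new PIXI.Container();
    var hero = PIXI.Sprite.fromImage('hero.png'); // placeholder asset
    stage.addChild(hero);

    function animate() {
      hero.rotation += 0.01;
      renderer.render(stage);
      requestAnimationFrame(animate);
    }
    animate();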
The rapid improvement of web browsers and the growth and maturity of open-source JavaScript components are the two main reasons why a mobile HTML game developed today will be much smoother in terms of user experience, have better graphics, and be produced in half the time compared to one developed two years ago.
0 notes
Text
Contributing to Open Source
I recently blogged about the future of Node.js and how io.js 4.0.0 would become the official Node.js 4.0.0. By now this has come to pass, and I would like to reflect a little bit further on what this evolution means for people like us, and what it could mean for engineers like you.
Now that the project has been opened up and become a "by the community, for the community" kind of project, it seems to have reinvigorated a lot of people, who are now coming back out of the shadows to contribute. What I'm about to write does not just apply to Node.js, but to all open-source projects. For the sake of simplicity, however, I will stick to Node.js as a reference point.
Peek inside!
Now, I've been known to urge people to open up and peek inside the platforms they use every day. Do you use PHP, Node, Rust? Whatever you use, as long as it's open source, open up its repository and look at how it's built! You will learn tons of invaluable details that are often not documented, but are too good to ignore. It also changes your perspective on the technology you're using, from "magical black box that tends to do what I ask it to do" into "that thing I understand and use".
In fact, I would go so far as to say that it changes how you debug your own code. When you're looking at the stack traces of that neat application you wrote that just blew up, your thinking no longer stops at your own code. It goes beyond that, into the platform you've built your app on. It allows you to make much more accurate hypotheses about what might be going wrong. That doesn't mean you will have memorized every line of code in the Node.js code base. But you will have enough of an understanding to instantly open up the Node repository on GitHub, go to the function you're calling, and see how things play out from there. No documentation can ever reach this deep.
Now that I've hopefully convinced you to step out of your comfort zone and open up that magic box (hey, one can hope), the next exciting step is just around the corner: contributing!
Contributing to an Open Source project
At this point you may be thinking "Contributing? Are you crazy?? That stuff was crafted by geniuses. Code poets! Mere mortals like me would never be able to ...", and this is where I want to tell you that you're wrong. And that allowing yourself to believe this fable is holding you back.
Once you start looking at the code, look at some of the issues and perhaps some pull requests, you will slowly start to realize that some of the things you see fly by are not black magic after all. Like any project, it takes a bit of time to familiarize yourself with a code base. This is normal. Give yourself that time (weekends are great for this).
Once you've passed the initial fear of being out-classed to the point of embarrassment, you may want to start contributing. Still wanna play it safe? Contribute to the documentation! It's one of those areas that are often not given enough attention, yet have a major impact on people like yourself who are trying to understand how all the APIs behave. If you're scared of writing bad code, this is your safe zone.
Soon, however, you may feel an urge to contribute code. To fix some of the issues people have pointed out in the issue tracker. Or to fix some of the issues you yourself may have submitted. Start coding already! From my own experience as a contributor and reviewer, I can tell you there is no such thing as horrible code. Sure, people may ask you to change things, or point out that they're flawed. Embrace it, because that means you're about to learn something valuable! Rather than being horrified by the code you conjured up, the reviewers will likely be grateful that you are taking on the task of making their project better. This is the power of open source.
Some final tips when you want to take on this challenge:
Most projects document how to contribute (often in a file called CONTRIBUTING.md).
Stick to the style of the project you are contributing to. When you visit a friend's house, you don't start redecorating their home, right? Treat a project's code the same way; some people spend their whole days in there.
Some of the bigger projects will advocate a Code of Conduct. It protects you and others working on the project. Act professionally and respect it.
Friendliness goes a long way.
Good luck coding, and see you on GitHub!
0 notes
Text
Node.js today and tomorrow
Node.js (or Node) is a JavaScript-for-the-server environment. It allows developers to use a single language for the web client and the web server: JavaScript. That opens the door to code re-use between server and client, less mental context switching and easier collaboration or fusion between front-end and back-end developers. At Wizcorp we have been using Node for years, with tremendous success, but also with its fair share of challenges.
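For readers who have never touched it, a complete (toy) HTTP server fits in a few lines of the same JavaScript you would write for the browser; the port number here is an arbitrary choice for the example:

    // server.js - a minimal Node.js HTTP server (toy example)
    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ hello: 'world' }));
    });

    server.listen(3000, function () {
      console.log('Listening on http://localhost:3000');
    });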
Does Node have a future at all?
For the longest time, the future of Node has been a heavily debated topic. The original authors of the project and their sponsor Joyent held the position that after Node v1.0, no more new releases should be made (apart from bug-fixes). The product would be considered “done”. The current version of Node is 0.12 and the next release was going to be that final version.
The community, including many core contributors, never seemed quite on board with this attitude toward their favorite programming environment and clearly wanted to see the project evolve. This year, we saw this clash of visions for Node reach its peak when the community branched from Node.js and released their own version called "io.js". This project was to be completely community driven, and new versions have been rolling out rapidly.
io.js, you say?
You may ask: what was driving this group of people? One of the differences in vision between Joyent and a big portion of the community was the adoption of ECMAScript 6 (the latest version of JavaScript).
The community saw no reason to wait, while Joyent wanted to travel the stable, no-surprises path. Good arguments can be made on both sides of this debate. The reality, however, is that the community has spoken, and io.js supports many modern JavaScript features that Joyent's Node.js simply does not. Other than that, the two projects still look mostly identical. On the inside, however, a lot has changed, and io.js can rightfully be called a modernized Node.js.
It’s all about community
The io.js project has been very community-minded. By contrast, the fact that for the longest time Node.js required every code contributor to sign a Contributor License Agreement surely did not help collaboration. Add to this that in 2015 the Node.js project accepted hardly any contributions, and it has been a frustrating time for people who want to see Node grow. While Wizcorp has made minor contributions in the past, we too have felt this inability to contribute.
The io.js story is a different one, however. We are now making contributions, and they are happily being received - with proper quality control, of course. While the former project leads may have thought Node was finished, contributions such as a recent one from Wizcorp (which increased disk-write performance by up to 100 times) show that there is still quite a way for the project to go. There are many more areas where performance can be improved, documentation clarified, and code simplified.
The Node Foundation
There is a happy ending to this story. In June, the io.js leadership and Joyent (under guidance from the Linux Foundation) founded the Node Foundation. The foundation has prominent members such as IBM, Intel, Joyent, Microsoft and PayPal. Recently, the first efforts have been undertaken to transform io.js into the new official Node.js project. Even so, we see a lot of contributions from non-member companies and individuals, and io.js is as free (as in freedom) as ever.
The future
The first official release under the Node Foundation will probably be version 4. This is because while Node.js is still stuck at 0.12, io.js has already used up versions 1.x, 2.x and 3.x. The new strategy for Node also involves long-term support for every other release. That means that while you can enjoy new versions quickly, you can release your products based on incredibly stable versions that have stood the test of time.
At Wizcorp we are already using io.js a lot, and we enjoy its improved performance on a daily basis. We are also looking forward to using more and more ECMAScript 6 (and later 7) features in our code bases, not just in the browser but also on the server. Not to mention that it's quite empowering to know that when we find a way to make Node faster, better, more amazing, there is a community ready to receive contributions with open arms. In the end, not just our customers but the entire community will benefit from an improved and ever-improving platform.
0 notes