Web Components & Modular UI

You might have heard the term ‘web components’ being thrown around in UI discussions. But what are web components? And how did they come about? In this article, I will attempt to cover a brief history of web components and the benefits they bring to teams today. I will also dive into how my team uses them. But first, let’s talk about what they are: Web components are a suite of different technologies that allow you to create reusable custom elements for use in your web applications. The functionality of web components is encapsulated away from the rest of your code. This goes a long way to making them more reusable. There are three main technologies: custom elements and their behavior, the ‘hidden’ shadow DOM, and the flexible HTML templates. These are used together to create versatile custom elements with encapsulated functionality that can be reused wherever you like, without fear of code collisions.
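To make the three technologies concrete, here is a minimal sketch of a custom element (the tag name ‘user-card’ is just an example) whose markup comes from a template and is rendered into a shadow root, so its styles and DOM stay encapsulated from the rest of the page:

```javascript
// In non-browser environments (like a test run), stub the DOM global so the
// class definition itself can still be exercised; browsers provide it natively.
if (typeof HTMLElement === 'undefined') {
  globalThis.HTMLElement = class {};
}

class UserCard extends HTMLElement {
  constructor() {
    super();
    // attachShadow gives the element its own encapsulated DOM subtree;
    // outside CSS cannot leak in, and these styles cannot leak out.
    if (this.attachShadow) {
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = `
        <style>p { color: steelblue; font-weight: bold; }</style>
        <p><slot></slot></p>
      `;
    }
  }
}

// Register the element so <user-card> can be used anywhere in the page
if (typeof customElements !== 'undefined') {
  customElements.define('user-card', UserCard);
}
```

In any page that loads this script, `<user-card>Jane Doe</user-card>` renders the slotted name inside the component’s own encapsulated styles, with no risk of code collisions.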
Back in 2011, Alex Russell first introduced the concept of ‘standard’ web components. A couple of years later, Google jump-started the web components ‘revolution’ with the release of the Polymer library. The library was built on web components and served as the authoritative implementation of Material Design for the web. It was soon after this time, a little over a decade ago, that I began work on a new web application UI project, for which I decided that web components would be a key technology in my front-end stack. There were regrets, especially because of the flood of browser errors. I remember a lot of searching and sifting through obscure documentation and blogs to understand how to ‘glue’ the web components together with my application. Many of the web component implementations felt experimental, and they seemed far from production-ready, to say the least. It felt exciting for the wrong reasons: it was a new frontier of development, but ultimately the challenges bred discouragement.
The React framework soon came along and changed many things. I, for one, rewrote my application in React and ‘never looked back’. It was so much easier to work with, and I’m sure that other developers who were once web component hopefuls had a similar experience. At the time, Facebook didn’t want to build React on top of web components because they didn’t fit React’s JSX model for declaring elements. This was yet another reason to be doubtful about them. But obviously, not everyone was doubtful. In 2016, the Custom Elements v1 specification was released, laying the foundation for designing and using new types of DOM elements. Soon after, in a bold statement by Google, YouTube was rewritten in Polymer web components. Google kept evolving Polymer knowing that web components were a web standard, an approved spec that modern browsers had to implement.
In 2017, a few developments started to reaffirm web components as a viable modern front-end technology: First, the Polymer team started developing LitElement, a lightweight framework for creating web components. Second, the Ionic team created StencilJS, a JavaScript compiler that generates web components. Both became reference technologies for web component development. In 2018, Firefox 63 enabled web component support by default and updated its developer tools to support them. With Angular 6 came Angular Elements, which allowed Angular components to be packaged as custom web components. By the time the Lit framework was released in 2019, people were already realizing the value of having a layer of web components, especially given the headaches of juggling so many different front-end frameworks. None of those frameworks were ‘native’ the way web components are.
In the last five years, web components have matured significantly, gaining wider adoption and becoming a more viable alternative to framework-based components, with key advancements through new features, new frameworks, and increased browser support. More recently, there has been a move towards more declarative APIs, with the potential for fully declaratively defined web components. Web components are now a commonplace part of front-end development, with major players like GitHub, Apple, and Adobe embracing them, and they continue to evolve through ongoing efforts to improve accessibility and features like server-side rendering.
Meanwhile, companies are feeling the pain of having built components tied to a specific framework. Web components solve this problem; they live in harmony with other frameworks, not against them. Teams don’t have to change their frameworks either: web components adapt to any JavaScript framework because they are natively supported elements in HTML. They are the standard for components, and they’re in every browser. This also keeps debugging manageable, since there are no framework abstractions in the way. They are easy to share across teams and applications, and building a design system around web components means that your design system is framework-agnostic. Libraries have made web components very easy to add anywhere and to wire into application logic, e.g. through native JS events. They work seamlessly across React, Vue, Angular, or plain HTML. This ensures long-term maintainability and prevents vendor lock-in, unlike framework-specific solutions. Web components are also compatible with micro-frontends and module federation, so clearly they are being considered during the development of new technologies. Related to this, I’d like to point out that the ‘staying power’ of a technology is greatly enhanced when it is built into a specification that popular, competing browsers are required to adopt. Such is the case for web components. This is important because some even speculate that native solutions such as web components could eventually replace frameworks altogether.
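The ‘native JS events’ integration can be sketched in a few lines: a component dispatches a standard CustomEvent, and any host application subscribes with plain addEventListener. The event name ‘item-selected’ and the payload shape are invented for illustration:

```javascript
// Fallback for runtimes without a global CustomEvent (e.g. older Node);
// browsers provide it natively.
const CustomEventImpl =
  typeof CustomEvent !== 'undefined'
    ? CustomEvent
    : class extends Event {
        constructor(type, options = {}) {
          super(type, options);
          this.detail = options.detail ?? null;
        }
      };

// An EventTarget stands in for a web component (say, a data grid element).
const grid = new EventTarget();

// Host application code, framework-agnostic: React, Vue, Angular, or
// plain JS can all attach listeners the same way.
let received = null;
grid.addEventListener('item-selected', (event) => {
  received = event.detail;
});

// Inside the component: announce a selection to whoever is listening.
grid.dispatchEvent(
  new CustomEventImpl('item-selected', {
    detail: { rowId: 42 },
    bubbles: true,   // in a real DOM, lets ancestor elements observe the event
    composed: true,  // lets the event escape the shadow DOM boundary
  })
);

console.log(received); // { rowId: 42 }
```

Because the contract is just DOM events, the host framework never needs to know how the component is implemented internally.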
So how have we used web components on my team? Our web components live in a repository dedicated to developing, testing, and publishing them. They are published and distributed as NPM packages, making them easy to share and import. Each component comes with a Storybook story that is also imported into a separate design-focused Storybook application, our ‘design lab’, where you can visually browse our inventory and interact with the components. Two application teams now source most of their components from our design system, three other teams have adopted some of our web components, and a few more are evaluating them. The set of components used, and how, varies between application teams. Most start with the Side Navigation component, since it serves as the visual backbone for our platform application UX. Our Grid System component is useful because it provides spacing alignment for all other components on the page. And of course, our data grid component is central to the functionality of important data-driven UI pages.
Our design lab application has become a great place to start exploring our component offering. Storybook gives you the tools to display each component in an organized way and lets people not only learn about them but also ‘shop the look’ by changing a component’s controls and playing with it to see if it makes sense for them. We have also built a demo application in our design lab, showcasing visual integrations of these components. This allows users to see an entire UI built with our components, and it also allows us, under the hood, to test these component integrations. More recently, we have built theming previews into our design lab, allowing users to apply a completely custom theme and see how it affects both the built-in demo application and each individual component separately. This ability is quite important: because our web components are compatible with theming changes, the design lab lets you preview a theme before you apply it in your application.
It probably goes without saying that we used web component technology to build all of these components. This means that, no matter what front-end framework you are using, you could bring these components into your application today, and even apply theming to them. Using a common set of components that work anywhere allows you to build applications faster and with a consistent look and feel. This has huge implications, and web components are the technology best suited to deliver this kind of central, modular approach to building UI elements. We don’t want to be limited by a less robust technology that acts as a barrier to cross-application modularity.
Thank you for reading!
Be sure to also check out this wonderful resource for web components in general: https://github.com/web-padawan/awesome-web-components#articles
Automating Dependency Management

After watching a great Developer Week presentation by Mike Hansen, SVP of Product and Engineering at Sonatype, I started thinking about how ‘magical’ automation is for everyday developer chores. The famous writer Arthur C. Clarke once wrote, “any sufficiently advanced technology is indistinguishable from magic.” Of course, when it comes to managing code dependencies and packages, it doesn’t quite feel like ‘magic’ — at least, not yet. Most of the dependencies we use in our codebases today are open source libraries; in fact, if open source software did not exist, it is estimated that companies would have to spend three to four times more on software development than they currently do. In 2022, the global open source software market was valued at almost 28 billion dollars, and it is projected to reach 75 billion dollars by 2028. That’s a major world economy, yet we are still in the early stages of realizing its potential.
Caching proxies like Artifactory and Sonatype Nexus have improved build times in many software projects, working alongside package registries like NPM. Such services, together with modern package-aware build tools, have abstracted away some of the complexity of managing dependencies, even in very large projects. Yet updating those dependencies is another issue altogether, especially since the average production-grade public-facing application has 150 open source dependencies. Multiply this by an average of ten released versions per year, and you’re looking at thousands of updates per year! That averages to about six dependency updates per working day, per application! This is quite a dynamic ecosystem, in which the slogan “if it ain’t broke, don’t fix it” should be considered very naive. What’s interesting is that about 75% of all dependencies don’t get updated by developers at all. This accumulates ‘open source tech debt’; not to mention, the longer you wait, the tougher the update can become. What makes it even worse is that approximately 96% of dependency vulnerabilities already have a fix in a newer version; yet 62% of the time, developers use an avoidable version of the package that still contains the vulnerabilities.
This is where the ‘magic’ of automation comes into play. Automating dependency updates gives you a better chance of catching vulnerabilities early. Simply put, better dependency automation delivers more monetary value to your development team by cutting down on vulnerability fixes that happen far too late in the production release cycle. According to Mike Hansen’s studies, proper ‘software supply chain’ automation can increase ‘productivity’ by 5% and ‘net innovation’ by 25%. Furthermore, studies have shown that teams which prioritize security alongside developer productivity usually perform better. So what are the barriers? For one, the pace of security reviews is slow: there are tons of dependencies to sift through, and it is often hard to gauge the impact of individual updates. On average, developers outnumber ‘security’ personnel 100 to 1, meaning that a lack of expertise also contributes to the problem.
Luckily, as the pace of modern coding increases, the opportunity for AI-based automation increases as well. Everyone knows that ChatGPT was released in 2022; however, security firms have been researching AI for years, often describing it as ‘new heavy equipment’ for information. Nowadays we have Dependabot, GitHub’s tool for keeping dependencies up to date, aided by machine learning. Yet it seems as if some important signals are still being missed. It takes time to sift through all of the vulnerabilities to fix them, making the ‘manual’ process a pain. This is why the best part of using Dependabot might be its Security Updates feature: using the repository’s dependency graph, Dependabot can check whether it is possible to update a dependency as soon as a vulnerability is identified. And since Dependabot is always scanning dependencies for vulnerabilities, it can open a pull request right away. The pull request contains all of the information needed to review what is being fixed and updated, and as soon as it is merged, the corresponding Dependabot vulnerability alert is resolved. Pretty much an all-in-one solution.
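For version updates (as opposed to the always-on security updates), Dependabot’s behavior is driven by a config file checked into the repository. A minimal sketch for an npm project; the weekly cadence and pull request limit here are arbitrary choices, not recommendations:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"     # which package manager to watch
    directory: "/"               # where the manifest (package.json) lives
    schedule:
      interval: "weekly"         # how often to check for new versions
    open-pull-requests-limit: 5  # cap the number of concurrent update PRs
```

Adding more entries under `updates` lets one repository cover several ecosystems (e.g. npm plus GitHub Actions) with different schedules.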
Now, Dependabot cannot guarantee that an update won’t break your application or its build, so it uses ‘compatibility scores’ derived from similar updates in public repositories and fed through its ML systems. These can guide you and improve your confidence that the pull request is safe, but ultimately the developer still decides whether to upgrade. Of course, Dependabot can be configured with varying degrees of ‘automation’: you can select which repositories are automated this way and what criteria are used to automate the updates. To be honest, I do think it will take time for developers to really trust such a system to run on its own, with configurability and control flexibility being key to building that trust. Most likely, developer teams will adopt such a process incrementally, but I believe that ultimately, with proper results, they should be able to start relying on this automation. Will you?
DORA Metrics for Quality Assurance
After seeing a presentation by Farah Chabchoub (a CTO and head of QA from France) at Developer Week a few months back, I realized there is a great approach to Quality Assurance at software companies that needs to be talked about more. In some cases, a whole ‘quality revolution’ is needed. First, of course, the ‘quality’ objectives and goals need to be well defined at the leadership level so that the details of these objectives can be implemented correctly (e.g. adhering to specific regulations or compliance). Next, the product should be fully reviewed by the quality assurance team to see where short-term vs long-term impact can be made. And after this, the work to ensure quality should be coupled with timelines that are actually achievable (i.e. monthly, quarterly, or yearly goals).
But to really create the most productive quality improvement plan, the metrics known as DORA should be defined. Simply put, DORA (DevOps Research and Assessment) is the largest and longest-running research program of its kind, seeking to understand the capabilities that drive software delivery and operations performance. DORA helps teams apply those capabilities, leading to better organizational performance. The most famous part of the group’s research is the four software delivery performance metrics (now known as ‘DORA metrics’):
Deployment frequency: How often a software team pushes changes to production. We should also note how often changes are new features versus patches or urgent fixes.
Change lead time: The time it takes to get committed code to run in production. This includes the whole development process as well as the time/fluidity of the deployments and production updates.
Change failure rate: The share of incidents, rollbacks, and failures out of all deployments. We need to have metrics on all the different types of changes needed (e.g. bugs and incidents are two different things).
Time to restore service: The time it takes to restore service in production after an incident. Care must be taken to avoid adding tech debt when this happens.
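Given a log of deployments, the four metrics above are straightforward to compute. A rough sketch in JavaScript; the record shape used here ({ failed, leadTimeHours, restoreHours }) is a made-up example, not a standard schema:

```javascript
// Compute DORA-style metrics from a list of deployment records.
// Each record is assumed to look like:
//   { failed: boolean, leadTimeHours: number, restoreHours?: number }
function doraMetrics(deployments, periodDays) {
  const failures = deployments.filter((d) => d.failed);
  const totalLead = deployments.reduce((s, d) => s + d.leadTimeHours, 0);
  const totalRestore = failures.reduce((s, d) => s + (d.restoreHours || 0), 0);
  return {
    // Deployment frequency: deployments per day over the period
    deploymentsPerDay: deployments.length / periodDays,
    // Change lead time: average hours from commit to running in production
    avgLeadTimeHours: totalLead / deployments.length,
    // Change failure rate: share of deployments that caused an incident/rollback
    changeFailureRate: failures.length / deployments.length,
    // Time to restore service: average hours to recover after a failure
    avgRestoreHours: failures.length ? totalRestore / failures.length : 0,
  };
}

// Example: four deployments over a 30-day window
const metrics = doraMetrics(
  [
    { failed: false, leadTimeHours: 20 },
    { failed: false, leadTimeHours: 30 },
    { failed: true, leadTimeHours: 40, restoreHours: 2 },
    { failed: false, leadTimeHours: 30 },
  ],
  30
);
console.log(metrics.changeFailureRate); // 0.25
```

Once a team records deployments in any structured form, a function like this makes ‘quality maturity’ measurable over each reporting period.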
Keeping track of these metrics is essential in my opinion. They need to be measurable so that we can evaluate ‘quality maturity’ and more clearly see where we are in our quality objectives. We need to know things that affect productivity, like:
How common are incidents? (Versus bugs for example).
How ‘reactive’ or rushed is development? (As opposed to being ‘proactive’ through well planned-out objectives/timelines)
Are we adding tech debt when we ‘repair’ things?
These metrics are all important to understand when, for example, responding to customer complaints. Of course, to more closely align these metrics with business objectives, we should also consider the following to adjust our quality assurance strategy:
How satisfied are customers?
Are we meeting our business objectives on time and with a high level of quality?
Is our backlog being taken care of, or is it being ignored and what is the impact of that?
To achieve a high level of proactivity that also ensures a high level of quality in our products, we need a unified vision of the goals around this. This can be difficult, as there are costs to adding processes or tools to help, but those costs have to be balanced against the benefits to the quality of our products. Ignoring this can put us in a bad place, as unexpected customer complaints/requests will increase due to a lower level of quality. As we all know, the impact of unexpected changes can be quite taxing on a development team. Fixing this is no small feat, but with enough leadership buy-in and drive behind this vision, along with more team engagement, I think it can be achieved.
Development during Design

There is often a gap between the ‘Design’ and ‘Development’ stages of product delivery. Basically, when design is done, there is usually some hand-off process to the developers. But how do these two phases interplay? Well, there are different groups of people responsible for each stage: One could argue that developers do not want to receive requirements that are not ‘fully baked’ or are subject to change, making them hesitant to start for fear of having to redo the work. One could also argue that a designer doesn’t want developers to start working on something the designer is still tinkering with, again fearing the developers will waste time pursuing an outdated design. But is it possible that both of these arguments are wrong? What if design and development could happen in more or less the same time frame? Can this be achieved through increased interactivity, collaboration, and feedback? I believe such a development process is possible, and I believe it could help turn ideas into delivered products faster and more efficiently.
But do we have the tools to support this today? Having a tokenized design system is a good start. It means that your team has at least thought about how to distribute the design tokens and variables into your application code base. This is a form of integration between design and development that achieves some of the desires of the two groups but still doesn’t improve on the linear hand-off process. For designers, it means that what they have envisioned visually has a more-defined path into the code; however, the developers still have to turn the design into code, and this transformation is imperfect. For developers, it means the designs begin to resemble modular code pieces; but again, they still need to transform them into actual code. The gap between design and code is still there because there is no technological bridge between them. This means that doing these phases at the same time will still have too much unnecessary back-and-forth, especially from developers having to turn design updates into code changes all the time. Until we solve the technological gap here, designers will still be ‘throwing things over a wall’ to developers to some extent.
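The token hand-off described above can be mechanized: tokens exported from the design tool as data are compiled into CSS custom properties that application code consumes. A small sketch; the token names and values are invented for illustration:

```javascript
// Turn a flat map of design tokens into a CSS :root block of custom
// properties. Real design systems often add grouping, aliasing, and
// per-platform outputs; this shows only the core transformation.
function tokensToCSS(tokens) {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name}: ${value};`
  );
  return `:root {\n${lines.join('\n')}\n}`;
}

// Example token set a design tool might export
const tokens = {
  'color-primary': '#0b5fff',
  'spacing-md': '16px',
};
const css = tokensToCSS(tokens);
// css can now be written to a stylesheet shared by every application,
// and components reference var(--color-primary), var(--spacing-md), etc.
```

The point is that the designer edits tokens, the build regenerates the stylesheet, and no developer hand-translates values; but as the surrounding text notes, the components themselves still have to be written by hand.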
So how do we create this ‘technological bridge’ without compromising standards or vital processes on either side? The simple answer is to fuse the design tools with the development tools. Have them operate in one language under the hood, and have them generate code that developers can use. Meanwhile, the UI of the design tools should behave the same way as before, so that designing the application still works the same way. I mean, why shouldn’t the developer get the work the designer has already done to position and lay out the elements in their correct places? Why can’t the developer get the visual HTML elements that have already shown up in the design mock-ups? As developers, we should be able to leverage the flexibility of the design tools to generate the visual and layout elements of early code structures.
A developer could request the generated code of the UI design, or of a specific element within it; yet even this doesn’t feel interactive enough. There is still a ‘one-way flow’ from ‘finished’ designs to developer. To get larger productivity gains, we need developers to leverage the design tools in a more interactive way. Design should almost become a developer-oriented experience, somewhat akin to an online ‘white-boarding’ tool. But for this, we need something more advanced: a design tool that doubles as a code playground. In other words, you should be able to change the visual design by updating its code, and change its code by updating the visual design. As most know, the main code formats within a web application are HTML, CSS, and JavaScript; but we can start with just the first two to keep things simple. As long as the CSS and HTML are updated when designers make a visual change, and the visual design is updated when developers change the CSS or HTML, then I think this suffices as the minimal functionality such a ‘playground’ requires.
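The two-way sync is easier to picture if the design tool’s internal model and the generated markup are treated as two views of the same data, so an edit on either side can regenerate the other. A toy sketch (a real tool would use a proper HTML parser; this regex-based version handles only one flat element):

```javascript
// Design-tool model -> HTML markup
function modelToHTML(node) {
  const style = Object.entries(node.style || {})
    .map(([k, v]) => `${k}: ${v}`)
    .join('; ');
  return `<${node.tag} style="${style}">${node.text || ''}</${node.tag}>`;
}

// HTML markup -> design-tool model (inverse of modelToHTML)
function htmlToModel(html) {
  const m = html.match(/^<(\w+) style="([^"]*)">([^<]*)<\/\1>$/);
  if (!m) throw new Error('unsupported markup');
  const style = {};
  for (const decl of m[2].split('; ').filter(Boolean)) {
    const [k, v] = decl.split(': ');
    style[k] = v;
  }
  return { tag: m[1], style, text: m[3] };
}

// Round trip: the designer edits the model, the developer edits the markup,
// and each side can be regenerated from the other without losing anything.
const model = { tag: 'div', style: { color: 'red', padding: '8px' }, text: 'Hello' };
const html = modelToHTML(model);
```

Because the round trip is lossless, neither side ‘owns’ the source of truth, which is exactly the property the playground needs.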
An even more advanced playground could include actual components among the pieces you can play with. You could then quickly pick and choose which components you want to integrate, and play around with a prototype that could easily be converted into actual code. It seems like a dream scenario, but how can we realistically get there? For one, Web Components are not a bad place to start. By definition, they are self-contained and work in any web application, since web browsers have natively supported the technology for a while now. Indeed, Web Components could be a great fit for such a design playground, especially since there would be less ‘framework’ overhead to worry about during QA. By bringing the components into such a design playground and keeping them accessible to developer and designer alike, we can create entire integration setups of applications ‘ahead of time’. Perhaps this could even be done, dare I say it, all in one phase. Does this mean the distinctions between the ‘Design’ and ‘Development’ phases become blurred? I think the answer is yes, and I can see a development process where designers and developers start almost simultaneously on new feature development. However, this intriguing idea needs the proper tools built out to support such a playground first. Nonetheless, the idea that your designs start their life as usable developer code is quite powerful!
But really, how do we build this? Creating and configuring such an environment might not be easy, but it is not impossible either. In the meantime, we can explore tools like Figma’s ‘Dev Mode’, which was built to help developers turn designs into code. It supports lots of helpful features and has made it easier for developers to use the designs. Is it the full solution? I don’t think so. Figma admits that the switch to ‘Dev Mode’ is not always easy, but they are racing ahead with more development on the product, so I am sure it will keep growing. There are a number of other products that attempt similar technological improvements to how we operate between ‘Design’ and ‘Development’. Building something custom yourself might also be the way to go, since I don’t think the actual technologies involved are that complicated. Most of all, I think it is the way we are used to operating within these phases that might be the hardest to change.
But the sell here is not difficult; there are so many benefits. By fusing design with development in this way, some issues can be avoided or fixed during the design phase, before ‘development’ even starts. Even QA can be involved earlier, so we can be assured of quality earlier in the process. Developers will have access to actual code from the designs earlier on, and designers will get up-to-date developer changes in their designs, minimizing the feedback and review loop between developer and designer. The playground becomes a central place where designers and developers can start work almost at the same time: a place that focuses on interactivity and real-time feedback of changes to the working design. The design itself becomes a working prototype version of the code. In other words, developers can take it and build on top of it without having to change what is already in the design. There is then less of an ‘owner’ of this phase, and more of a collaboration between everyone who interacts with the playground. Its utility would also extend to product owners and decision-makers of all kinds, the common goal being faster delivery from idea to product through more powerful research and prototyping interactivity during the ‘Design’ phase.
XR meets AI

A year ago I saw a great presentation by Hugh Season (@hughseaton), an innovation leader in the XR and e-learning space, and by Jeff Meador (@jeffmeador), an entrepreneur in the immersive reality and AI space. The talks were about using mixed and immersive reality technologies (VR, AR, etc.) for real-world applications, with AI added into the mix. The development of artificial intelligence, as you all know, is moving very quickly these days. But when you combine AI with some of the immersive technologies out there, you get amazing new developments: experiences that enhance how people interact with the environments around them, how they explore information, and even how they learn new skills. Here are some of the more interesting things I learned from these talks:
First of all, there are a few different ways to build AI systems for users to experience. One way is to build your own multi-model solutions from the ground up. In this case, you control everything and build it however you want. This can get expensive though, and requires a lot of talent and data just to get started. Another way is to integrate your system with an existing API (like Bixby, for example). This way is faster, as it requires less talent and data. It also means your product will become more advanced as the API that you have integrated with becomes better with upgrades. Of course, the easiest option is to just build your complicated stuff completely on top of an existing API without having to acquire much talent or data to get you started. While this last option is the most efficient and flexible in the short-term, it might not fit your exact needs and goals quite as well as you would hope. Still, there are a lot of tools out there to help you do this, like Google’s TensorFlow (an end-to-end open source machine learning platform). Furthermore, other great APIs, including ones with deep learning options, are also available from AWS, Microsoft, and IBM. These can be useful for things like intent modeling, conversational AI, business processes, and analytics.
But, if you have ever been involved in the development of any of these systems, then you know that data is everything if you want it to be successful. If you apply your machine learning straight to XR tech, you will definitely generate a lot of data, but the sheer size of the data points involved with spatial interactive data can be overwhelming. Furthermore, to make good model equations, a lot of data is required; and most of the effort involved in model building is the painstaking collection and massaging of the data. Nonetheless, you can capture data in these new-age XR environments on how people move, act, speak, sound, and other actions. The VR scene itself is huge, and there is a major need in it to specify what is recorded and categorize that data with characteristics and quality measurements. Unsupervised learning is sometimes recommended in this frontier of human interactions, in order to discover new patterns and ideas. Defining interaction zones can be helpful for this also. But most importantly, the barriers to applying machine learning effectively in this space do not lie in the technology, or in the statistics, or in the algorithms, but rather in understanding the data in meaningful ways, and in having adequate time to accomplish that.
Some of the places you might have already seen AI combined with human interactions are in predictive models, like the ones used in shopper recommendations and AI assistants. However, it is in the area of learning and skill development where things like virtual reality can really shine, especially when combined with AI. Virtual reality can put you through valuable experiences by simulating situations, early in your career, that may become critically important to you later. You can essentially assess your readiness for certain future situations this way. For example, crane simulators can reveal ahead of time whether new operators are actually afraid of heights, by exposing how they deal with specific spatial dimensions. Companies today are building such simulations and bringing AI into the mix when constructing these virtual training environments. Simulations involving interactions with avatars are great for on-boarding in some occupations, and can reveal early how well a person deals with things like difficult conversations, conflicts, task delegation, mentoring, providing feedback, social engagement, and even diversity and inclusion. These are great ways to assess leadership skills in candidates early, and some companies are even starting to create ‘baselines’ for how employee interactions should flow, based on these models. This has been made possible through the infusion of AI into the various immersive technologies to create consistent XR experiences and simulations.
And as it turns out, the potential to enhance how we learn is tremendous here. With VR, it is often the case that younger consultants can explain newer concepts to older colleagues, because someone can learn through a created virtual experience rather than through conventional means. Virtual trainers and avatars in these XR environments could one day replace traditional learning, as statistics show that these ways of learning are quite effective. The reason lies in the fact that during immersive experiences, your mind is more likely to take things seriously, since it feels like you are at the center of the experience, almost like in a dream. Unlike a dream, however, your mind remains fully focused in the virtual reality that it is engulfed in, and this makes things easier to remember. As the VR experience progresses, your sense of responsibility within it grows, and it even ‘forces’ you to recall other memories...
Knowing that all of this is not only in development but is already being applied in real-world use cases really does make me feel like we are in the future. Hopefully this article sparks your interest to pursue more information about the fields of XR and AI. Hope you enjoyed it.
1 note
·
View note
Text
WebAssembly

By now, you might have heard about WebAssembly. However, it still seems rather new to a lot of web developers out there. So what is it? It is an open standard that defines a portable binary format (along with a corresponding assembly-like text format) for executable programs, such as those that run in a browser. Essentially, it allows code written in other languages to run on the web, which is very useful for enabling high-performance apps in the browser. Ever wanted to run your high-performance C++ graphics application in the browser? Now you can!
Before WebAssembly there was asm.js: a highly optimizable subset of JavaScript designed to allow other languages to be compiled into it while maintaining performance characteristics that let applications run much faster than regular JavaScript. Web applications finally had a way to perform more efficiently in the browser. Compilers to JavaScript existed before asm.js, but they were terribly slow. WebAssembly, however, went one step further and took JavaScript out of the picture entirely. With Wasm (short for WebAssembly), a language such as C++ is compiled into a binary file that runs in the same sandbox as regular JavaScript. A compiler capable of producing WebAssembly binaries, such as Emscripten, allows the compiled program to run much faster than the same program written in traditional JavaScript. The compiled program can access browser memory in a fast and secure way by using the “linear memory” model. Basically, this means that the program can use the accessible memory pretty much like C++ does.
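To make this concrete, here is a minimal sketch of loading a Wasm binary from JavaScript. Normally a compiler like Emscripten produces the binary for you; here the byte array is a tiny hand-encoded module exporting a single `add` function, just to show that the end product really is a plain binary you instantiate from JS:

```javascript
// A hand-encoded WebAssembly binary: magic number, version, then
// sections declaring one exported function add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0; local.get 1; i32.add
]);

// Compile and instantiate; the instance runs in the same sandbox
// as the surrounding JavaScript.
const wasmModule = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(wasmModule);

console.log(instance.exports.add(2, 3)); // → 5
```

In a browser you would typically fetch the `.wasm` file and use `WebAssembly.instantiateStreaming` instead, but the idea is the same.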
WebAssembly is useful for facilitating interactions between programs and their host environments, such as on devices that translate higher-level code into bytecode. It is also useful in heavier-weight applications; for example, you can now run CAD software in the browser. WebAssembly removes some old limitations by allowing for real multithreading and 64-bit support. Load-time improvements for compiled programs can be achieved with streaming and tiered compilation, while already-compiled byte code can be stored with implicit HTTP caching. WebAssembly can sometimes even rival native code in speed, and is being utilized in things like CLI tools, game logic, and OS kernels.
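The linear memory model mentioned above can be observed directly from JavaScript: a Wasm module's memory is just a resizable buffer that you index through typed arrays, much like a raw C++ buffer. A small sketch:

```javascript
// Wasm linear memory is allocated in 64 KiB pages.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 65536 bytes

// JavaScript views the same bytes through typed arrays, pointer-style.
const view = new Uint8Array(memory.buffer);
view[0] = 42;

// Growing memory swaps in a larger buffer; existing contents are preserved,
// but old typed-array views become detached and must be recreated.
memory.grow(1); // now 2 pages
console.log(memory.buffer.byteLength); // → 131072
```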
Still, there are some things that WebAssembly lacks. Calls between WebAssembly and JavaScript are slow, there is no ES module integration, and there is no means for easy or fast data exchange. Toolchain integration and backwards compatibility also leave a lot of room for improvement. When compiling from other languages, garbage-collection integration is not great; neither is exception support. Debugging is also a pain, as there is no real equivalent of DevTools for WebAssembly.
But the advantages of using WebAssembly are still very intriguing: JavaScript that runs in the browser puts all of its code in a “sandbox” for security purposes; with WebAssembly, you can take the code out of the browser while keeping both the sandbox and the performance improvements! This could be better than Node’s native modules, although WebAssembly doesn’t have access to system resources unless you explicitly pass system functions into it.
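That last point can be sketched concretely. The hand-encoded module below imports a host function under the made-up name `env.log` (in real toolchains the compiler generates these import names) and calls it with the constant 42; the Wasm code can only touch the outside world through what the host hands it:

```javascript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm" + version 1
  0x01, 0x05, 0x01, 0x60, 0x01, 0x7f, 0x00,                   // type: (i32) -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import module "env"...
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         // ...field "log", a function
  0x03, 0x02, 0x01, 0x00,                                     // one local function of the same type
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // body: i32.const 42; call $log
]);

let received = null;
// The import object is the module's only window into the host environment.
const imports = { env: { log: (value) => { received = value; } } };

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
instance.exports.run();
console.log(received); // → 42
```

Withhold the import object and the module simply cannot log, read files, or reach the network; that is the sandbox at work.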
WebAssembly is still in its early stages and is not yet feature complete. Features are still being “unlocked” like a skill tree in a video game. But this doesn’t mean it hasn’t already made an impact in the application development space. Will it be on your list of things to try in 2020?
1 note
·
View note
Text
We have software specialists for your team!

Is your software team in need of help? I am working with top-notch, high-quality developers and other software specialists who are looking for contract work. Hiring a contractor means that the total cost of employing a badly needed extra resource to augment your staff could be much lower. Furthermore, it could mean a much more flexible arrangement for your development roadmap. Please contact me at your convenience if this is of interest.
...you can reach me at: [email protected]
1 note
·
View note
Text
Tech Recommendations for Web Apps

I was recently asked to give some recommendations for what front-end and back-end technologies I would choose for a new web application, and here is what I wrote:
For the front end, I highly recommend using React and related libraries to build out the client side of the application. It is the most popular way to build front ends today and comes with a huge community of support. Its unidirectional data flow and “reactive” approach make it very easy to debug complicated rendering flows. This also allows for a separation of concerns that brings easier analysis at each layer (from in-memory data-preparation business logic to local component state), and all of these come with great DevTools plugins for Chrome. I personally have a ton of experience working in the React ecosystem and highly appreciate many aspects of it, especially the virtual DOM diffing, with which I have built very complex and very fast visualizations; SVG + React + D3 is a favorite combination of mine, for instance. The virtual DOM reduces the interactions with the browser API to an efficient set by using a powerful diffing algorithm that compares the last ‘snapshot’ of a component with the new one about to be rendered. The JSX syntax for building the components’ DOM also allows for more intuitive logic by combining imperative JavaScript with HTML declarations.
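To give a feel for what diffing snapshots means, here is a toy sketch (not React's actual algorithm, and all the names are made up): two plain-data trees are compared, and only the minimal patch operations are emitted instead of re-rendering everything:

```javascript
// A "virtual node": plain data describing what should be on screen.
const vnode = (tag, props = {}, children = []) => ({ tag, props, children });

// Naive diff: walk the old and new snapshots together and collect
// patch operations for only what actually changed.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];

  const patches = [];
  const keys = new Set([...Object.keys(oldNode.props), ...Object.keys(newNode.props)]);
  for (const key of keys) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ op: 'setProp', path, key, value: newNode.props[key] });
    }
  }
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}/${i}`));
  }
  return patches;
}

const before = vnode('ul', {}, [vnode('li', { class: 'done' }), vnode('li', {})]);
const after = vnode('ul', {}, [vnode('li', { class: 'done' }), vnode('li', { class: 'done' })]);
console.log(diff(before, after)); // a single setProp patch for the second <li>
```

Applying only these patches to the real DOM is what keeps interactions with the browser API to an efficient minimum.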
Furthermore, there are plenty of libraries that, together with React, make for a powerful client development ecosystem. Redux manages application state and information flow between visual components in a very debuggable and straightforward way, especially when combined with the Immutable.js library and Connect. React Router manages routing in a declarative way that is very simple to understand and easily gives the client single-page application features (like syncing the URL to app state and back/forward state movement) without much hassle. React Native allows you to build mobile applications that are essentially OS-agnostic. TypeScript is easily integrated into React to reduce run-time errors and lets you be more confident when refactoring. Jest and Enzyme allow you to build out a nice unit-testing framework for your components and the app state. The bottom line is that React is one of the most used JS frameworks, is excellent for building advanced applications, and comes with a plethora of supporting tools.
For the back end, there are two clear approaches that stand out for me: Node.js (JavaScript) and Java. I think both approaches can be great, but let me outline some pros and cons of each, as well as the situations where each one is the right choice:
Using Node.js to serve your application and building it with JavaScript can have its advantages. Building your back end in JavaScript could mean that you only need to know one language to build the entire application (since the client is usually written in JavaScript already). Node also comes with an HTTP server that is very easy to set up and get running, and building an app from scratch using Node and JavaScript on the back end usually takes a lot less time than, say, building a Java application and setting it up to be served from an Apache server (or something of the sort). Of course, faster doesn’t mean safer; however, TypeScript can be used to “transform” JavaScript into what is basically a strictly-typed language, so that should help. Node has been used widely to build highly-scalable “real time” applications and is versatile and quite popular because of its simplicity and its use of JavaScript. Another advantage is that NoSQL databases (such as Mongo) can remove a lot of complexity between your application’s business logic and storage by using the same language (JavaScript).
However, since the Node environment runs asynchronous I/O, it can be harder to orchestrate how different user requests share resources in memory. The speed gain that you get out of JavaScript being more flexible in how it handles different requests (it runs on a single “non-blocking” thread) has to be weighed against the complexity this brings when application logic grows very large. In Java, each request is generally given a thread to work with, which allows you to isolate the “effect” of the request’s logic and better control exactly how and when resources are shared (via Java’s multithreading capabilities). Still, out of the box, Node can be even more scalable than the Java runtime environment when requests have to wait for external resources, such as reads from a database. This is because JavaScript’s event loop allows it to keep processing code asynchronously (later checking the event loop to see if previous operations have finished). Of course, in a finely-tuned Java application, the threads know the best time to block and will do so in the appropriate situations, mitigating the issue altogether (although with more code necessary to achieve this than what Node will start you with).
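The non-blocking behavior described above can be seen in miniature: synchronous code always runs to completion first, then microtasks (promises), then timer callbacks, with I/O results slotted in as they arrive. A small sketch:

```javascript
const order = [];

order.push('sync start');
setTimeout(() => order.push('timer callback'), 0);     // queued for a later event-loop turn
Promise.resolve().then(() => order.push('microtask')); // runs after sync code, before timers
order.push('sync end');

// Give the event loop a turn, then inspect the recorded order.
setTimeout(() => {
  console.log(order); // → ['sync start', 'sync end', 'microtask', 'timer callback']
}, 10);
```

That single thread never blocks while the timer is pending, which is exactly why Node can keep serving other requests while a database read is in flight.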
Java has been in use for the better part of three decades now and is a much more mature approach than Node. It is widely supported with a ton of libraries and tools, so building a complex application with it will hardly be a problem. Because of its maturity, and for other reasons (like being compiled and strictly-typed, its memory allocation, its automatic garbage collection, and its exception handling), it is considered better suited for building more secure back ends. It can also spend less time handling requests than Node because of its built-in multithreading, which lets the application perform better by executing tasks simultaneously in a more efficient manner. Java is useful for large-scale applications that have a lot of concurrency needs (e.g. for building a “smart spreadsheet” that can receive many realtime modifications from different users). It is also useful for processing large sets of data at high frequencies, and for applications that require CPU-intensive calculations.
In conclusion, writing a Java program will take longer and require more effort; however, the application will be safer and more robust, and will process data and calculations faster and more efficiently. Picking the right approach here will depend on the resources at your disposal (team knowledge, time constraints, etc…), the nature of the application (type of processing, concurrency needs, importance of security, etc…), considerations for the type of database used (usually relational versus NoSQL), and other factors. So…choose wisely!
PS: You can also read my article on choosing the right server for your application here: https://risebird.tumblr.com/post/140322249595/node-vs-apache-vs-lighttpd-vs-nginx
2 notes
·
View notes
Text
Mentorship
Earlier this year I saw a presentation at ForwardJS by Suraksha Pai (@PaiSuraksha), a full-stack engineer from Yelp, on mentorship. She talked about how she used to feel ‘lost’ until she started watching videos by successful people and those she idolized, and found something extremely important: that a lot of times success comes with the aid of a long-lasting friend; a “mentor”.
But why do we need a mentor? Well, a mentor can give you all kinds of advice, even on social issues and career changes. They can give you unwavering support and encouragement, giving you the confidence needed to drive harder towards your goals. They can challenge self-doubt and erase imaginary mental barriers. They allow you to expand your network and knowledge, as they themselves have experienced many things you might not have. They give you a fresh perspective on things and might see angles that you haven’t. It might be hard at first, because it means opening up to a stranger, but having a mentor helps your career growth: you learn from them, they boost your confidence, they help you develop specific skills, and in general they further your happiness and progress. All it takes to find a mentor is asking someone you work with or know whether they would want to do it; in the worst case, they could perhaps refer you to someone else. A good way to ease into this process is to describe to your would-be mentor how you see them, what they could help you with, what your goals are, and why you picked them.
A good mentor will use their experience as a guide for you. They will create a safe space for you to open up in by first building the trust needed for this. They will provide you with honest feedback, since without it they wouldn’t be able to truly help you improve. They won’t just hand out solutions, but rather foster a sense of independence, letting you try to come to the same conclusions yourself, perhaps through clues or other resources. This will give you the confidence to solve things on your own next time. They don’t need to be the “best” at everything; in reality, that is impossible. They just need to advise you enough to help you achieve what you are trying to do. Good mentors will be honest and tell you when they don’t have the answers.
Some companies like Yelp have mentorship programs. There, mentors are paired up with mentees and they work together to come up with goals and ideals, create a plan of action, and implement it. Some mentees try to keep their mentorship secret because they feel like they are lacking something, when getting help in this way could really just mean that they are ready for the next level of their career. Some mentees even have more than one mentor, with different mentors acting as ‘experts’ in different areas of interest. A mentor is really just someone that knows more than you in a specific domain of knowledge, regardless of other characteristics like the mentor’s age. After working with her mentor, Suraksha eventually became a mentor to someone else, and so can you! You can get started through mentorship programs, meet-ups, conferences, or even by offering to mentor new hires at your workplace. As they say: two heads are better than one, and helping others and receiving help is a key ingredient of success.
1 note
·
View note
Text
67 Articles later...

Three and a half years ago I wrote an article which summarized my foray into the world of software blogging, and provided clickable links to each of the thirty-three articles I had written up to that point (risebird.tumblr.com/post/115727628135/thirty-three-blog-articles-later). Although my article-writing pace has slowed down somewhat since then, the articles I’ve focused on writing have not lost their thirst for cutting-edge developments in the software industry (especially those in the quickly-evolving front-end space). I have evolved my career since April 2015 to be sure, and hopefully my writing ability has progressed also; but as always, you will be the judge of that. So without further ado, I give you the clickable summary of my latest thirty-four articles. Enjoy!
When building a new mobile app you might have asked yourself whether to build a web app, a native app, or perhaps a hybrid (risebird.tumblr.com/post/134179609930/how-should-i-build-my-mobile-app). Of course, nowadays you can even make a “Progressive Web App” that can still work offline (risebird.tumblr.com/post/172163267760/progressive-web-apps)! In any case it is extremely valuable for a web developer to understand how modern browser performance is advanced through the use of parallelism (risebird.tumblr.com/post/177300904440/modern-browser-performance). In fact, GPUs have re-invented data visualization because of it (risebird.tumblr.com/post/136172693860/getting-deeper-into-d3). Combined with the awesome D3 visualization library (risebird.tumblr.com/post/130034738155/the-d3js-visualization-library), we can accomplish a lot nowadays. D3 experts will get deeper into this library (risebird.tumblr.com/post/136172693860/getting-deeper-into-d3) and even explore it to a very detailed level (risebird.tumblr.com/post/167332736675/details-on-d3); however, the real sparks fly when you combine it with React (risebird.tumblr.com/post/147473986500/d3-react-awesome)!
With the UX expectations of users constantly changing, such as those of modern layouts (risebird.tumblr.com/post/124227499425/modern-layouts), new methods of delivering data to the users, such as immersive analytics (risebird.tumblr.com/post/164978764330/immersive-analytics) have been developed. Dealing with the data itself can be tricky enough (risebird.tumblr.com/post/179818036890/dealing-with-data), but you also need an application capable of bringing it to the end user as quickly as possible (risebird.tumblr.com/post/133313428230/modern-js-apps-single-page-vs-isomorphic). Will you create a standard service oriented architecture around Node (risebird.tumblr.com/post/142205462000/node-microservices) and use a popular tool to build your database (risebird.tumblr.com/post/119012149675/popular-databases-in-2015) to achieve this? Or will you perhaps rely completely on other third party providers for data access/storage (risebird.tumblr.com/post/123359620325/the-front-end-is-the-new-back-end)? Don’t forget that there is no such thing as “serverless” (risebird.tumblr.com/post/158339348525/there-is-no-such-thing-as-serverless). No matter what application server you do choose (risebird.tumblr.com/post/140322249595/node-vs-apache-vs-lighttpd-vs-nginx), be sure user access is regulated with JWTs (risebird.tumblr.com/post/139393722945/json-web-tokens), and that your applications are taking advantage of the new features available in HTTP2 (risebird.tumblr.com/post/132841671265/why-you-should-deploy-http-2-immediately). And always unit test your JavaScript with Jest (risebird.tumblr.com/post/163897551825/javascript-unit-testing-with-jest).
In the future, our applications will all be connected throughout the physical world (risebird.tumblr.com/post/137612619330/centralized-iot-development), and even in the present IoT is already impacting modern applications (risebird.tumblr.com/post/127988661815/a-real-world-iot-application). In such a connected world, it pays to understand how the networks and protocols that deliver these electronic messages to and from us operate (risebird.tumblr.com/post/156345541115/the-lte-physical-layer), and how the messages use sensors and actuators to interact with physical environments (risebird.tumblr.com/post/144442019200/fun-with-development-boards). Drones are especially interesting in this context (risebird.tumblr.com/post/117830816210/drones-in-2015). I also thought greatly about how AI can be used to evolve logic on its own (risebird.tumblr.com/post/170245126095/simulating-life-an-ai-experiment), and how abstract syntax trees can go a long way in achieving this (risebird.tumblr.com/post/171045918950/artificial-neural-networks-versus-abstract-syntax). After starting with some basic ideas for this (risebird.tumblr.com/post/174655336230/simulating-life-early-code), I dove into some of the complexities of mutations in directed evolution (risebird.tumblr.com/post/175132399985/simulating-life-mutations).
Finally, I wrote about the power of the emerging blockchain technology (risebird.tumblr.com/post/162633306305/blockchains-the-technological-shift-of-the-decade), the usefulness of using Wikipedia (risebird.tumblr.com/post/129037111590/trust-in-wikipedia), an interactive news map website I built with my father (risebird.tumblr.com/post/161497655355/mapreport-the-dynamic-news-map), and about making music with JavaScript (risebird.tumblr.com/post/166247227950/make-music-with-javascript)!
I hope you’ve had as much fun reading these articles as I’ve had writing them; it will be great to reflect on these again when I write my 100th article sometime soon :)
2 notes
·
View notes
Text
Dealing with Data

This fall I attended an event called “Lab Data to Machine Learning in 30 Seconds”, put on by Riffyn (@riffyn, https://riffyn.com/) and AWS, as part of the SynBioBeta 2018 (@SynBioBeta) Synthetic Biology Week. It included a panel discussion led by Riffyn’s CEO Tim Gardner and a panel of data science experts from Amyris, GSK, and Novartis (all three are listed on NASDAQ or NYSE). The focus of the panel was the rapid advance of machine learning and AI in biotech and pharmaceutical R&D; however, the discussion also ventured into other interesting areas, like structuring and integration in data systems and the goals and challenges of dealing with data in the biotech industry.
In order to understand how data science had changed the directions and shaped the perspectives of the panelists, the panel explored some of the challenges that they were dealing with in their projects. Sometimes these challenges included acquiring a deeper grasp of where signals or noise came from in the data. Sometimes they included strenuous efforts to ‘wrangle’ and structure the data (possibly from thousands of data sources) in order to create new molecules and compounds. Most of the time, however, the challenges involved long and painful analysis of countless spreadsheets. It was also interesting to hear about the kind of goals the panelists had for the data. When data poured in quickly, the panelists wanted to find ways to motivate teams while minding timelines, and to define objectives and limits for the data gathering. Often, though, the goals were centered around creating better data models (which are only as good as the data fed to them); but there was also a longing for an easier system.
The panelists had learned a lot about working with data in their projects and about trying to automate data-related processes. For instance, sometimes the ‘secret sauce’ they were looking for was buried deep within the data, and could only be ‘discovered’ through painstaking data combing and company cross-learning. Sometimes, to ensure that they had good data, they had to hire outside help. Sometimes there was excitement in joining data, which connected the scientists with the data’s impact. Other times, they had to mitigate the risk of using ‘biased’ data, since the way models were trained on the data could result in missed discoveries. In some cases, data that was valuable actually lived in the outliers, yet was still thrown away. Sifting through ‘garbage’ data had to be done in a large-scale, automated way so that biased decisions would not be relied on.
They also talked in depth about the aspect of private data (known as “dark data”), which is the data that companies keep to themselves. But how can incentive be increased for companies to start to bring the different sources of data together? There was no simple answer for this! There is usually a lot of mistrust inside of the organizations themselves, and often, the data is not easily available or sharable, hidden away in numerous SharePoint drives and Excel sheets. Most of the time, there is simply too much ego and territoriality attached to the data to allow the data to be accessible by everyone. The scientists working on the data are not cross-training enough or learning about each others’ processes. The culture of sharing data and cross-company involvement definitely needs to change and it usually starts at the top of the organizations. One big thing that could help this happen faster is the development of standards for the data sharing process itself. Such standards have often driven industries to the next level by making it easier for different companies and products to work together; however, standards take a long time to develop because they require companies to ‘opt-in’ to them first.
In tuning into this panel, I also found that there was a genuine enjoyment from the panelists in working with data, especially when it led to discoveries. They even had names for themselves, like “data detective” and “molecule artist”. When all of their data could be used, it was very gratifying, as was the ability to navigate the space of billions of predictions in their models. When their work led to a big landmark finding, it was all worth it! After all, isn’t science all about making discoveries?
1 note
·
View note
Text
Modern Browser Performance

Earlier this year I saw a presentation by Mozilla’s Lin Clark (@linclark), in a talk that had “the future of the browser” in its name. Really, this was a talk about web performance in modern browsers, with a key focus on parallelism (but more on that later). Here is what I learned:
With the increasing demand for faster web experiences, like those needed for animations, browsers had to become much faster at doing their job. If the browser does not repaint quickly enough, the experience does not flow as smoothly, at least not as smoothly as we have become accustomed to. Today we expect better screen resolutions and ultra-performant UX, even on web pages rendered on our phones. This has been brought to us not only by the new hardware constantly being developed, but also by the ever-evolving capabilities of modern browsers like Chrome and Firefox. But how did they achieve the higher processing prowess necessary to utilize the more advanced hardware?
Let’s start by examining how the hardware first improved: We began with single-core CPUs, which performed simple short-term memory operations with the help of their registers and had access to long-term memory stored in RAM. But in order to make a more powerful processing system, we had to split processing into multiple cores, all of them accessing a shared RAM storage. The cores, working side by side, take turns accessing and writing to RAM. But how do they coordinate this? How do they cooperate and ensure that the distributed order of reads and writes is correct? Solving such “data race” issues involved keen strategy and proper timing. It required a network of shared work, optimized to push the limits of processing power by using GPUs (which can have thousands of separate cores working together). For an example of how GPUs are used to distribute work, check out my article on GPUs here: http://risebird.tumblr.com/post/159538988405/gpus-re-inventing-data-visualization
To take full advantage of these hardware improvements, browser developers had to upgrade the rendering engine. In basic terms, the rendering engine in a browser takes HTML and CSS and creates a plan with it, after which, it turns that plan into pixels. This is done in several phases, culminating with the use of GPUs to compose paint layers; all of which is explained in more detail in an article I wrote four years ago, here: http://risebird.tumblr.com/post/77825280589/gpu-youre-my-hero
To upgrade this amazing engine, which brings daily web experiences to people around the globe, browser developers turned to parallelism. Parallelism, which Firefox has fully embraced since the summer of 2017, has long existed in Chrome, and is a big reason why that browser was historically faster. What is parallelism? In the context of a web browser, it is the splitting of the browser’s computational work into separate tasks, such as simultaneous processes running on different tabs. But utilizing it correctly, like when using fine-grained parallelism to share memory between many cores, requires very complicated technology and coordination. Not to mention that the resulting data races can cause some of the worst known security issues. Firefox developers, who, instead of starting from scratch, slowly merged parallelism into their existing browser, described the process as “like replacing jet engine parts in mid-flight”.
These new browser powers allow us to do much more than run different tabs at the same time on separate cores. We can now assign different cores to different parts of the same page’s UI (e.g. each Pinterest image on a different core). We can render web experiences by allowing JavaScript to run on separate cores with the use of web workers, which can now even share memory between each other. Finally, with the advent of WebAssembly, a low-level assembly-like language that runs with near-native performance and can be compiled from C/C++, performance is really starting to soar. For more information on WebAssembly and how it is used alongside the JavaScript in your browser, see: https://developer.mozilla.org/en-US/docs/WebAssembly
1 note
·
View note
Text
Simulating Life: Mutations
[BLOG ARTICLE HAS BEEN REMOVED. PLEASE EMAIL [email protected] FOR MORE DETAILS]
0 notes
Text
Simulating Life: Early Code
[BLOG ARTICLE HAS BEEN REMOVED. PLEASE EMAIL [email protected] FOR MORE DETAILS]
0 notes
Text
Progressive Web Apps
So you might have heard about Progressive Web Apps being the “new hot thing”, but what are they really? According to developers.google.com, they are reliable, fast, and engaging. But is this just nonsense made up by Google, or is this the new way to create experiences for interacting with the web, the biggest application platform known to man in terms of users? Thanks to a great presentation by Jon Kuperman (@jkup), which I saw recently at a ForwardJS conference, I now know that PWAs have begun to earn users’ trust all over the world by being more performant than other web apps that do not meet the “Progressive Web App standard”.
What is this standard exactly? For one, PWAs will never act like something is wrong when an asset can’t be downloaded from the host or when an API call is taking too long. This is because they attempt to employ every means possible not to ruin your experience, utilizing things like the caching system, local storage, and service workers to keep you in an immersive experience for as long as possible. This is especially important since users’ attention spans seem to plummet every decade. PWAs always use HTTPS, load quickly in your browser, and even work when there is no network connection at all. They also tend to be more user-friendly, asking users how they want to interact with the app in a “progressive” way, such as requesting permission to use native device features and falling back gracefully to a still-usable UX if such requests are denied.
But are PWAs ready for the World Wide Web? While Chrome and Firefox have taken steps to support PWAs, other browsers like Safari are still behind the curve. It is up to the PWA itself right now to fall back gracefully when used in an older browser. You can get a nice progress page on PWA feature support (broken down by browser) by going here: https://ispwaready.toxicjohann.com/
It does seem that PWAs are the future, considering that you don’t need to browse some app store to download them. Instead, they can be saved directly (and immediately) to your home screen via a browser link. In fact, app stores and similar catalogs are actually starting to include them, because PWAs have become “first-class citizens” in the application world. In any case, the decoupling of apps that can provide “native-like” experiences from the app stores where they have mostly been found in the past allows us to skip the commercial mumbo-jumbo (and sometimes payments) normally associated with downloading native applications. Emerging markets, such as those in countries where network carriers can only provide 2G networks, also stand to gain a lot from PWAs. This is because PWAs perform so well under poor network conditions, and are expected to work, at least in a minimal way, even when there is no network available at all. Oh, and PWAs can also do push notifications, provide APIs for sharing things natively, and offer full-screen experiences. Sold yet?
But how does one make a PWA? This is done by including a manifest JSON file in your application, in which you provide information on things like the application name, screen orientation, icons, background, etc… Basically, this file allows you to control how your app appears and how it is launched. A great site/library for getting started with PWA development, Workbox, can be found here: https://developers.google.com/web/tools/workbox/
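As a sketch, a minimal manifest might look like the following (the name, paths, and colors are made-up placeholder values):

```json
{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/index.html",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The `display` and `orientation` members are what let the app launch full-screen like a native app, while the icons and colors control how it appears on the home screen and splash screen.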
If you have already built a Progressive Web App, Google’s Lighthouse is a great tool to measure how “up to standard” your PWA really is. Lighthouse assesses how fast your application’s “time to interactive” is, as opposed to the usual “time to first byte/render” for web apps, because PWAs pride themselves on delivering a decent, usable experience to users as quickly as possible. Lighthouse is available as a Chrome plugin, and you can start using it to measure your PWA’s performance right away via the DevTools’ Audit tab.
The future of web apps is here folks, and Progressive Web Apps are the new way to provide great application experiences to the users, no matter how many bars they have on their network.
0 notes
Text
Artificial Neural Networks versus Abstract Syntax Trees
[BLOG ARTICLE HAS BEEN REMOVED. PLEASE EMAIL [email protected] FOR MORE DETAILS]
0 notes
Text
Simulating Life, an AI Experiment
[BLOG ARTICLE HAS BEEN REMOVED. PLEASE EMAIL [email protected] FOR MORE DETAILS]
10 notes
·
View notes