Best Node.js Frameworks to Use in 2025
The Node.js platform remains popular because it offers high performance, scalability, and an extensive framework ecosystem. Heading into 2025, demand for secure, high-performing web applications will only increase. Whether you are building RESTful APIs, real-time chat services, or microservice architectures, selecting the most suitable Node.js framework has a major impact on your project's performance and maintainability.
Before starting a complex application, it is worth hiring a dedicated Node.js developer who has mastered advanced frameworks, since expertise is what unlocks a framework's full potential. This piece examines the Node.js frameworks best suited for 2025, along with the reasons they are worth considering.

1. Express.js – The Classic Favorite
Express.js has been the go-to framework for Node.js developers for many years. Its minimalist, unopinionated design makes it a good match for small and medium-scale applications. In 2025, Express.js remains an established choice for developers who want maximum control over their application architecture.
Why Use Express.js in 2025?
Large community and rich documentation.
Extensive middleware support.
Straightforward to integrate with databases and front-end frameworks.
Hiring software developers experienced with Express.js and your chosen stack leads to efficient, scalable backend solutions.
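To make the "maximum control" point concrete, here is a minimal sketch of an Express endpoint. The route, handler, and the 'Ada' payload are invented for illustration; it assumes Express is installed (`npm i express`), and the server wiring is gated behind an environment flag so the handler can be read and unit-tested on its own:

```javascript
// The route handler is a plain function, kept separate from framework wiring
// so it is easy to unit-test without starting a server.
function getUser(req, res) {
  // In a real app this would come from a database; 'Ada' is a placeholder.
  res.json({ id: Number(req.params.id), name: 'Ada' });
}

// Set RUN_SERVER=1 to actually start Express (assumes `npm i express`).
if (process.env.RUN_SERVER) {
  const express = require('express');
  const app = express();
  app.get('/users/:id', getUser);
  app.listen(3000, () => console.log('API listening on port 3000'));
}

module.exports = { getUser };
```

Because the handler is just a function, you can exercise it with mock `req`/`res` objects in a test suite without any HTTP traffic.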
Read more: Top software development companies in 2025
2. NestJS – Best for Scalable Enterprise Applications
NestJS has grown increasingly popular in recent years because it makes complex, enterprise-level applications manageable. It uses TypeScript as its main language and borrows concepts from Angular, which makes it easy for front-end developers to pick up.
Key Features:
Modular architecture.
Built-in support for microservices and GraphQL.
Strong typing with TypeScript.
Organizations that plan to grow quickly while keeping a clean codebase will find NestJS a strong fit. Hiring Node.js developers with NestJS expertise helps deliver applications that are sustainable and future-proof.
3. Fastify – The Lightweight, High-Performance Framework
Performance-driven teams pick Fastify over the alternatives. It is known for fast HTTP handling with minimal overhead, and its plugin system and schema-based validation make it well suited to production-grade applications in 2025.
What Makes Fastify Stand Out?
High throughput performance
JSON schema validation.
Works efficiently in both development and production environments.
To get the most out of Fastify's performance, hire developers who specialize in speed and efficiency.
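The JSON schema validation mentioned above can be sketched as follows. The schema and route are invented for illustration, and the Fastify wiring (using the v4-style `listen` signature) is gated behind an environment flag and assumes `npm i fastify`:

```javascript
// Plain JSON Schema describing the expected request body; Fastify rejects
// non-conforming requests with a 400 before the handler runs.
const createUserSchema = {
  body: {
    type: 'object',
    required: ['username'],
    properties: {
      username: { type: 'string' },
      age: { type: 'integer', minimum: 0 }
    }
  }
};

// Set RUN_SERVER=1 to start Fastify (assumes `npm i fastify`, Fastify v4).
if (process.env.RUN_SERVER) {
  const fastify = require('fastify')({ logger: true });
  fastify.post('/users', { schema: createUserSchema }, async (request) => {
    return { created: request.body.username };
  });
  fastify.listen({ port: 3000 });
}

module.exports = { createUserSchema };
```

Keeping the schema as a plain object also lets you reuse it for documentation or client-side validation.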
4. Koa.js – The Modern and Elegant Alternative
Built by the team behind Express, Koa.js offers a modern, refined approach to building server-side applications. Its embrace of async/await (and, earlier, ES6 generators) simplifies middleware control flow and improves error handling.
Advantages of Using Koa in 2025:
Cleaner and more expressive code.
Great for building lightweight, modular applications.
Ships with no built-in middleware, giving developers complete control over the application stack.
Koa.js is a lean choice for businesses launching modern apps. To realize the framework's potential, your project should be handled by developers skilled in Node.js, advanced JavaScript, and async programming.
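The async middleware model Koa popularized can be sketched without the framework at all. The `compose` helper below is a simplified illustration of the pattern, not Koa's actual internals, and the middleware names are invented:

```javascript
// Each middleware receives a context and a `next` function returning a promise.
async function logger(ctx, next) {
  const start = Date.now();
  await next(); // run downstream middleware first, then measure
  ctx.log = `${ctx.method} ${ctx.path} took ${Date.now() - start}ms`;
}

async function respond(ctx, next) {
  ctx.body = `Hello, ${ctx.path}`;
  await next();
}

// Compose an array of middleware into a single callable chain.
function compose(middleware) {
  return function run(ctx) {
    let index = -1;
    function dispatch(i) {
      if (i <= index) return Promise.reject(new Error('next() called twice'));
      index = i;
      const fn = middleware[i];
      if (!fn) return Promise.resolve(); // end of the chain
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

const handle = compose([logger, respond]);
// handle({ method: 'GET', path: '/home' }) resolves once every middleware ran.
```

This "onion" flow, where each middleware can run code both before and after `await next()`, is what makes error handling with try/catch so natural in Koa.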
5. Sails.js – Ideal for Data-Driven Applications
Sails.js is an optimal choice for data-intensive applications such as CRM systems, dashboards, and APIs. It follows convention-over-configuration and ships with integrated WebSockets and ORM support as standard features.
Sails.js Highlights:
Follows MVC architecture.
Built-in ORM support through Waterline.
Auto-generated REST APIs.
It is an excellent fit for scalable backend systems, whether for startups or large enterprises that need to expand quickly. Experienced Sails.js developers can shorten development cycles while maintaining high output quality.
6. AdonisJS – The Laravel of Node.js
AdonisJS is often called the Laravel of Node.js because it offers full-stack features, a convention-driven design, and strong command-line tooling. With routing, middleware, authentication, and ORM support included, AdonisJS is a fully featured backend solution out of the box.
Why AdonisJS is Rising in 2025:
Full-stack development capabilities.
Built-in authentication and data validation.
TypeScript by default for type safety.
If your application requires complex features such as authentication, payments, or real-time functionality, expert Node.js developers with AdonisJS experience can deliver a secure and maintainable codebase.
Choosing the Right Framework in 2025
Each Node.js framework has its strengths and ideal use cases.
No matter which framework you choose, success depends on the expertise of your development team. When you hire experienced software developers, you ensure your application is built using best practices and industry standards.
Final Thoughts
In 2025, Node.js developers have a wide selection of frameworks covering very different needs, from performance-focused APIs to full-scale backend solutions. Selecting the right framework is an essential step, but successful implementation depends on qualified people.
If your project requires robust, scalable code, now is the time to hire experts who deeply understand modern frameworks. The right tools combined with the right team can carry your next project to remarkable results in 2025 and beyond.
What is TypeScript
TypeScript is a popular JavaScript superset created by Microsoft that brings a type system on top of all the flexibility and dynamic programming capabilities of JavaScript. The language is built as an open-source project, licensed under the Apache License 2.0, has a very active and vibrant community, and has taken off significantly since its original inception.
Installing TypeScript
To get started with TypeScript and try out all the examples, you can either install the TypeScript transpiler on your computer (more about this in the following paragraph), use the official online playground, or any other online solution you prefer. In case you want to try the examples locally, you need to install the command-line transpiler, which runs on Node. First, install Node.js and npm on your system. Then, create a Node.js project and install the TypeScript transpiler package:

```shell
mkdir typescript-intro
cd typescript-intro
npm init -y
npm i typescript
```

This installs the tsc (TypeScript compiler) command in the current project. To test the installation, create a TypeScript file called index.ts under your project directory with the following code:

```typescript
console.log(1);
```

Then, use the transpiler to transform it to JavaScript:

```shell
npx tsc index.ts
```

This generates a new file called index.js with exactly the same code as the TypeScript file. Use the node command to execute this new file:

```shell
node index.js
```

Although the transpiler did nothing besides creating a JavaScript file and copying the original code into it, these steps help you validate that your TypeScript installation is in good shape and ready to handle the next steps.
Note: TypeScript versions can have substantial differences even though they are released as minor revisions. It's common to bump into transpilation problems after a minor version update.
For that reason, it is better to install TypeScript locally in your project and execute it using npx when needed, instead of relying on a global TypeScript installation.
Defining a TypeScript Project
To define a TypeScript project within your Node.js project, you need to create a tsconfig.json file. The presence of this file in a directory denotes that the directory is the root of a TypeScript project. tsconfig.json contains a number of TypeScript configuration options that change the transpiler's behavior, such as which files to check or ignore, the transpilation target, and the imported types, among many others. You can create the TypeScript configuration file easily by running the following command:

```shell
npx tsc --init
```

The generated tsconfig.json file contains almost all available options with a brief description of what each lets you accomplish. Fortunately, most of these options have a good default value, so you can remove most of them from your file. This blog post will spend some time on the compiler options later on. For now, let's focus on writing some code.
TypeScript Features
The features of TypeScript are thoroughly explained in the TypeScript handbook. However, this article takes a more practical approach to some of these features. It will also shed light on some features that are often left out of content you find on the internet.
Typing fundamentals
TypeScript's basic idea is to keep the dynamism and flexibility of JavaScript under control through the usage of types. Let's see this concept in action through a practical exercise.
Create a file called test.js under your project directory and populate it with the following code:

```javascript
const addOne = (age) => {
  return age + 1;
};

const age = "thirty two";
console.log(addOne(age));
console.log(addOne(20));
```

Execute that file as follows:

```shell
node test.js
```

What was the output of the program? Do you think the output is correct? It turns out that running it on Node.js, or on any browser for that matter, would output thirty two1 without generating any warning. Nothing new here; it's just JavaScript behaving as flexibly as always. But what if you want to guarantee that the addOne() function accepts only numbers when called? You could change the code to validate the parameter's typeof during runtime, or you could use TypeScript to restrict that during compile time. Head back to the index.ts file you created earlier and replace its content with the following:

```typescript
const addOne = (age: number): number => {
  return age + 1;
};

console.log(addOne(32));
console.log(addOne("thirty two"));
```

Note that you are now restricting the parameter age to only accept values of type number as valid. Transpile the file again:

```shell
npx tsc index.ts
```

Using the TypeScript compiler to generate JavaScript now produces the following error:

```
index.ts:6:20 - error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.

6 console.log(addOne("thirty two"));
                     ~~~~~~~~~~~~

Found 1 error.
```

Defining types during application design can help you avoid mistakes like passing the wrong variable type to functions. string and number are two of the basic types that TypeScript supports. Besides these, TypeScript supports all the JavaScript primitive types, including boolean and symbol.
On top of these, TypeScript defines some types that do not map to anything in JavaScript directly but are very useful to represent some of the methodologies that are commonly used in the ecosystem:
enum is a constrained set of values.
any indicates that a variable/parameter can be anything, effectively skipping the type system.
unknown is the type-safe counterpart of any.
void indicates that a function won't return anything.
never indicates that a function always throws an exception or never finishes its execution.
Literal types are concrete subtypes of number, string, or boolean. What this means is that "Hello World" is a string, but a string is not "Hello World" inside the type system. The same goes for false in the case of booleans, or 3 for a number:

```typescript
declare function processNumber(s: 3 | 4): void;
declare function processAnyNumber(n: number): void;

const n: number = 10;
const n2: 3 = 3;

processNumber(n);     // type error: number is not assignable to 3 | 4
processAnyNumber(n2); // fine: 3 is a number
```

Aggregates
TypeScript supports aggregate types (maps, arrays, tuples), allowing a first level of type composition:
Maps
Maps are commonly used to manage an association of keys to values and to represent domain application data:

```typescript
type User = {
  id: number;
  username: string;
  name: string;
};

const user: User = {
  id: 1,
  username: "Superman",
  name: "Clark Kent",
};
```

Vectors
Vectors are a sequential and indexed data structure that has a fixed type for all its elements.
While this is not a feature that JavaScript supports natively, TypeScript's type system allows developers to emulate this concept:

```typescript
const user1: User = { id: 1, username: "Superman", name: "Clark Kent" };
const user2: User = { id: 2, username: "WonderWoman", name: "Diana Prince" };
const user3: User = { id: 3, username: "Spiderman", name: "Peter Parker" };

const userVector: User[] = [user1, user2, user3];
```

Tuples
Tuples are also a sequential, indexed data structure, but their elements' types can vary according to the fixed definition:

```typescript
const userTuple: [User, number] = [user1, 10];
```

Unions
Another way to compose types is through unions, which are very handy when a function argument can have multiple types. Suppose you want to write a function that will fetch the user's address details using either a User object or a string representing an email address.
First of all, let's install node-fetch in our project so that we can use the fetch function:

```shell
npm i node-fetch @types/node-fetch
```

Then, in the code, we can discriminate the two cases by type using the typeof operator:

```typescript
import fetch from 'node-fetch';

type User = {
  id: number;
  username: string;
  name: string;
  email: string;
};

async function fetchFromEmail(email: string) {
  const res = await fetch('https://jsonplaceholder.typicode.com/users');
  const parsed: User[] = await res.json();
  const user = parsed.find((u: User) => u.email === email);
  if (user) return fetchFromId(user.id);
  return undefined;
}

function fetchFromId(id: number) {
  return fetch(`https://jsonplaceholder.typicode.com/users/${id}`)
    .then(res => res.json())
    .then(user => user.address);
}

function getUserAddress(user: User | string) {
  if (typeof user === 'string') return fetchFromEmail(user);
  return fetchFromId(user.id);
}

getUserAddress("[email protected]")
  .then(console.log)
  .catch(console.error);
```

The type system is smart enough to note that, according to the if result, the value under check is a string or not; this is an implicit type guard. Let's take a look at that.
As a side note, tuples and unions play well together:

```typescript
const userTuple: Array<User | number> = [u, 10, 20, u, 30];
```

It is also possible to specify both the size and the type of every element in the array:

```typescript
const userTuple: [User, number] = [u, 10, 20, u, 30]; // type error: the tuple fixes the length at two elements
const anotherUserTuple: [User, number] = [u, 10];     // OK
```

Type guards
Type guards are expressions that perform a runtime check whose result can be used by the type system to narrow the scope of the checked argument. The typeof operator is a type guard; in the previous example, it has been used to narrow down the scope of the user argument.
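Another common guard is the instanceof operator, which narrows values by their class. A small sketch, with classes invented purely for illustration:

```typescript
// Both classes are hypothetical examples, not part of the article's domain model.
class Cat {
  constructor(public name: string) {}
  meow() { return 'meow'; }
}

class Dog {
  constructor(public name: string) {}
  bark() { return 'woof'; }
}

function speak(pet: Cat | Dog): string {
  if (pet instanceof Cat) {
    // Inside this branch the type system has narrowed pet to Cat,
    // so meow() is accessible without a cast.
    return pet.meow();
  }
  // Here pet can only be Dog.
  return pet.bark();
}
```

The narrowing works exactly like the typeof example: the runtime check doubles as compile-time information.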
There are other expressions that TypeScript treats as type guards, such as instanceof, !==, and in; the documentation has the complete list. To handle situations where the type system is not able to infer the specific type in the current scope, it is possible to define custom type guards through a predicate (a typed function returning a boolean):

```typescript
function isUser(u: unknown): u is User {
  if (u && typeof u === 'object') return 'username' in u && 'currentToken' in u;
  return false;
}

function getUserAddress(user: User | string) {
  if (isUser(user)) return fetchFromId(user.id);
  return fetchFromEmail(user);
}
```

User-defined type guards are completely under the developer's control, and TypeScript has no way to verify their correctness. A very common and legitimate use case for custom type guards is validating external data against a JSON Schema through an external library, such as Ajv. This usually happens in web applications where the request body is typed as unknown (or any, depending on the framework you're using), and we want to type-check it before moving on with its processing:

```typescript
import Ajv from "ajv";

const ajv = new Ajv();
const validate = ajv.compile({
  type: 'object',
  properties: {
    username: { type: 'string' },
    currentToken: { type: 'string' }
  },
});

function validateUser(data: unknown): data is User {
  return validate(data);
}
```

This mechanism relies upon the developer's discipline in keeping the JSON Schema definition in sync with the type. In fact, if we modify the type but not the JSON Schema, we would get TypeScript narrowing a type to something that it's not. We'll see in a different section an alternative that keeps these up to date automatically.
Discriminated unions
Unions with a common literal field are called discriminated unions.
When working with these, TypeScript is able to provide an implicit type guard, avoiding the burden of writing a custom one:

```typescript
type Member = { type: 'member', currentProject: string };
type Admin = { type: 'admin', projects: string[] };
type User = Member | Admin;

function getFirstProject(u: User) {
  if (u.type === 'member') return u.currentProject;
  return u.projects[0];
}
```

As you can see in the getFirstProject function, TypeScript can narrow the scope of the argument without you having to write any predicate. Trying to access the projects array in the first branch, or currentProject in the second branch, would result in a type error.
Runtime Validations
We have briefly explained how, in the case of custom type guards, it is up to the developer to test and make sure that the returned result is correct. In case of bugs in the predicate, the type system will have inaccurate information. Consider the following code snippet:

```typescript
function validateUser(data: unknown): data is User {
  return true;
}
```

This predicate will always return true, effectively leading the type checker to narrow a type to something that it is not:

```typescript
const invalidUser = undefined;
if (validateUser(invalidUser)) {
  console.log(invalidUser.name); // runtime error: invalidUser is undefined
}
```

TypeScript has a set of libraries that can help us keep the runtime validation in sync with the associated type automatically, providing an implicit type guard we do not have to manage. A notable one is runtypes, but in this article, we're going to take a look at io-ts.
Essentially, the idea is to define the shape of a type using the primitives included in io-ts; that defines a decoder we can use in our application to validate data we do not trust. Once we have installed the required dependencies (io-ts has fp-ts as a peer dependency):

```shell
npm i io-ts fp-ts
```

we can try the following code:

```typescript
import * as D from 'io-ts/Decoder';
import * as E from 'io-ts/Either';
import { pipe } from 'fp-ts/function';

const UserDecoder = D.type({
  id: D.number,
  username: D.string,
  name: D.string,
  email: D.string
});

// `data` is the untrusted input, e.g. a parsed request body
pipe(
  UserDecoder.decode(data),
  E.fold(
    error => console.log(D.draw(error)),
    decodedData => { console.log(decodedData.username); }
  )
);
```

TypeScript Configuration
The transpiler's behavior can be configured through a tsconfig.json file that indicates the root of a project. In particular, the file contains a series of key/value pairs controlling three main parts:
The project structure, such as which files to include in and exclude from the transpiling process, what the dependencies of the various TypeScript projects are, and how these projects can refer to each other through aliases.
Type checker behavior, such as whether to check for null and undefined in the codebase, whether to preserve const enums, and so on.
The runtime transpilation process.
TSConfig presets
TypeScript's transpiler can produce code down to ES3 and supports multiple module definitions (CommonJS, SystemJS). The right combination of the two depends on the runtime environment that you're using. For instance, if you're targeting Node 10, you can comfortably transpile to ES2015 and use CommonJS as the module resolution strategy. In case you're using a newer Node runtime, such as 14 or 15, then you can target ESNext or ES2020 and even dare to use the ESNext module strategy. Finally, if you're targeting the browser and you're not using a module bundler such as webpack or parcel, you might want to use UMD. Fortunately, the TypeScript team provides good presets that you can extend in your own tsconfig file and that handle most of these parameters for you.
Using them is relatively straightforward:

```json
{
  "extends": "@tsconfig/node12/tsconfig.json",
  "include": ["src"]
}
```

Notable configuration options
declaration: controls whether TypeScript should produce declaration files (.d.ts) with the transpilation. If your project is a library, it's good to enable this so that other developers using your code can benefit from the type checking. If the project is a deployable artifact, such as a web application, you can set this to false.
noEmitOnError: controls whether TypeScript should abort the transpilation in case there is a type error. If set to false, the type erasure and the JavaScript production will continue anyway. Generally speaking, true is the value to use.
strict: controls a family of additional type checks. Unless there are some good reasons (such as legacy libraries that haven't been correctly migrated/typed), disabling this is not a good idea.
noEmitHelpers: when necessary, TypeScript will emit functions and helpers to polyfill newer features that are not available in older standards, such as ES3 and ES5. If set to false, these helpers will be put at the beginning of your code; otherwise, they will be omitted, and you can install the tslib dependency separately.
Conclusions
Unlike most of the other introductory articles on TypeScript, hopefully this one gave you a different perspective on the capabilities that are often ignored in TypeScript. While not perfect, TypeScript's type system is pretty powerful, and for the people interested in using it to the extreme, I highly recommend taking a look at fp-ts and io-ts.
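As a recap of the configuration section, here is one plausible tsconfig.json for a small Node.js library that combines the preset shown earlier with the notable options discussed above; the specific values are illustrative, not prescriptive:

```json
{
  "extends": "@tsconfig/node12/tsconfig.json",
  "include": ["src"],
  "compilerOptions": {
    "declaration": true,
    "noEmitOnError": true,
    "strict": true,
    "outDir": "dist"
  }
}
```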
May 10, 2018
News and Links
Protocol
Latest Plasma implementation call.
Kelvin Fichter: Plasma XT. Plasma Cash but much less per-user data checking.
Xuanji Li: Plasma Cash with smaller exit procedure
Blockchain at Berkeley’s Plasma implementation
BankEx plasma on Rinkeby. github.
Latest Casper standup
Casper FFG v0.1 "First Release" to get serious about versioning
Latest Prysmatic Labs sharding implementation call
Justin Drake: a scheme for proposers to omit witnesses in stateless validator execution.
Scaling Ethereum panels: Deconomy and Edcon
Stuff for developers
Sūrya. Gonçalo Sá’s extension of soli for exploring Solidity code
scry, a resilient log awaiter in TypeScript
Parr, a query tool
Manticore v0.1.9
Conditional scheduling with the Ethereum Alarm Clock
Embark is now part of Status and released v3.0
Solidity event debugger
FOAM’s Cliquebait. PoA testnet in a Docker container
You can now inspect smart contract variables with Truffle’s built-in Solidity debugger in v4.1.8. Plus Truffle hits 500k downloads.
Zastrin basics of Ethereum course (free!)
Yoichi releases Bamboo v0.0.03
Ecosystem
Aragon Nest announces its first grants: Gnarly optimistic UI from XLNT, Prysmatic Labs for sharding, Frame: OS-level signing provider, toolkits for planning on Aragon from Space Decentral and Giveth, and a git-like versioning system using Aragon.
EF grant winner vipnode’s roadmap
Amberdata analytics dashboards
A look into Ethereum mining pools using miner payouts
Etherscan on its values and monetization strategy
Flippening: Ethereum’s fee market is now larger than Bitcoin’s
45 new members of the Enterprise Ethereum Alliance
Ron Resnick on how the EEA’s stack release makes Ethereum more attractive for enterprise than competitors with vendor lockin
An interview with Christine Moy, the new head of Quorum
Governance and Standards
the goals of EIP0 going forward
ERC1068: Loan standard
ERC1070: Standard Bounties
ERC1066: Status code proposal
ERC780 claims registry updates
simplify ENR for v4 compatibility merged into EIP778
EIP908 incentives for archive nodes update
ERC1046 to “extend ERC20 token interface to support the same metadata standard as ERC721” merged
EIP 1062: Formalize IPFS hash into ENS resolver
ERC1067: Upgradeable token approach
EIP1057: programmatic PoW
Discussion on ERC1056 lightweight identity
‘Final call’ added as draft to EIP process
Composables ERC998 Update
Dan Finlay proposes “Strange Loop,” a type of liquid democracy signaling process for governance
Project Updates
Kauri is live on Rinkeby, with knowledge base article requests for uPort, Maker, Aragon, Remix, Toshi, Dharma
Following up on their video, the Aragon Manifesto in print form.
Steph Curry has his own CryptoKitty
draft of the Civil Constitution
0x April update
OmiseGo full roadmap update
Townsquare Media to test BAT and Brave for adblock visitors
uPort’s take on data onchain in the age of GDPR
Interviews, Podcasts, Videos, Talks
Edcon live streams. See schedule to find a specific talk.
Linda Xie’s intro to crypto talk at an Andreessen Horowitz event
Interview of Kiev’s DappDev speakers
Blockchain UX with Jose Caballer and Chris Pallé
10 min Vitalik Buterin interview from Deconomy
Thibaut Sardan on setting up a full Parity node on Zero Knowledge
Fabian Vogelsteller with Jeffrey Tong
Tokens
SEC Commissioner Hester Peirce speech on how regulators should be lifeguards at the beach, not sandbox monitors
a16z’s Scott Kupor and Sonal Chokshi: Analogies, the big picture, and considerations for regulating crypto
Fred Wilson: Is Buying Crypto Assets “Investing?”
“12 Graphs That Show Just How Early The Cryptocurrency Market Is”
Bloomberg graphic on cryptoasset and non-cryptoasset correlations
General
Chris Ballinger launches MOBI to explore blockchain applications with companies comprising >70% of worldwide vehicle production, eg BMW, Ford, GM, Renault etc
Brian Armstrong publishes the decision making framework at Coinbase
Starkware raised 6m from an allstar list of investors for a zkSTARK hardware and software stack
Lots of melodrama about “WSJ FUD” when there was no public SEC hearing on May 7. Paul Vigna is a reputable reporter and only reported a “working group.” Seems to me like most of the kerfuffle was due to over-extrapolation by crypto media in search of pageviews.
Eric Schmidt talking web3 with EF and ConsenSys
Facebook’s messenger lead now leading blockchain efforts.
Dates of Note
Upcoming dates of note:
May 11-12 -- Ethereal (NYC)
May 15 -- Kleros sale
May 16-17 -- Token Summit (NYC)
May 17 -- Blockchain, Accounting, Audit, and Tax conference (NYC)
May 17-19 -- Melonport hackathon in Zug
May 18-20 -- EthMemphis hackathon
May 19-20 -- Hacketh (Warsaw)
May 25 - 27 -- EthBuenosAires hackathon
May 28 -- Zeppelin’s zepcon0 conference (Buenos Aires)
June 1 – Blockchain for Social Impact Conference (Washington, DC)
If you appreciate this newsletter, thank ConsenSys
I'm thankful that ConsenSys has brought me on and given me time to do this newsletter.

Editorial control is 100% me. If you're unhappy with editorial decisions, blame me first and last.
Shameless self-promotion
Here's the link to this issue: http://www.weekinethereum.com/post/173769093623/may-10-2018 Most of what I link to I tweet first: @evan_van_ness
I also have an Ethereum podcast on the Bitcoin Podcast network.
This newsletter is supported by ConsenSys, which is perpetually hiring if you’re interested.
Did someone forward this email to you? Sign up to receive the weekly email (box in the top blue header)

Do all JavaScript tools lead to Rome?
#501 — August 14, 2020
JavaScript Weekly

Rome: Unifying The Frontend Development Toolchain — We first casually mentioned this in-beta tool, dubbed the “spiritual successor of Babel”, back in April, but things have come a long way since then. Rome aims to replace Babel, ESLint, Webpack, Prettier, Jest, and more, to ostensibly simplify the frontend workflow. We’re all for it if it works. Here’s the introductory blog post.
Sebastian McKenzie
React 17.0 Release Candidate: No New Features but Still Lots That's Interesting — When a new major release comes along after two-and-a-half years the absence of new features is going to disappoint many. However, the ability to incrementally upgrade React sites both now and in the future is welcome news. There's still lots about which to be excited, so definitely have a skim at the least.
Dan Abramov and Rachel Nabors
Quickly Find Front-End Issues Like JavaScript or Network Errors — Get proactively alerted on any client-side issues such as JavaScript and network errors, optimize the load time of your front-end resources, and detect any UI issues that affect critical user journeys. Try it free with Datadog Synthetics.
Datadog sponsor
Tabris.js 3.6 Released with Support for Redux, PDFs and More — Tabris is a toolkit for building native mobile apps with JavaScript (including iOS apps without having a Mac). 3.6 adds support for building apps based on Redux.
Tim Buschtoens
⚡️ Quick bytes:
To commemorate our 500th issue (last week), Michael Rambeau kindly put together a timeline of 20 JavaScript projects between 2006-2020. It's not without its omissions but it's a good reminder of how things have progressed.
Node 14.8.0 has been released with (unflagged) top level await support.
BuckleScript and Reason have rebranded to ReScript. It's a popular way to write JavaScript by way of OCaml.
Gatsby has made a bunch of announcements around their new home page, the project's roadmap, and Gatsby Cloud.
Last week we mentioned Salesforce's $10K donation to ESLint and said we'd mention other similarly big donations to JavaScript projects. ESLint is double lucky, as Microsoft has donated $10K to it too.
As of this coming Saturday, GitLab will not be doing MFA resets on free accounts - so double check your backup login methods (e.g. recovery codes) if you use GitLab.
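The unflagged top-level await mentioned in the Node 14.8.0 item above lets an ES module await directly at module scope, with no wrapper function. A minimal sketch (the `delayed` helper is invented; the code must run in an ES module context, e.g. an .mjs file or a package with "type": "module"):

```javascript
// Top-level await requires an ES module (Node 14.8+ without flags).
const delayed = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// The module itself awaits; importers of this module wait for it to settle.
const answer = await delayed(42, 10);
console.log(answer);
```

Note that modules importing such a file implicitly wait for its top-level awaits to resolve before their own code runs.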
💻 Jobs
Full Stack Developer @ Huckberry (Columbus, Ohio) — We're striving to build the most beloved men's retail site on the planet, with React / Redux / Ruby on Rails.
Huckberry
Find a Job Through Vettery — Use Vettery to connect with hiring managers at startups and Fortune 500 companies. It's free for job-seekers.
Vettery
📚 Tutorials, Opinions and Stories

Natively Formatting JavaScript Numbers — Leverage native formatting capabilities such as locale, currency, units, compact mode, etc…
Elijah Manor
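The native formatting the item above refers to centers on the Intl.NumberFormat API, which handles locale, currency, and compact notation without any library. A quick sketch:

```javascript
// Locale-aware currency formatting, built into the JavaScript runtime.
const usd = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });

// Compact notation abbreviates large numbers (requires a modern runtime).
const compact = new Intl.NumberFormat('en-US', { notation: 'compact' });

console.log(usd.format(123456.789)); // "$123,456.79"
console.log(compact.format(1234567)); // "1.2M"
```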
▶ A 30 Minute Chat with Allen Wirfs-Brock — Allen is a true JavaScript expert and has a wealth of programming experience dating from the 60s as well as through working on the ECMAScript spec.
TC39er Podcast podcast
Become a Node.js Cloud Developer - Free Book and Video Course — Learn how to deploy, manage, secure and ship Node.js applications like a pro with this practical, hands-on course.
Kati Frantz sponsor
Supercharging <input type=number> with JavaScript — The number input type provides a nice control for working with numbers on most platforms, with min and max bounds, stepping up and down, etc. But what if you want to add more power to it with custom stepping types and controls? Kilian has a go at this here.
Kilian Valkhof
Building Single Page Applications Using Rust — WebAssembly continues to throw up all sorts of interesting options for the future development of Web applications. This isn't about JavaScript at all, but shows how Rust could perhaps act as, conceptually, a successor to it (or things like Elm) in some domains.
Sheshbabu Chinnakonda
Using Node Modules in Deno — Ada calls this ‘a bad practice’ but notes that ‘sometimes there is no alternative’ especially considering how huge the npm ecosystem is compared to Deno’s nascent one.
Ada Rose Cannon
Building a Programming Language in JavaScript From Scratch
Dmitry Soshnikov Education sponsor
How I Became A Senior JS Developer with Personal Projects — An inspiring tale. “One of the best decisions in my developer career was to go beyond just using tools, I decided to create them.”
Afonso Pacifer
Using ESLint to Improve Your App’s Performance
Pawel Wolak
🔧 Code & Tools

Mock Service Worker: API Mocking Library for Browser and Node — Interestingly uses the Service Worker API to intercept requests which you can then mock. Capture outgoing requests using an Express-like routing syntax, complete with parameters, wildcards, and regexes. GitHub repo.
Artem Zakharchenko
Deno 1.3.0 Released — Deno 1.0 came out in May and work has continued on the Rust and V8-based JavaScript/TypeScript runtime. We’re still publishing Deno Weekly too – here’s the latest issue.
Deno Land
omg-curry: A Feature Complete JavaScript Currying Library
Debdut Karmakar
Vest: Validations Library Inspired by Testing Frameworks — If you’re used to unit testing, the syntax will be familiar. We linked this quite recently but the author has added a variety of new features since.
Evyatar
AppSignal Automatically Instruments Apollo for Node.js Applications — With AppSignal’s automatic instrumentation, you get to debug GraphQL queries that seem to take a long time with ease.
AppSignal sponsor
React Sortable Tree: Maintain Your Trees Intuitively — Dragging-and-dropping is what many users expect of apps these days. Why not give them what they want? The gif demo illustrates the promise of this hierarchy maintenance component.
Frontend Collective
crypto-hash: A Hashing Module Using the Native Crypto API in Node and Browser — Get the same hashing API in both environments. Uses the crypto module on Node and window.crypto in the browser.
Sindre Sorhus
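For comparison, Node's built-in crypto module already exposes synchronous hashing; a sketch of the equivalent call (crypto-hash itself returns a Promise and selects the right backend per environment, so this is only the Node side of the idea):

```typescript
import { createHash } from 'crypto';

// SHA-256 of a string as a hex digest, using Node's built-in crypto module.
function sha256Hex(input: string): string {
  return createHash('sha256').update(input).digest('hex');
}

console.log(sha256Hex('abc'));
// → ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```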
babel-plugin-native-error-extend: Babel Plugin for Native Error Extending
Ivan Nikolić
🆕 Quick releases:
Storybook 6.0 — UI component building system.
Hapi 20.0 — Batteries included Node webapp framework.
MQTT.js 4.2 — MQTT client library.
Fetch 3.4 — window.fetch polyfill by GitHub.
Prism 1.21 — Syntax highlighter.
via JavaScript Weekly https://ift.tt/3kSulPV
0 notes
Text
Annotated edition of May 3 Week in Ethereum News
As it is wont to do, the newsletter buried the lede: ProgPoW is indefinitely shelved.
I think it’s been relatively clear since last time that ProgPoW wasn’t going to happen. The leads of the two largest clients are against it personally, plus it’s quite clear that there isn’t anything close to community consensus. If anything, at the moment the majority of the community opposes it. Greg Colvin bringing it up again last week unfortunately made it harder to do in the case where we do actually need it, ie an ASIC manufacturer has a 10x breakthrough but is only selling the machines privately to control 50%+ of the network.
I’d say it’s unclear whether ACD continues to be a thing. To me it feels like an experiment which was worth trying but has become calcified, which needs a complete refresh in terms of both process and non-technical people involved. But inertia is also a very strong force. To overcome that, Ethereum should have a strong culture of continuously sunsetting things if they are not working.
One amusing thing to me has been the idea that ProgPoW is an AMD/Nvidia conspiracy. Given that ETH price declining in 2018 absolutely destroyed their earnings and share price, those two should have been conspiring! Yet if they were, then they did an exceptionally bad job at it. Instead everyone I know got the impression that the GPU manufacturers were indifferent. There are some competing interests for them of course - the anger of their traditional gaming market, plus AI/neural net researchers - but it still surprises me how they did not get involved at all.
Despite the noise, Ethereum governance works! I remember polling everyone I talked to at EthDenver2019 about whether they supported ProgPoW and (at the time I was pro-ProgPoW; I’d say my position is much more complicated now) being disappointed at how everyone I talked to was against it.
I’m very glad we don’t have on-chain governance where a few exchanges/whales could collude to push things through. Because of that, I’d say on-chain governance will drastically limit the market cap of any basechain’s native token.
Eth1
Latest core devs call. Tim Beiko’s notes. Updates on EIPs for eth2 curve, EVM subroutines. ProgPoW shelved due to clear lack of consensus. Discussion of migrating to binary trie
Analysis of EIP-2315 simple EVM subroutines
DHT+SkipGraph for chain and state data retrieval
Notes from the fee market change call
Vitalik’s EIP1559 fee market change FAQ
There’s a risk of being repetitive, but much of the eth1 work does not lend itself to high-level summaries. Folks are discussing the technical details of EVM improvements (eg, subroutines), as well as getting clients to be stateless (eg the DHT and Skipgraph link). And we’re also talking through EIP1559 in light of Dan Finlay’s escalator algo alternative proposal.
One development not mentioned is that Martin Swende has come around to Alexey’s gas/oil proposal instead of his previous approach of penalties for trie misses.
Eth2
Latest what’s new in Eth2
Schlesi multi-client testnet launched with Lighthouse and (slightly updated) Prysmatic clients. Then Nimbus joined Schlesi a few days later.
Bitfly has a Schlesi explorer
Nimbus client update – up to date, joining Schlesi testnet, RFP for security audits, and benchmarking Nimbus on a 2018 midrange phone
Update from ConsenSys’s TXRX team: prkl network monitoring tool, verifiable precompiles, cross-shard tx simulator, fork choice testing, discv5 sim, and work on turning off proof of work.
A step-by-step guide on joining Prysmatic’s Topaz testnet for Windows10 and MacOS
ConsenSys’s high-level eth2 FAQ
I don’t really do corrections in the newsletter, because once you send an email, you can’t easily clarify your language without sending another email.
But, if you click the “Nimbus joined Schlesi,” then it appears to me that Nimbus is receiving the blocks and following the chain, but not proposing/attesting/etc. I probably should have been more clear when I said “joined.”
Layer2
Channels funding channels: how state channels reduce latency and onchain transactions
This series feels to me like a “yes, state channels are almost here now, let’s get ready to reconsider how to use them.” Productionizing any new technology isn’t easy, and finding the uses that best fit the tradeoffs is not trivial. Seems like this is that series.
This newsletter is made possible by Chainlink
Decentralized oracles are a key capability enabling smart contracts to reach their full potential. Come work with the leading team providing oracles for Ethereum. Join us to help build the next generation of smart contracts in roles including Software Engineering, Developer Evangelism, DevOps, Product Management, and more.
I’m very excited that Week in Ethereum News will continue for the next year due to Chainlink, Celer, Trail of Bits, and 0x.
Stuff for developers
buidler v1.3 – test time-based cases in Buidler EVM, works with TheGraph
Waffle’s plan for making testing better with v3
Testing with Python and Brownie
Typechain v2 – Typescript bindings. truffle v5 support, natspec
Solidity docgen v0.5 – documentation generation for Solidity project
Running async/await scripts in Remix IDE
Austin Griffith’s scaffold-eth, a toolkit to prototype and win hackathons
A linked hashmap in Solidity
How to add proxy Ethereum addresses to BigQuery
Authereum’s batched transactions API for interest rate arbs
discv5 feasibility study for Status
Tutorial on testing on a mainnet fork with Ganache, Jest and Uniswap
Etheroll security issue: hacker monitoring for onchain forks and then uses that info to frontrun transactions. Novel (to me!) hack
Dragonfly releases an oracles tracker
Synthetix CTO Justin Moses on 10 things they did to improve their Ethereum development experience. tldr: Buidler, Slither, TheGraph, and Tenderly.
It feels like a very undercommented trend how most devs now tell me that their stack is Buidler + Waffle + ethers, and increasingly Typescript as well. Of course, dev tool stacks are perpetually in flux, but this seems to be the stack du jour.
This isn’t new either. After writing the paragraph above, I remembered that Connext’s Rahul had written something about a similar stack 3 months ago. I go back and check Rahul’s recommendation: Buidler + Waffle + Ethers + typescript. If this was a chatbox, I would put a rofl emoji, but in prose this seems less appropriate.
Ecosystem
Contribute to the TornadoCash trusted setup ceremony. It takes about 5 secs of clicking and requires you to leave the browser tab open a few minutes.
Multisigs controlling multisigs: Avsa’s vision for a usable web3
Renew your ENS names or you will lose them. Names start to expire May 4th
Forgive me the clickbait - you actually have 90 days grace period if your domain expired, but I don’t want anyone to miss this if their domain has expired.
If you haven’t contributed to Tornado’s trusted setup ceremony, I recommend that you do. Assuming that the software works correctly, you can ensure that Tornado becomes trustless for you by participating! It literally takes just a few seconds to start, and then you leave your browser tab open for about 3 minutes. You can even contribute multiple times.
Enterprise
Hyperledger Besu v1.4.4, added priv_getLogs, added Splunk integration
Governance, DAOs, and standards
Governance processes for Maker and Compound add WBTC to Maker and USDT to Compound. TBTC also proposed for Maker
Maker’s MIPs ratification vote is live
MetaClan: DAOs for in-game coordination
ERC2611: Geotimeline Contact Tracing Data Standard
Last call: ERC1363 Payable Token
Last call: EIP1193 Eth provider Javascript API
ERC 2612: permit, 712-signed approvals
EIP2357: Total difficulty in block header
Lots of blowback to Maker adding WBTC. I very much understand the criticism, but to me it looks like Maker is taking reasonable measures, given the current situation where DAI is trading a little rich on the peg. It’s true that permissioned assets have some risk, but this is literally why MKR is supposed to have value: because those MKR holders make good decisions.
Now perhaps you don’t like that model, and that also makes sense, designing for stablecoins is a large solution space. But this has always been the Maker vision. And I say this as someone who does not hold any MKR, and never has (though you’re welcome to give me some!).
Application layer
DeFiZap and DeFiSnap merged to be ZapperFi: now track and trade your DeFi together
Gnosis Safe apps: interact with apps straight from the Gnosis Safe interface
dforce/lendfme plan post-hack: user airdrop, dSAFU insurance fund, large bug bounty
OpenBazaar now supports Eth
A rough proposal for a GasToken forward
Everest: a project registry from TheGraph and MetaCartel
I know I have said this before, but the ebb and flow between sections is fascinating to me. The stuff for devs section was full this week, but the app layer was a little light. Maybe I just missed stuff.
Arbitrary “how much of this section is DeFi” count: 3/6
Tokens/Business/Regulation
UMA did an Initial Uniswap Offering, and there was a 5-10x spike
It appears Telegram will have to return $1.2 billion to investors
Ideo’s Simple Agreement for Future Governance for DeFi
Auditing the 10k top Eth addresses: ETH is better distributed than BTC and a bunch of other interesting claims
I again note that US federal regulators continue to bailout Silicon Valley investors from the worst deals that Silicon Valley did in late 2017/early 2018.
I’d say it’s inevitable that we’re going to see some folks copy UMA. Watch for it.
Adam Cochran’s onchain activity of top 10k addresses is very interesting. Definitely some undersupported claims in there, but certainly worth a read. This is the second time he wrote a 100+ tweetstorm and then compiled it to a blog post. Personally I prefer viewing it as a blog post.
General
EtherScan Connect: an alpha for mapping addresses with a leaderboard
a16z raises $515m crypto fund
Vitalik’s review of Gitcoin grants round 5
SuperMarlin: no trusted setup with DARK polynomial commitment
“alpha for mapping addresses with a leaderboard“ is another thing I could have said more eloquently. It’s an interesting attempt by Etherscan to give something to their community, though of course it comes with risks.
There’s something amusing about a16z announcing a new fund, mentioning Bitcoin, and then mostly talking about the stuff that’s being built on Ethereum, without actually mentioning Ethereum. People like to talk about being contrarian investors. Wanna know how buying ETH is somehow still a contrarian play in crypto right now? It’s right there.
zk continues to just explode. It almost seems like plug and play, where people are pulling out the parts of different schemes that they like and putting in others, depending on the tradeoffs you want around trusted setups, verifier time, prover cost, etc.
Housekeeping
First issue post-ConsenSys. As a reminder, this newsletter is and has always been 100% owned by me.
Did you get forwarded this newsletter? Sign up to receive it weekly
Permalink: https://weekinethereumnews.com/week-in-ethereum-news-may-3-2020/
Dates of Note
Upcoming dates of note (new/changes in bold):
May 6-20 – Gitcoin’s virtual hackathon
May 8-9 – Ethereal Summit (NYC)
May 22-31 – Ethereum Madrid public health virtual hackathon
May 29-June 16 – SOSHackathon
June 17 – EthBarcelona R&D workshop
0 notes
Link
One of the most important things, and one often neglected by developers, is performance. One of the key focus areas for the 1.0 release was making it blazingly fast ⚡
TypeGraphQL is basically an abstraction layer built on top of the reference GraphQL implementation for JavaScript - graphql-js. To measure the overhead of the abstraction, a few demo examples were made to compare it against the "bare metal" - using raw graphql-js library.
It turned out that in the most demanding cases, like returning an array of 25 000 nested objects, the old version 0.17 was about 5 times slower!
library              execution time
TypeGraphQL v0.17    1253.28 ms
graphql-js            265.52 ms
After profiling the code and finding all the root causes (like always using async execution path), the overhead was reduced from 500% to just 17% in v1.0.0! By using simpleResolvers it can be reduced even further, up to 13%:
library                       execution time
graphql-js                     265.52 ms
TypeGraphQL v1.0               310.36 ms
  with "simpleResolvers"       299.61 ms
  with a global middleware    1267.82 ms
Such small overhead is much easier to accept than the initial 500%! More info about how to enable the performance optimizations in the more complex cases can be found in the docs 📖.
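The quoted overhead percentages follow directly from the benchmark timings above; a quick arithmetic check:

```typescript
// Overhead of TypeGraphQL relative to raw graphql-js,
// using the benchmark timings quoted above (in ms).
const graphqlJs = 265.52;
const v1 = 310.36;
const withSimpleResolvers = 299.61;

const overheadPct = (t: number) => Math.round((t / graphqlJs - 1) * 100);

console.log(overheadPct(v1));                  // → 17
console.log(overheadPct(withSimpleResolvers)); // → 13
```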
Schema isolation
This is another feature that is not visible from the first sight but gives new possibilities like splitting the schema to public and private ones 👀
In 0.17.x and before, the schema was built from all the metadata collected by evaluating the TypeGraphQL decorators. The drawback of this approach was schema leaks - every subsequent call of buildSchema returned the same schema, combined from all the types and resolvers that could be found in the metadata storage.
In TypeGraphQL 1.0 it's no longer true! The schemas are now isolated, which means that the buildSchema call takes the resolvers array from options and emits only the queries, mutations and types that are related to those resolvers.
const firstSchema = await buildSchema({
  resolvers: [FirstResolver],
});

const secondSchema = await buildSchema({
  resolvers: [SecondResolver],
});
So just by modifying the resolvers option we can have different sets of operations exposed in the GraphQL schemas! Proper isolation also makes serverless development easier as it allows to get rid of the "Schema must contain uniquely named types" errors and others.
Directives and extensions
These two new features are two complementary ways to attach metadata to schema items.
Though the syntax of GraphQL directives might remind you of TS decorators (as "a directive is an identifier preceded by a @ character"), they are in fact purely a Schema Definition Language feature. Apart from the metadata capabilities, they can also modify the schema and e.g. generate the connection type for pagination purposes. Basically, they look like this:
type Query { foobar: String! @auth(requires: USER) }
To apply them, we just need to put the @Directive decorator above and supply the string argument, e.g.:
@Resolver()
class FooBarResolver {
  @Directive("@auth(requires: USER)")
  @Query()
  foobar(): string {
    return "foobar";
  }
}
However, on the other side we have GraphQL extensions, which are the JS way to achieve the same goal. They are the recommended way of putting metadata on types when applying some custom logic.
To declare the extensions for a type or a selected field, we need to use the @Extensions decorator, e.g.:
@ObjectType()
class Foo {
  @Extensions({ roles: [Role.User] })
  @Field()
  bar: string;
}
We can then read that metadata in the resolvers or middlewares, just by exploring the GraphQLResolveInfo object, e.g.:
export const ExtensionsMiddleware: MiddlewareFn = async ({ info }, next) => {
  const { extensions } = info.parentType.getFields()[info.fieldName];
  console.log(extensions?.roles); // log the metadata
  return next();
};
More info about the directives and extensions features can be found in the docs 📖
Resolvers and arguments for interface fields
The last thing that was preventing TypeGraphQL from being fully GraphQL compliant, thus blocking the 1.0 release: the ability to provide resolver implementations for interface fields and to declare their arguments.
Basically, we can define resolvers for interface fields using the same syntax we would use in the case of an @ObjectType, e.g.:
@InterfaceType()
abstract class IPerson {
  @Field()
  avatar(@Arg("size") size: number): string {
    return `http://i.pravatar.cc/${size}`;
  }
}
...with only a few exceptions for cases like abstract methods and inheritance, which you can read about in the docs.
More descriptive errors messages
One of the most irritating issues for newcomers was the laconic error messages that didn't provide enough info to easily find the mistakes in the code.
Messages like "Cannot determine GraphQL input type for users" or even a generic "Generating schema error" were clearly not helpful enough while searching for the place where the flaws were located.
Now, when an error occurs, it is broadly explained why it happened and what we can do to fix it, e.g.:
Unable to infer GraphQL type from TypeScript reflection system. You need to provide explicit type for argument named 'filter' of 'getUsers' of 'UserResolver' class.
or:
Some errors occurred while generating GraphQL schema: Interface field 'IUser.accountBalance' expects type 'String!' but 'Student.accountBalance' is of type 'Float'
That should allow developers to save tons of time and really speed up the development 🏎
Transforming nested inputs and arrays
In the previous releases, an instance of the input type class was created only on the first level of inputs nesting. So, in cases like this:
@InputType()
class SampleInput {
  @Field()
  sampleStringField: string;

  @Field()
  nestedField: SomeNestedInput;
}

@Resolver()
class SampleResolver {
  @Query()
  sampleQuery(@Arg("input") input: SampleInput): boolean {
    return input.nestedField instanceof SomeNestedInput;
  }
}
the nestedField property of input was just a plain Object, not an instance of the SomeNestedInput class. That behavior produced some unwanted issues, including limited support for inputs and args validation.
Since the 1.0 release it's no longer an issue: all the nested args and inputs are properly transformed into instances of the corresponding input type classes, even including deeply nested arrays.
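The practical difference is the one instanceof captures. An illustrative sketch in plain TypeScript (not TypeGraphQL internals; SomeNestedInput here is a stand-in class):

```typescript
// A stand-in for an input type class with behavior attached.
class SomeNestedInput {
  label = 'nested';
  describe(): string {
    return `input: ${this.label}`;
  }
}

// What a raw JSON payload gives you: the right shape, but a plain object.
const plain = { label: 'nested' };
console.log(plain instanceof SomeNestedInput); // false

// What the 1.0 transformation produces: a real instance, methods included.
const instance = Object.assign(new SomeNestedInput(), plain);
console.log(instance instanceof SomeNestedInput); // true
console.log(instance.describe()); // "input: nested"
```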
0 notes
Text
Angular 9.1 Now Available
Angular 9.1 is a minor release of the framework and the CLI that is a drop-in replacement for 9.0, containing new features and bug fixes. Angular 9.1 added support for TypeScript 3.8, in addition to the existing support for 3.6 and 3.7. TypeScript 3.8 brings many great features to the language, including: type-only imports and exports; ECMAScript private fields; top level await. In Angular…
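Of the TypeScript 3.8 features listed above, ECMAScript private fields are the easiest to show in isolation; a small sketch:

```typescript
// ECMAScript private fields (TypeScript 3.8+): #count is truly private
// at runtime, not just hidden by the type checker like `private count`.
class Counter {
  #count = 0;
  increment(): number {
    return ++this.#count;
  }
}

const c = new Counter();
console.log(c.increment()); // 1
console.log(c.increment()); // 2
// Accessing c.#count outside the class is both a compile-time
// and a runtime error.
```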
View On WordPress
0 notes
Text
[Udemy] Angular Progressive Web Apps (PWA) MasterClass & FREE E-Book
Practical Guide to Angular 6 Progressive Web Applications (PWA)

What Will I Learn?
Learn in detail multiple features made available by the Angular Service Worker
Learn how to turn your Angular Application into a PWA
Make your Angular Application Downloadable and Installable
Make your Angular Application One-Click Installable with an App Manifest
Add an Application Shell for boosting Initial Startup Performance
Add Push Notifications using the Angular Service Worker
Improve Application Performance with Application Data Caching
Learn Service Worker Fundamentals
Learn the Service Worker Lifecycle in Detail
Debug Service Workers using the Chrome Dev Tools

Requirements
Just some previous knowledge of Angular and Typescript

Description
This Course In a Nutshell - Part 1 Angular Service Worker (includes the Typescript Jumpstart Ebook)
In the first part of the course we are going to dive straight into the Angular Service Worker and the built-in Angular CLI PWA support. We are going to take as the starting point a plain Angular Application, and we are going to start adding PWA features on top of it in order to turn it into a Progressive Web Application.
We will start by adding the Angular Service Worker and have the Application Download & Installation feature up and running. We will learn all about Application Version Management and we will learn about the multiple caching strategies available for caching not only the application bundles, but also the application data.
We will then add an Application Shell using the Angular CLI and Angular Universal, and make the application One-Click Installable using an App Manifest. Finally we will add Push Notifications to the application using the Angular Service Worker, for a complete mobile-like PWA experience.
This Course In a Nutshell - Part 2 - Service Workers Fundamentals
In this second part, you will learn about Progressive Web Applications in general (including Service Workers, the App Manifest, and the Cache Storage API), and you will use these APIs to implement several native-like features: One-Click Install, Offline support, application download and installation, application data caching and more.
We will also learn Service Worker Web Fundamentals from first principles using built-in browser APIs directly, and we will confirm our understanding of what is going on at every step of the way using the PWA built-in Chrome Dev Tools. The knowledge learned in this section has a long shelf life, as it's about browser fundamentals and how the new PWA Web works in general, so everything learned here is still applicable to development platforms other than Angular.
Why PWAs? Build the Future Of the Web Today
The future of the Web is clearly going in one direction: Progressive Web Applications! With the upcoming Safari support and the ability to install a PWA directly to a Windows desktop, PWAs are the new generation of web applications that will bridge the gap towards Native Apps: this will in the near future even include things like Bluetooth and Payments.
PWAs look and feel like native apps, but there is no App Store and all its issues involved: instead a web application can now be installed directly to the user desktop or home screen with one single click! The application will then install itself immediately in the background, so that it does not have to be downloaded again unless a new version is available. The application will feel blazing fast in a way that until now only native apps could: a PWA is offline, background sync and notification capable. PWAs are not mobile-specific: any application can benefit from starting up an order of magnitude faster!
The Angular CLI PWA support and the Angular Service Worker take PWA features to another level: we will be able to add PWA features reliably to our application in a transparent way via the Angular CLI, as part of the production build process. But we will still need to understand how PWA features work under the hood, in order to troubleshoot any issues, interact with the Angular Service Worker or configure the Angular CLI.
Course Overview
In part 1, we are going to take PWA features one at a time. These features were designed to be used together, so using them to implement a concrete use case is the best way to learn them. The main component of a PWA is a Service Worker, so that is our starting point. We are going to learn all about the Service Worker lifecycle, how multiple versions of the service worker are handled by the browser, and we will learn multiple Service Worker installation and activation GOTCHAs.
We are then going to build a service worker that gives an application the ability to show an offline page when the network is down, using the Cache API. Most of the PWA APIs are Promise-based, so to make the code more readable we are going to be using Async / Await in our examples.
We are then going to implement Application Download and Installation: we are going to see how we can use a Service Worker to completely download a version of the website into a user’s mobile or desktop, and how to handle the installation of multiple versions of the application. We are going to demo the performance benefits of caching the complete application locally by simulating a slow network, and learn how to clean previous versions of an application. We will also learn how to cache UI-specific data and derived View Model data on the client side using Indexed DB and a Service Worker, and we will also learn how to implement Background Sync.
The next section of the course will cover the App Manifest and how to configure the one-click installation experience that we are looking for. After that, we will inspect our application with the Lighthouse tool and analyze the generated report. The last part of this PWA fundamentals section will cover Web Push and Notifications. In this section, we are going to set up a node server with Web Push capabilities, and then we are going to link it to our service worker in order to implement mobile-like Notifications.
What Will you Learn In this Course?
You will learn how the PWA Browser features work in general, not only in the context of an Angular application. You will know how to analyze and troubleshoot PWAs using the Chrome PWA Dev Tools. You will also learn how to leverage the Angular Service Worker, and all the Angular CLI PWA features to build PWA-enabled web applications.
What Will You be able to do at the End Of This Course?
You will be able to design, develop and troubleshoot PWAs, plus you will know how to quickly add PWA features to an Angular Application leveraging the Angular CLI and the Angular Service Worker. Have a look at the course free lessons below, and please enjoy the course!
Who is the target audience?
Developers looking to get in-depth knowledge on the Angular Service Worker
Developers looking to learn Service Worker Fundamentals (part 2 of the course)
source https://ttorial.com/angular-progressive-web-apps-pwa-masterclass-free-e-book
source https://ttorialcom.tumblr.com/post/178801130413
0 notes
Text
Using Top Level Await In Typescript: Simplifying Asynchronous Javascript Code
Have you ever written TypeScript code where you needed to use the “await” keyword outside of an asynchronous function? TypeScript 3.8 introduced a new feature called “Top Level Await” which allows you to use the “await” keyword at the top level of your code. Isn’t that exciting?
In this article, we will explore what exactly Top Level Await is, how it works, and the benefits and limitations of using it in TypeScript.
What is Top Level Await in TypeScript?
Have you ever encountered situations where you wanted to use the ‘await’ keyword at the top level of your TypeScript code? Well, you’re in luck because TypeScript now supports top level await!
In traditional JavaScript and TypeScript, the ‘await’ keyword can only be used within an ‘async’ function. This means that you need to wrap your code in an ‘async’ function and call it to use the ‘await’ keyword. However, this can sometimes lead to messy and unoptimized code. With the introduction of top level await in TypeScript, you can now use the ‘await’ keyword directly at the top level of your code without the need for an ‘async’ function. This makes your code cleaner, more readable, and easier to maintain.
Top level await allows you to write asynchronous code in a synchronous manner. It provides a convenient way to handle Promises and async/await syntax without the need for additional boilerplate code. When using top level await, you can directly await a Promise or any other asynchronous operation in the global scope. This means that you don’t need to wrap your code in an ‘async’ function or use an immediately invoked async function expression.

This feature is especially useful when you want to perform initialization tasks or fetch data from an API before your application starts running. Instead of chaining multiple ‘await’ calls or nesting them within ‘async’ functions, you can now simply use top level await to await the promises and continue with the execution of your code.
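To see what this removes in practice, here is the same startup task written the old way, with an async IIFE wrapper (fetchConfig is a hypothetical stand-in for any async initialization call):

```typescript
// Hypothetical async startup task.
async function fetchConfig(): Promise<{ apiUrl: string }> {
  return { apiUrl: 'https://example.com/api' };
}

// Before top level await: wrap everything in an async IIFE.
(async () => {
  const config = await fetchConfig();
  console.log(config.apiUrl);
})();

// With top level await (inside an ES module) the wrapper disappears:
//   const config = await fetchConfig();
//   console.log(config.apiUrl);
```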
However, it’s important to note that top level await is currently an experimental feature in TypeScript and may not be supported by all JavaScript runtime environments or transpilers. It is recommended to check the compatibility of the runtime environment or transpiler you are using before using this feature in production code.
How Does Top Level Await Work?
Top level await changes this by allowing you to use await at the top level of your module, meaning you can use it outside of any function or block scope. This way, you can directly await a promise without having to wrap your code in an async function.
When you use top level await in TypeScript, the compiler effectively wraps your code in an immediately invoked async function. Evaluation of the module pauses at the awaited expression, and any following statements only execute once the promise is resolved or rejected.
For example, let’s say you have a module that needs to fetch some data from an API. With top level await, you can write code like this:

import { fetchData } from './api';

const data = await fetchData();
console.log(data);
In this example, the fetchData function returns a promise that resolves to some data. Instead of using a traditional then callback, we can directly await the promise at the top level of our module and assign the result to the data variable. The console log statement will only execute once the promise is successfully resolved.
Top level await also allows you to use try-catch blocks to handle promise rejections. If the awaited promise is rejected, the catch block will be executed just like in regular async/await code.
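The same try/catch shape works around any awaited call. This sketch wraps it in a function so it also runs outside an ES module context; with top level await, the try/catch could sit directly at the module's top level (mightFail is hypothetical):

```typescript
// Hypothetical async call that rejects.
async function mightFail(shouldFail: boolean): Promise<string> {
  if (shouldFail) {
    throw new Error('network down');
  }
  return 'ok';
}

// Centralized recovery for the rejected promise.
async function loadWithFallback(): Promise<string> {
  try {
    return await mightFail(true);
  } catch {
    return 'fallback';
  }
}

loadWithFallback().then((v) => console.log(v)); // "fallback"
```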
It’s important to note that top level await is only available in files that are compiled as ES modules — in TypeScript, that means setting the module compiler option to “es2022”, “esnext”, or “system” (together with a target of “es2017” or higher). If you’re using CommonJS modules, you won’t be able to directly use top level await.
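As a sketch, a minimal tsconfig.json that permits top level await might look like this (option values follow the TypeScript compiler's documented settings; adjust target and module to your project's needs):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "esnext",
    "strict": true
  }
}
```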
Furthermore, you need to ensure that your runtime environment supports top level await — for example, Node.js 14.8 or later (when running ES modules) and current evergreen browsers.
Benefits of Using Top Level Await in TypeScript
Top level await offers several benefits for asynchronous programming in TypeScript:
Simplified Syntax:
With top level await, you can directly use “await” at the outermost level of your code, eliminating the need for async function wrappers. This results in cleaner and more readable code.
Improved Error Handling:
Using top level await allows you to handle errors in a centralized manner. Instead of scattering error handling throughout multiple async functions, you can use a try/catch block at the top level to catch and handle errors from the awaited operations.
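A sketch of what centralized handling looks like — loadConfig and connect are hypothetical stand-ins, and the async wrapper here stands in for top level await so the snippet also runs outside an ES module:

```javascript
// One try/catch covering several awaited startup steps,
// instead of per-function error handlers.
const loadConfig = async () => ({ retries: 3 });                       // hypothetical
const connect = async (config) => `connected (retries=${config.retries})`; // hypothetical

(async () => {
  try {
    const config = await loadConfig();
    const status = await connect(config);
    console.log(status); // connected (retries=3)
  } catch (err) {
    // a failure from either step lands here
    console.error("startup failed:", err);
  }
})();
```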
Easier Debugging:
Top level await simplifies the debugging process by providing a straightforward way to pause the execution and inspect the state of the code. This can be especially useful when working with complex asynchronous operations.
Faster Development:
By eliminating the need for explicit async function wrappers, top level await reduces boilerplate code, resulting in faster development time. It allows you to directly use “await” wherever needed, reducing the cognitive load of managing async functions.
Limitations of Top Level Await in TypeScript
While using top level await in TypeScript can be beneficial, it is important to note that there are also some limitations to be aware of. These limitations may impact your decision to use top level await in certain scenarios.
1. Compatibility
One major limitation of top level await in TypeScript is that it is only supported in environments that have native support for ECMAScript modules and top level await. This means that if you are working with older versions of JavaScript or in environments that do not support these features, you will not be able to use top level await.
Additionally, not all browsers and JavaScript engines have implemented top level await yet, which can restrict where you can use this feature. It is important to check the compatibility of your target environments before deciding to use top level await in your TypeScript code.
2. Module Systems
Another limitation of top level await is its interaction with module systems. When using top level await, your code must be within an ECMAScript module, which may require additional configuration and setup, especially if you are working with existing codebases that use different module systems.
For example, if you are using CommonJS modules in Node.js, you would need to convert your code to ECMAScript modules in order to use top level await. This can be a time-consuming process, and may not be feasible in all projects.
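For Node.js projects, converting to ECMAScript modules typically starts with marking the package as ESM in package.json — a sketch, where "my-app" and the file path are placeholders:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/index.js"
}
```

With "type": "module", Node treats .js files in the package as ES modules; alternatively, individual files can opt in with the .mjs extension.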
3. Debugging and Tooling
Debugging and tooling support for top level await in TypeScript can also be limited. Some IDEs and development tools may not fully understand and handle top level await, which can lead to issues with code completion, type checking, and other features.
Similarly, when debugging code that uses top level await, the debugger may not be able to step through the asynchronous code in the expected manner. This can make debugging more challenging and time-consuming.
4. Code Complexity
Using top level await can also introduce additional complexity to your code, especially if you are not familiar with asynchronous programming concepts. It may require a deeper understanding of promises, the async/await syntax, and potential pitfalls such as handling errors and cancellations.
This added complexity can make your code harder to read, understand, and maintain, especially for less experienced developers. It is important to consider the skill level and experience of the developers who will be working with the codebase.
5. Performance Implications
Lastly, it is worth noting that using top level await can have performance implications, especially in certain scenarios. Top level await blocks the execution of the module until the awaited promise is resolved. It can potentially delay the loading and execution of other modules or scripts that depend on it.
This can lead to slower application startup times and decreased performance, especially in large codebases with complex dependencies. It is important to carefully consider the performance impact of using top level await in your specific use cases.
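A small sketch of the effect — delay is a hypothetical stand-in for a slow awaited startup task, wrapped in an async function here so the snippet runs anywhere:

```javascript
// Everything after an awaited slow operation waits for it; with real
// top level await, every importing module waits too.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  const start = Date.now();
  await delay(50); // module evaluation (and every importer) pauses here
  console.log(`module ready after ~${Date.now() - start}ms`);
})();
```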
Conclusion
While top level await can be a powerful feature in TypeScript, it is important to be aware of its limitations. By understanding these limitations, you can make an informed decision and ensure that your codebase remains maintainable and performant.
[Udemy] Angular Progressive Web Apps (PWA) MasterClass & FREE E-Book
Practical Guide to Angular 6 Progressive Web Applications (PWA)

What Will I Learn?
Learn in detail multiple features made available by the Angular Service Worker
Learn how to turn your Angular Application into a PWA
Make your Angular Application Downloadable and Installable
Make your Angular Application One-Click Installable with an App Manifest
Add an Application Shell for boosting Initial Startup Performance
Add Push Notifications using the Angular Service Worker
Improve Application Performance with Application Data Caching
Learn Service Worker Fundamentals
Learn the Service Worker Lifecycle in Detail
Debug Service Workers using the Chrome Dev Tools

Requirements
Just some previous knowledge of Angular and Typescript

Description
This Course In a Nutshell - Part 1 Angular Service Worker (includes the Typescript Jumpstart Ebook)

In the first part of the course we are going to dive straight into the Angular Service Worker and the built-in Angular CLI PWA support. We are going to take as the starting point a plain Angular Application, and we are going to start adding PWA features on top of it in order to turn it into a Progressive Web Application. We will start by adding the Angular Service Worker and have the Application Download & Installation feature up and running. We will learn all about Application Version Management and we will learn about the multiple caching strategies available for caching not only the application bundles, but also the application data. We will then add an Application Shell using the Angular CLI and Angular Universal, and make the application One-Click Installable using an App Manifest. Finally we will add Push Notifications to the application using the Angular Service Worker, for a complete mobile-like PWA experience.
This Course In a Nutshell - Part 2 - Service Workers Fundamentals

In this second part, you will learn about Progressive Web Applications in general (including Service Workers, App Manifest, and the Cache Storage API), and you will use these APIs to implement several native-like features: One-Click Install, Offline support, application download and installation, application data caching and more. We will also learn Service Worker Web Fundamentals from first principles using built-in browser APIs directly, and we will confirm our understanding of what is going on in every step of the way using the PWA built-in Chrome Dev Tools. The knowledge learned in this section has a long shelf life, as it's about browser fundamentals and how the new PWA Web works in general, so everything learned here is still applicable to any other development platform other than Angular.

Why PWAs? Build the Future Of the Web Today

The future of the Web is clearly going in one direction: Progressive Web Applications! With the upcoming Safari support and the ability to install a PWA directly to a Windows desktop, PWAs are the new generation of web applications that will bridge the gap towards Native Apps: this will even include in the near future things like Bluetooth and Payments. PWAs look and feel like native apps, but there is no App Store and all its issues involved: instead a web application can now be installed directly to the user desktop or home screen with one single click! The application will then install itself immediately in the background, so that it does not have to be downloaded again unless a new version is available. The application will feel blazing fast in a way that until now only native apps could: a PWA is offline, background sync and notification capable. PWAs are not mobile-specific: any application can benefit from starting up an order of magnitude faster!
The Angular CLI PWA support and the Angular Service Worker take PWA features to another level: we will be able to add PWA features reliably to our application in a transparent way via the Angular CLI, as part of the production build process. But we will still need to understand how PWA features work under the hood, in order to troubleshoot any issues, interact with the Angular Service Worker or configure the Angular CLI.

Course Overview

In part 1, we are going to take PWA features one at a time. These features were designed to be used together, so using them to implement a concrete use case is the best way to learn them. The main component of a PWA is a Service Worker, so that is our starting point. We are going to learn all about the Service Worker lifecycle, how multiple versions of the service worker are handled by the browser, and we will learn multiple Service Worker installation and activation GOTCHAs. We are then going to build a service worker that gives an application the ability to show an offline page when the network is down, using the Cache API. Most of the PWA APIs are Promise-based, so to make the code more readable we are going to be using Async / Await in our examples. We are then going to implement Application Download and Installation: we are going to see how we can use a Service Worker to completely download a version of the website onto a user's mobile or desktop, and how to handle the installation of multiple versions of the application. We are going to demo the performance benefits of caching the complete application locally by simulating a slow network, and learn how to clean up previous versions of an application. We will also learn how to cache UI-specific data and derived View Model data on the client side using IndexedDB and a Service Worker, and we will also learn how to implement Background Sync.
The next section of the course will cover all about the App Manifest and how to configure that one-click installation experience that we are looking for. After that, we will inspect our application with the Lighthouse tool and analyze the generated report. The last part of this PWA fundamentals section will cover Web Push and Notifications. In this section, we are going to set up a node server with Web Push capabilities, and then we are going to link it to our service worker in order to implement mobile-like Notifications.

What Will You Learn In this Course?

You will learn how the PWA Browser features work in general, not only in the context of an Angular application. You will know how to analyze and troubleshoot PWAs using the Chrome PWA Dev Tools. You will also learn how to leverage the Angular Service Worker, and all the Angular CLI PWA features to build PWA-enabled web applications.

What Will You Be Able to Do at the End Of This Course?

You will be able to design, develop and troubleshoot PWAs, plus you will know how to quickly add PWA features to an Angular Application leveraging the Angular CLI and the Angular Service Worker. Have a look at the course's free lessons below, and please enjoy the course!

Who is the target audience?
Developers looking to get in-depth knowledge on the Angular Service Worker
Developers looking to learn Service Worker Fundamentals (part 2 of the course)

source https://ttorial.com/angular-progressive-web-apps-pwa-masterclass-free-e-book
The Node.js Community was amazing in 2017! Here's the proof:
Let me just start by stating this:
2017 was a great year for the Node.js Community!
Our favorite platform has finally matured to be widely adopted in the enterprise, also, a lot of great features have been merged into the core, like async/await & http2, just to mention a few.
The world runs on node ✨🐢🚀✨ #NodeInteractive http://pic.twitter.com/5PwDC2w1mE
— Franziska Hinkelmann (@fhinkel) October 4, 2017
Not only the developers who contribute to Node make this platform so great, but those who create learning materials around it were also exceptional.
In our final article of 2017, we collected a bunch of longreads which were created by the Node community. We used data from Reddit, Hacker News, Twitter & Medium to search for the most read, most shared & liked stuff out there. We intentionally omitted articles written by RisingStack (except 1, couldn't resist) - if you're interested in those, please visit our top 15 of 2017 post!
I'm sure we left a bunch of amazing articles out - so if you'd like to get attention to an exceptional one you read, please let us know in the comments section.
Also, keep in mind that this post is not a classic "best of" list, just a collection that shows how amazing the Node community was in 2017! I figured that a big shoutout to the developers who constantly advocate & teach Node would be a great way to say goodbye to this year.
So, here come the proof I promised in the title:
1. We have insanely useful community curated lists:
If you'd like to gain useful knowledge quickly, you can count on the Node community. These lists were created in 2017, and contain so many best practices & shortcuts that you can definitely level up your skills by going through them.
The largest Node.JS best practices list is curated from the top ranked articles and always updated.
Awesome Node.js is an extensive list of delightful Node packages & resources.
The Modern JS Cheatsheet contains the knowledge needed to build modern projects.
These lists have over 300 contributors & gained around 50K stars on GitHub!
Also, we recommend to check out these articles written by Yoni Goldberg & Azat Mardan which will help you to become a better developer:
Node.JS production best practices
10 Node.js Best Practices: Enlightenment from the Node Gurus
2. There are actually insightful Case Studies available!
One of the best articles on using Node.js was written by Scott Nonnenberg in April. The author summarized his Node experiences from the past 5 years, discussing basic concepts, architectural issues, testing, the Node ecosystem & the reason why you shouldn't use New Relic.
"I’ve already shared a few stories, but this time I wanted to focus on the ones I learned the hard way. Bugs, challenges, surprises, and the lessons you can apply to your own projects!" - Scott
..
Also, did you know that Node.js Helps NASA to Keep Astronauts Safe? It's hard to come up with something cooler than that.
NASA choose Node.js for the following reasons:
The relative ease of developing data transfer applications with JavaScript, and the familiarity across the organization with the programming language, which keeps development time and costs low.
Node.js’ asynchronous event loop for I/O operations makes it the perfect solution for a cloud-based database system that sees queries from dozens of users who need data immediately.
The Node.js package manager, npm, pairs incredibly well with Docker to create a microservices architecture that allows each API, function and application to operate smoothly and independently.
To learn more, read the full case study!
Also, shoutout to the Node Foundation who started to assemble and distribute these pretty interesting use-case whitepapers on a regular basis!
3. Node.js Authentication & Security was well covered in 2017:
When it comes to building Node.js apps (or any app..) security is crucial to get right. This is the reason why "Your Node.js authentication tutorial is (probably) wrong" written by micaksica got so much attention on HackerNoon.
tl;dr: The author went on a search of Node.js/Express.js authentication tutorials. All of them were incomplete or made a security mistake in some way that can potentially hurt new users. His post explores some common authentication pitfalls, how to avoid them, and what to do to help yourself when your tutorials don’t help you anymore.
If you plan on reading only one security related article (..but why would you do that?!), this is definitely one of the best ones!
..
Also, we recommend to check out the Damn Vulnerable NodeJS Application github page, which aims to demonstrate the OWASP Top 10 Vulnerabilities and guides you on fixing and avoiding these vulnerabilities.
..
Other great articles which were receiving a lot of praise were Securing Node.js RESTful APIs with JSON Web Tokens by Adnan Rahic, and Two-Factor Authentication with Node.js from David Walsh.
4. API development with Node.js has been made even more easy:
One of the main strengths of Node.js is that you can build REST APIs with it in a very efficient way! There are a lot of articles covering that topic, but these were definitely the most popular ones:
RESTful API design with Node.js walks beginners through the whole process in a very thorough, easy to understand way.
Build a Node.js API in Under 30 Minutes achieves the same result, but it uses ES6 as well.
10 Best Practices for Writing Node.js REST APIs (written by us) goes a little further and includes topics like naming your routes, authentication, black-box testing & using proper cache headers for these resources.
5. We're constantly looking under the hood of Node.js.
Luckily, the Node/JS community delivers when you want to go deeper, in fact there were so many interesting deep-dives that it was really hard to pick out the best ones, but one can try! These articles are really insightful:
Understanding Node.js Event-Driven Architecture
What you should know to really understand the Node.js Event Loop
Node.js Streams: Everything you need to know
How JavaScript works: inside the V8 engine + 5 tips on how to write optimized code
ES8 was Released and here are its Main New Features
6 Reasons Why JavaScript’s Async/Await Blows Promises Away (Tutorial)
What were the best articles that peeked under the hood of JS/Node in your opinion? Share them in the comments!
6. Awesome new Tools were made in 2017:
Two of the most hyped tools of the year were Prettier & Fastify!
In case you don't know, prettier is a JavaScript formatter that works by compiling your code to an AST, and then pretty-printing the AST.
The result is good-looking code that is completely consistent no matter who wrote it. This solves the problem of programmers spending a lot of time manually moving around code in their editor and arguing about styles.
Fastify was introduced by Matteo Collina, Node.js core technical commitee member during Node Interactive 2017 in Vancouver.
Fastify is a new web framework inspired by Hapi, Restify and Express. Fastify is built as a general-purpose web framework, but it shines when building extremely fast HTTP APIs that use JSON as the data format.
..just to mention a few.
7. There are Amazing Crash Courses Available for Free
If you'd like to start learning Node, you can count on the community to deliver free, high-quality resources that can speed up your journey!
One particular author - Adnan Rahić - received a huge amount of praise for releasing crash courses for different topics on Medium. Here they are:
A crash course on testing with Node.js
A crash course on Serverless with Node.js
We hope Adnan will continue this series and create crash courses on other areas of Node as well!
8. Miscellaneous Topics are Covered as Well
What do we mean under miscellaneous topics? Well, those are topics that go beyond the "traditional" use cases of Node.js, where the authors describe something interesting & fun they built with it.
For example there were exceptional articles released that used the OpenCV library for either face recognition or just regular object recognition.
Chatbots were a hype topic too, and there was no shortage of blogposts describing how to build them using different languages.
Building A Simple AI Chatbot With Web Speech API And Node.js was one of the most praised articles. In this post Tomomi Imura walked us through building a chat-bot which can be controlled by voice. Yep, it's pretty cool.
We also recommend to check out Developing A Chatbot Using Microsoft's Bot Framework, LUIS And Node.js too, which is the first part of a series dealing with the topic.
What other fun projects did you see? Add them to the comments section!
9. The community can attend great conferences, like Node Interactive!
Node Interactive Vancouver was a major event which provided great opportunities for its attendees to meet with fellow developers & learn a lot about their favorite subject.
Fortunately every presentation is available on YouTube, so you can get up-to-date even if you couldn't participate this year (just like us).
The 10 most watched right now are:
Node.js Performance and Highly Scalable Micro-Services by Chris Bailey
New DevTools Features for JavaScript by Yang Guo
The V8 Engine and Node.js by Franzi Hinkelmann
High Performance JS in V8 by Peter Marshall
The Node.js Event Loop: Not So Single Threaded by Bryan Hughes
Welcome and Node.js Update by Mark Hinkle
Take Your HTTP Server to Ludicrous Speed by Matteo Collina
WebAssembly and the Future of the Web by Athan Reines
High Performance Apps with JavaScript and Rust by Amir Yasin
TypeScript - A Love Tale with JavaScript by Bowden Kelly
Of course this list is just the tip of the iceberg, since there are 54 videos uploaded on the Node Foundation's YouTube channel, and most of them are really insightful and fun to watch.
10. Node.js is finally more sought after than Java!
Although Ryan Dahl recently stated in an interview that..
for a certain class of application, which is like, if you're building a server, I can't imagine using anything other than Go.
.. we have no reason to doubt the success of Node.js!
Mikeal Rogers, one of the core organizers of NodeConf, community manager & core contributor at the Node Foundation stated that Node.js will overtake Java within a year in a NewStack interview this summer!
We are now at about 8 million estimated users and still growing at about 100 percent a year. We haven’t passed Java in terms of users yet, but by this time next year at the current growth, we will surpass. - Mikeal.
Mikeal is not alone with his opinion. There is hard data available to prove that Node is becoming more sought after than Java.
According to the data gathered by builtinnode, the demand for Node.js developers has already surpassed the demand for Java devs by the summer of 2017 in the Who is Hiring section of Hacker News!
Since Node was already adopted and is being advocated by the greatest tech companies on Earth, there's no doubt that it will remain a leading technology for many years ahead!
We hope that the Node community will continue to thrive in 2018 as well, and produce a plethora of excellent tools & tutorials.
At RisingStack, we'll keep on advocating & educating Node in 2018 as well, for sure! If you're interested in the best content we produced in 2017, take a look!
The Node.js Community was amazing in 2017! Here's the proof: published first on http://ift.tt/2w7iA1y
Have you ever wanted to write your own compiler? … yes? … of course you have! I’ve always wanted to have a go at writing a compiler, and with the recent release of WebAssembly, I had the perfect excuse to have a go. My original plan was to invent my own programming language, create a compiler that targets WebAssembly, and share my experiences at FullStackNYC. The first part went to plan, I spent many-an-evening building, tinkering and refining my compiler. Unfortunately the last part of my plan didn’t go quite so well. Long delays, and an eventual cancellation, meant that I wasn’t going to make it to New York after all. 😔😢😭 So, rather than waste all that work - I thought I’d write up my talk as a blog post instead - hence the ‘19 min’ reading time for this article. So sit back, make yourself comfortable, and we’ll begin …

What is WebAssembly? (and why does it exist?)

If you haven’t heard of WebAssembly before, and want a really detailed introduction, I’d thoroughly recommend Lin Clark’s Cartoon Guide. You’ll learn the ‘what’ of WebAssembly throughout this blog post, but I do want to briefly touch on the ‘why’. From my perspective, this diagram sums it up quite succinctly: The top diagram shows a simplified timeline for the execution of some JavaScript code within the browser. From left-to-right, the code (typically delivered as a minified mess!) is parsed into an AST, initially executed in an interpreter, then progressively optimised / re-optimised until it eventually runs really quite quickly. These days JavaScript is fast - it just takes a while to get up to speed. The bottom diagram is the WebAssembly equivalent. Code written in a wide variety of languages (Rust, C, C#, etc …) is compiled to WebAssembly that is delivered in a binary format. This is very easily decoded, compiled and executed - giving fast and predictable performance.

So why write your own compiler?

WebAssembly has been causing quite a stir over the last year.
So much so, that it was voted the fifth ‘most loved’ language in Stack Overflow’s developer insights survey. An interesting result, considering that for most people WebAssembly is a compilation target, rather than a language they will use directly. This was part of my motivation for proposing the FullStackNYC talk in the first place. The technical aspects of WebAssembly are really fascinating (and remind me of 8-bit computers from a few decades back), yet most people will never get the chance to dabble with WebAssembly itself - it will just be a black box that they compile to. Writing a compiler is a really good opportunity to delve into the details of WebAssembly to find out what it is and how it works. And it’s fun too! One final point, it was never my aim to create a fully-featured programming language, or one that is actually any good. My goal was to create ‘enough’ of a language to allow me to write a program that renders a mandelbrot set. This language is compiled to WebAssembly using my compiler, which is written in TypeScript and runs in the browser. Here it is in its full glory: I ended up calling the language chasm and you can play with it online if you like. Enough rambling - time for some code!

A minimal wasm module

Before tackling the compiler, we’ll start with something simpler, creating a minimal WebAssembly module. Here is an emitter (the term used for the part of a compiler that outputs instructions for the target system), that creates the smallest valid WebAssembly module:

const magicModuleHeader = [0x00, 0x61, 0x73, 0x6d];
const moduleVersion = [0x01, 0x00, 0x00, 0x00];

export const emitter: Emitter = () =>
  Uint8Array.from([...magicModuleHeader, ...moduleVersion]);

It is comprised of two parts, the ‘magic’ header, which is the ASCII string \0asm, and a version number. These eight bytes form a valid WebAssembly (or wasm) module. More typically these would be delivered to the browser as a .wasm file.
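Those same eight bytes can be checked directly in plain JavaScript — WebAssembly.validate confirms that they already form a well-formed (if empty) module:

```javascript
const magicModuleHeader = [0x00, 0x61, 0x73, 0x6d]; // "\0asm"
const moduleVersion = [0x01, 0x00, 0x00, 0x00];     // version 1

const wasm = Uint8Array.from([...magicModuleHeader, ...moduleVersion]);
console.log(WebAssembly.validate(wasm)); // true
```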
In order to execute the WebAssembly module it needs to be instantiated as follows:

const wasm = emitter();
const instance = await WebAssembly.instantiate(wasm);

If you run the above you’ll find that instance doesn’t actually do anything because our wasm module doesn’t contain any instructions! If you’re interested in trying out this code for yourself, it is all on GitHub - with a commit for each step.

An add function

Let’s make the wasm module do something more useful, by implementing a function that adds a couple of floating point numbers together. WebAssembly is a binary format, which isn’t terribly readable (to humans at least), which is why you’ll more typically see it written in WebAssembly Text Format (WAT). Here’s a module, presented in WAT format, that defines an exported function named $add that takes two floating point parameters, adds them together and returns them:

(module
  (func $add (param f32) (param f32) (result f32)
    get_local 0
    get_local 1
    f32.add)
  (export "add" (func 0))
)

If you just want to experiment with WAT you can use the wat2wasm tool from the WebAssembly Binary Toolkit to compile WAT files into wasm modules. The above code reveals some interesting details around WebAssembly - WebAssembly is a low-level language, with a small (approx 60) instruction set, where many of the instructions map quite closely to CPU instructions. This makes it easy to compile wasm modules to CPU-specific machine code. It has no built in I/O. There are no instructions for writing to the terminal, screen or network. In order for wasm modules to interact with the outside world they need to do so via their host environment, which in the case of the browser is JavaScript. WebAssembly is a stack machine, in the above example get_local 0 gets the local variable (in this case the function param) at the zeroth index and pushes it onto the stack, as does the subsequent instruction.
The f32.add instruction pops two values from the stack, adds them together, then pushes the result back onto the stack. WebAssembly has just four numeric types, two integer, two float. More on this later … Let’s update the emitter to output this ‘hard coded’ WebAssembly module. WebAssembly modules are composed of a pre-defined set of optional sections, each prefixed with a numeric identifier. These include a type section, which encodes type signatures, and a function section, which indicates the type of each function. I’ll not cover how these are constructed here - they are quite dull. If you’re interested, look at the next commit in the project. The interesting part is the code section. Here is how the above add function is created in binary:

const code = [
  Opcodes.get_local /** 0x20 */,
  ...unsignedLEB128(0),
  Opcodes.get_local /** 0x20 */,
  ...unsignedLEB128(1),
  Opcodes.f32_add /** 0x92 */
];

const functionBody = encodeVector([
  ...encodeVector([]) /** locals */,
  ...code,
  Opcodes.end /** 0x0b */
]);

const codeSection = createSection(
  Section.code /** 0x0a */,
  encodeVector([functionBody])
);

I’ve defined an Opcodes enum (I’m using TypeScript), which contains all of the wasm instructions. The unsignedLEB128 function is a standard variable length encoding which is used for encoding instruction parameters. The instructions for a function are combined with the function’s local variables (of which there are none in this case), and an end opcode that signals the end of a function. Finally all the functions are encoded into a section. The encodeVector function simply prefixes a collection of byte arrays with the total length. And there you have it, the complete module, which is about 30 bytes in total.
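The unsignedLEB128 helper mentioned above is small enough to sketch in full — this is the standard encoding as defined in the wasm (and DWARF) specs, not necessarily the author's exact implementation:

```javascript
// Unsigned LEB128: emit 7 bits per byte, least significant first;
// the high bit of each byte flags whether more bytes follow.
const unsignedLEB128 = (n) => {
  const buffer = [];
  do {
    let byte = n & 0x7f;
    n = n >>> 7;
    if (n !== 0) {
      byte |= 0x80; // more bytes to come
    }
    buffer.push(byte);
  } while (n !== 0);
  return buffer;
};

console.log(unsignedLEB128(64));     // a single byte: [64]
console.log(unsignedLEB128(624485)); // the spec's classic example: [229, 142, 38]
```

Values below 128 encode as a single byte, which is why the small section sizes in the module above are all one byte long.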
The JavaScript hosting code can now be updated to invoke this exported function:

const { instance } = await WebAssembly.instantiate(wasm);
console.log(instance.exports.add(5, 6));

Interestingly, if you inspect the exported add function with the Chrome Dev Tools it identifies it as a ‘native function’. You can see the complete code for this step (with unit tests - go me!) on GitHub.

Building a compiler

Now that you’ve seen how to dynamically create wasm modules, it’s time to turn our attention to the task of creating a compiler. We’ll start with a bit of terminology. Here’s some chasm code annotated to show the key components of a language: Rather than give a ‘textbook definition’ of each, you’ll become familiar with them as the compiler evolves. The compiler itself will be formed of three parts: the tokenizer, which breaks up the input program (a string) into discrete tokens; the parser, which takes these tokens and converts them into an Abstract Syntax Tree (AST); and finally the emitter, which converts the AST into a wasm binary module. This is a pretty standard compiler architecture: Rather than dive into a complete implementation, we’ll tackle a small subset of the problem. The goal is to create a compiler for a language that just supports print statements which print simple numeric literals …

The Tokenizer

The tokenizer works by advancing through the input string, one character at a time, matching patterns that represent specific token types.
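For completeness, the full binary for an add module like this can be assembled by hand and run without any compiler at all. The byte values below are a sketch reconstructed from the wasm binary format specification (not the article's emitter output), using the synchronous WebAssembly.Module/Instance constructors:

```javascript
const wasm = Uint8Array.from([
  0x00, 0x61, 0x73, 0x6d,                               // magic "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7d, 0x7d, 0x01, 0x7d, // type section: (f32, f32) -> f32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x92, 0x0b                    // get_local 0, get_local 1, f32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(wasm));
console.log(instance.exports.add(5, 6)); // 11
```

All told that's 41 bytes for a fully working module, which gives a feel for how compact the binary format is.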
The following code creates three matchers (number, keyword, and whitespace), using simple regular expressions:

const keywords = ["print"];

// returns a token if the given regex matches at the current index
const regexMatcher = (regex: string, type: TokenType): Matcher => (
  input: string,
  index: number
) => {
  const match = input.substring(index).match(regex);
  return (
    match && {
      type,
      value: match[0]
    }
  );
};

const matchers = [
  regexMatcher("^[.0-9]+", "number"),
  regexMatcher(`^(${keywords.join("|")})`, "keyword"),
  regexMatcher("^\\s+", "whitespace")
];

(Note, these regular expressions are not terribly robust!) The Matcher interface defines a function that, given an input string and an index, returns a token if a match occurs. The main body of the tokenizer iterates over the characters of the string, finding the first match and adding the resulting token to the output array:

export const tokenize: Tokenizer = input => {
  const tokens: Token[] = [];
  let index = 0;
  while (index < input.length) {
    const matches = matchers.map(m => m(input, index)).filter(f => f);
    const match = matches[0];
    if (match.type !== "whitespace") {
      tokens.push(match);
    }
    index += match.value.length;
  }
  return tokens;
};

Here is the tokenised output of the program "print 23.1":

[
  { "type": "keyword", "value": "print", "index": 1 },
  { "type": "number", "value": "23.1", "index": 7 }
]

As you can see from the above output, the tokeniser removes whitespace, as it has no meaning (for this specific language); it also ensures that everything in the input string is a valid token. However, it doesn’t make any guarantees about the input being well-formed. For example, the tokeniser will happily handle "print print", which is clearly incorrect. The array of tokens is next fed into the parser.

The Parser

The goal of the parser is the creation of an Abstract Syntax Tree (AST), a tree structure that encodes the relationship between these tokens, resulting in a form that could potentially be sent to an interpreter for direct execution.
The parser iterates through the supplied tokens, consuming them via an eatToken function.

export const parse: Parser = tokens => {
  const iterator = tokens[Symbol.iterator]();
  let currentToken = iterator.next().value;
  const eatToken = () => (currentToken = iterator.next().value);

  [...]

  const nodes: StatementNode[] = [];
  while (index < tokens.length) {
    nodes.push(parseStatement());
  }
  return nodes;
};

(I’ve no idea where the concept of eating tokens comes from - it appears to be standard parser terminology. They are clearly hungry beasts!)

The goal of the above parser is to turn the token array into an array of statements, which are the core building blocks of this language. It expects the given tokens to conform to this pattern, and will throw an error (not shown above) if they do not. The parseStatement function expects each statement to start with a keyword, switching on its value:

const parseStatement = () => {
  if (currentToken.type === "keyword") {
    switch (currentToken.value) {
      case "print":
        eatToken();
        return {
          type: "printStatement",
          expression: parseExpression()
        };
    }
  }
};

Currently the only supported keyword is print; in this case it returns an AST node of type printStatement, parsing the associated expression. And here is the expression parser:

const parseExpression = () => {
  let node: ExpressionNode;
  switch (currentToken.type) {
    case "number":
      node = {
        type: "numberLiteral",
        value: Number(currentToken.value)
      };
      eatToken();
      return node;
  }
};

In its present form the language only accepts expressions which are composed of a single number - i.e. a numeric literal. Therefore the above expression parser expects the next token to be a number, and when this matches, it returns a node of type numberLiteral. Continuing the simple example of the program "print 23.1", the parser outputs the following AST:

[
  {
    "type": "printStatement",
    "expression": { "type": "numberLiteral", "value": 23.1 }
  }
]

As you can see, the AST for this language is an array of statement nodes.
Parsing guarantees that the input program is syntactically correct, i.e. it is properly constructed, but it doesn’t of course guarantee that it will execute successfully; runtime errors might still be present (although for this simple language they are not possible!). We’re onto the final step now …

The Emitter

Currently the emitter outputs a hard-coded add function. It now needs to take this AST and emit the appropriate instructions, as follows:

const codeFromAst = ast => {
  const code = [];

  const emitExpression = node => {
    switch (node.type) {
      case "numberLiteral":
        code.push(Opcodes.f32_const);
        code.push(...ieee754(node.value));
        break;
    }
  };

  ast.forEach(statement => {
    switch (statement.type) {
      case "printStatement":
        emitExpression(statement.expression);
        code.push(Opcodes.call);
        code.push(...unsignedLEB128(0));
        break;
    }
  });

  return code;
};

The emitter iterates over the statements that form the ‘root’ of the AST, matching our only statement type - print. Notice that the first thing it does is emit the instructions for the statement’s expression; recall that WebAssembly is a stack machine, hence the expression instructions must be processed first, leaving the result on the stack. The print function is implemented via a call operation, which invokes the function at index zero. Previously we’ve seen how wasm modules can export functions (as per the add example above); they can also import functions, which are supplied when you instantiate the module. Here we provide an env.print function that logs to the console:

const instance = await WebAssembly.instantiate(wasm, {
  env: {
    print: console.log
  }
});

This function is addressable by index, i.e. call 0. You can see the complete code for the compiler at this point on GitHub - you can also have a play with this example via the online chasm compiler playground. Also, for completeness, this is how the program progresses through the various compiler stages:

So far we’ve put quite a lot of structure in place, but not really felt the benefit.
A separate tokenizer, parser and emitter is overkill for a language that only prints simple numerics. However, as the language complexity grows, this structure really starts to pay dividends.

Implementing expressions

Next up, we’ll look at implementing binary expressions, allowing the language to perform simple mathematics, for example print ((42 + 10) / 2). For the tokeniser the changes are quite trivial, involving adding a couple of additional regex matchers for parentheses and operators. I’ll not reproduce them here - instead, just show the resultant output:

[
  { "type": "keyword", "value": "print" },
  { "type": "parens", "value": "(" },
  { "type": "parens", "value": "(" },
  { "type": "number", "value": "42" },
  { "type": "operator", "value": "+" },
  { "type": "number", "value": "10" },
  { "type": "parens", "value": ")" },
  { "type": "operator", "value": "/" },
  { "type": "number", "value": "2" },
  { "type": "parens", "value": ")" }
]

Next up, we’ll look at the changes to the parser, where the expression parser can encounter either number or parens tokens:

const parseExpression = () => {
  let node: ExpressionNode;
  switch (currentToken.type) {
    case "number":
      [...]
    case "parens":
      eatToken();
      const left = parseExpression();
      const operator = currentToken.value;
      eatToken();
      const right = parseExpression();
      eatToken();
      return {
        type: "binaryExpression",
        left,
        right,
        operator
      };
  }
};

Notice that parsing of parens expressions is recursive, with the nodes for the left and right invoking the parseExpression function once again. The AST for the program print ((42 + 10) / 2) is given below:

[
  {
    type: "printStatement",
    expression: {
      type: "binaryExpression",
      left: {
        type: "binaryExpression",
        left: { type: "numberLiteral", value: 42 },
        right: { type: "numberLiteral", value: 10 },
        operator: "+"
      },
      right: { type: "numberLiteral", value: 2 },
      operator: "/"
    }
  }
];

The tree structure is becoming more obvious in this example.
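Before looking at the emitter changes, it helps to see how a tree like this yields a flat instruction sequence. A depth-first post-order walk visits operands before their operator - a minimal sketch (the helper names here are illustrative, not the project’s actual code):

```javascript
// Post-order: visit left subtree, then right subtree, then the node itself.
const traverse = (node, visitor) => {
  if (node.left) traverse(node.left, visitor);
  if (node.right) traverse(node.right, visitor);
  visitor(node);
};

// The AST for ((42 + 10) / 2), shaped like the parser's output above
const ast = {
  type: "binaryExpression",
  operator: "/",
  left: {
    type: "binaryExpression",
    operator: "+",
    left: { type: "numberLiteral", value: 42 },
    right: { type: "numberLiteral", value: 10 }
  },
  right: { type: "numberLiteral", value: 2 }
};

const order = [];
traverse(ast, (n) => order.push(n.operator ?? n.value));
console.log(order); // [42, 10, "+", 2, "/"] - operands before operators
```

This is exactly the order a stack machine needs: push 42, push 10, add, push 2, divide.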
Finally, the emitter needs to be updated in order to handle the binaryExpression node type, as follows:

const emitExpression = node =>
  traverse(node, node => {
    switch (node.type) {
      case "numberLiteral":
        code.push(Opcodes.f32_const);
        code.push(...ieee754(node.value));
        break;
      case "binaryExpression":
        code.push(binaryOpcode[node.operator]);
        break;
    }
  });

The traverse function in the above code traverses tree structures, invoking the given visitor for each node. While linear structures only have one logical way to traverse them (i.e. in order), trees can be traversed in a number of different ways. The traversal method used by the emitter is a depth-first post-order traversal; in other words, as it encounters each node it visits left, right, then root. This order ensures that the wasm instructions are output in the correct order for the stack machine: operands, then operator. And that’s it - all the changes that are required to support expressions. Give it a go online. The compiler architecture is starting to prove its value!

Variables

Next up, we’ll add variables, allowing for more interesting chasm programs … Variables are declared using the var keyword, and can be used in expressions as identifiers. We’ll not look at the changes to the tokeniser - it’s just yet more regex!
The main loop of the parser, which reads successive statements from the token array, determines the statement type based on the keyword it encounters:

const parseVariableDeclarationStatement = () => {
  eatToken(); // var
  const name = currentToken.value;
  eatToken();
  eatToken(); // =
  return {
    type: "variableDeclaration",
    name,
    initializer: parseExpression()
  };
};

const parseStatement: ParserStep<StatementNode> = () => {
  if (currentToken.type === "keyword") {
    switch (currentToken.value) {
      case "print":
        return parsePrintStatement();
      case "var":
        return parseVariableDeclarationStatement();
    }
  }
};

Variable declaration parsing is quite straightforward - notice that the parseVariableDeclarationStatement function also makes use of the expression parser, which ensures that variables can be declared and assigned an initial value from an expression, e.g. var f = (1 + 4). Next up, the emitter. WebAssembly functions can have local variables; these are declared at the beginning of the function definition, and are accessed via the get_local and set_local functions that also retrieve function parameters. The variables in our AST are referenced via their identifier name, whereas wasm identifies locals by their index. The emitter needs to maintain this information in a symbol table, which is a simple map from the symbol name to index:

const symbols = new Map<string, number>();

const localIndexForSymbol = (name: string) => {
  if (!symbols.has(name)) {
    symbols.set(name, symbols.size);
  }
  return symbols.get(name);
};

Within the node traversal, when a variable declaration is encountered, the expression is emitted, then set_local is used to assign the value to the respective local variable.
case "variableDeclaration":
  emitExpression(statement.initializer);
  code.push(Opcodes.set_local);
  code.push(...unsignedLEB128(localIndexForSymbol(statement.name)));
  break;

Within expressions, when identifiers are found, the get_local operation is used to retrieve the value:

case "identifier":
  code.push(Opcodes.get_local);
  code.push(...unsignedLEB128(localIndexForSymbol(node.value)));
  break;

Also, the function encoding we saw right back at the beginning is updated to add the locals for the function that the emitter builds. The chasm language has a single variable type: everything is a float. Have a go at defining variables and using them within print statements online.

while loops

One of the final language constructs we need in order to achieve the goal of rendering a mandelbrot set is some kind of loop. For chasm I opted for a while loop, as shown in this simple program that prints the numbers 0 to 9:

var f = 0
while (f < 10)
  print f
  f = (f + 1)
endwhile

WebAssembly has various control flow instructions (branch, if, else, loop, block). The following WAT shows how a while loop can be constructed:

(block
  (loop
    [loop condition]
    i32.eqz
    br_if 1
    [nested statements]
    br 0)
)

Branching within WebAssembly is based on stack depth. The outer block and loop instructions push entries onto the control-flow stack. The br_if 1 instruction performs a conditional branch to a stack depth of one, exiting the loop, and br 0 an unconditional branch to a depth of zero, repeating the loop.
Here’s how the emitter produces the same in binary format:

case "whileStatement":
  // outer block
  code.push(Opcodes.block);
  code.push(Blocktype.void);
  // inner loop
  code.push(Opcodes.loop);
  code.push(Blocktype.void);
  // compute the while expression
  emitExpression(statement.expression);
  code.push(Opcodes.i32_eqz);
  // br_if $label0
  code.push(Opcodes.br_if);
  code.push(...signedLEB128(1));
  // the nested logic
  emitStatements(statement.statements);
  // br $label1
  code.push(Opcodes.br);
  code.push(...signedLEB128(0));
  // end loop
  code.push(Opcodes.end);
  // end block
  code.push(Opcodes.end);
  break;

And here it is running in the online playground.

graphics!

We’re nearly there - up to the very last step now! Currently the only way we’ve been able to see output from our chasm programs is via the print statement, which is wired to the console via a function imported by the WebAssembly module. For the mandelbrot set we somehow need to render graphics to the screen. To achieve this we’ll make use of another very important component of WebAssembly modules: linear memory.

As I mentioned previously, WebAssembly only has four numeric data types. You might be wondering how languages with richer type systems (strings, structs, arrays) can compile to WebAssembly. WebAssembly modules can optionally define (or import) a block of linear memory; this is a contiguous block of memory that is shared by the wasm module and its host - in other words, both can read and write to this memory. Therefore, if you want to pass a string to your WebAssembly module, you do this by writing it to linear memory. For chasm we just want some sort of display, so we will use linear memory as a form of Video RAM. The chasm language supports a simple setpixel command which takes three expressions: the x location, the y location and the colour.
For example, the following program fills the screen with a horizontal gradient:

var y = 0
while (y < 100)
  var x = 0
  while (x < 100)
    setpixel x y (x * 2)
    x = (x + 1)
  endwhile
  y = (y + 1)
endwhile

(Try it online)

The setpixel command is implemented using the wasm store instruction that writes to linear memory. On the JavaScript ‘hosting’ side, this same linear memory is read and copied to an HTML canvas. I’ll not reproduce the changes to the code here; you can see them on GitHub. And with that, the chasm language is complete, and able to render the mandelbrot set: (Try it online)

Conclusions

I hope you enjoyed this journey and have either learnt a bit more about WebAssembly or about how compilers work. For me, this project was a lot of fun - I’ve never written a compiler before, but have always wanted to. As you can probably imagine, I’ve not stopped there; the temptation was too great to keep going - I’ve already implemented if / else, and functions / procedures are in the pipeline. I’d also really like to explore some of the more involved topics like memory management, for example introducing strings, arrays and a memory allocator for storage within linear memory. All topics for a future post!
The Node.js Community was amazing in 2017! Here's the proof:
Let me just start by stating this:
2017 was a great year for the Node.js Community!
Our favorite platform has finally matured to be widely adopted in the enterprise, also, a lot of great features have been merged into the core, like async/await & http2, just to mention a few.
The world runs on node ✨🐢🚀✨ #NodeInteractive http://pic.twitter.com/5PwDC2w1mE
— Franziska Hinkelmann (@fhinkel) October 4, 2017
Not only the developers who contribute to Node make this platform so great, but those who create learning materials around it were also exceptional.
In our final article of 2017, we collected a bunch of longreads which were created by the Node community. We used data from Reddit, Hacker News, Twitter & Medium to search for the most read, most shared & liked stuff out there. We intentionally omitted articles written by RisingStack (except 1, couldn't resist) - if you're interested in those, please visit our top15 of 2017 post!
I'm sure we left a bunch of amazing articles out - so if you'd like to get attention to an exceptional one you read, please let us know in the comments section.
Also, keep in mind that this post is not a classic "best of" list, just a collection that shows how amazing the Node community was in 2017! I figured that a big shoutout to the developers who constantly advocate & teach Node would be a great way to say goodbye to this year.
So, here come the proof I promised in the title:
1. We have insanely useful community curated lists:
If you'd like to gain useful knowledge quickly, you can count on the Node community. These lists were created in 2017, and contain so much best practices & shortcuts that you can definitely level up your skills by going through them.
The largest Node.JS best practices list is curated from the top ranked articles and always updated.
Awesome Node.js is an extensive list of delightful Node packages & resources.
The Modern JS Cheatsheet contains the knowledge needed to build modern projects.
These lists have over 300 contributors & gained around 50K stars on GitHub!
Also, we recommend to check out these articles written by Yoni Goldberg & Azat Mardan which will help you to become a better developer:
Node.JS production best practices
10 Node.js Best Practices: Enlightenment from the Node Gurus
2. There are actually insightful Case Studies available!
One of the best articles on using Node.js was written by Scott Nonnenberg in April. The author summarized his Node experiences from the past 5 years, discussing basic concepts, architectural issues, testing, the Node ecosystem & the reason why you shouldn't use New Relic.
"I’ve already shared a few stories, but this time I wanted to focus on the ones I learned the hard way. Bugs, challenges, surprises, and the lessons you can apply to your own projects!" - Scott
..
Also, did you know that Node.js Helps NASA to Keep Astronauts Safe? It's hard to come up with something cooler than that.
NASA choose Node.js for the following reasons:
The relative ease of developing data transfer applications with JavaScript, and the familiarity across the organization with the programming language, which keeps development time and costs low.
Node.js’ asynchronous event loop for I/O operations makes it the perfect solution for a cloud-based database system that sees queries from dozens of users who need data immediately.
The Node.js package manager, npm, pairs incredibly well with Docker to create a microservices architecture that allows each API, function and application to operate smoothly and independently.
To learn more, read the full case study!
Also, shoutout to the Node Foundation who started to assemble and distribute these pretty interesting use-case whitepapers on a regular basis!
3. Node.js Authentication & Security was well covered in 2017:
When it comes to building Node.js apps (or any app..) security is crucial to get right. This is the reason why "Your Node.js authentication tutorial is (probably) wrong" written by micaksica got so much attention on HackerNoon.
tl;dr: The author went on a search of Node.js/Express.js authentication tutorials. All of them were incomplete or made a security mistake in some way that can potentially hurt new users. His post explores some common authentication pitfalls, how to avoid them, and what to do to help yourself when your tutorials don’t help you anymore.
If you plan on reading only one security related article (..but why would you do that?!), this is definitely one of the best ones!
..
Also, we recommend to check out the Damn Vulnerable NodeJS Application github page, which aims to demonstrate the OWASP Top 10 Vulnerabilities and guides you on fixing and avoiding these vulnerabilities.
..
Other great articles which were receiving a lot of praise were Securing Node.js RESTful APIs with JSON Web Tokens by Adnan Rahic, and Two-Factor Authentication with Node.js from David Walsh.
4. API development with Node.js has been made even more easy:
One of the main strengths of Node.js is that you can build REST APIs with it in a very efficient way! There are a lot of articles covering that topic, but these were definitely the most popular ones:
RESTful API design with Node.js walks beginners through the whole process in a very thorough, easy to understand way.
Build a Node.js API in Under 30 Minutes achieves the same result, but it uses ES6 as well.
10 Best Practices for Writing Node.js REST APIs (written by us) goes a little further and includes topics like naming your routes, authentication, black-box testing & using proper cache headers for these resources.
5. We're constantly looking under the hood of Node.js.
Luckily, the Node/JS community delivers when you want to go deeper, in fact there were so many interesting deep-dives that it was really hard to pick out the best ones, but one can try! These articles are really insightful:
Understanding Node.js Event-Driven Architecture
What you should know to really understand the Node.js Event Loop
Node.js Streams: Everything you need to know
How JavaScript works: inside the V8 engine + 5 tips on how to write optimized code
ES8 was Released and here are its Main New Features
6 Reasons Why JavaScript’s Async/Await Blows Promises Away (Tutorial)
What were the best articles that peeked under the hood of JS/Node in your opinion? Share them in the comments!
6. Awesome new Tools were made in 2017:
Two of the most hyped tools of the year were Prettier & Fastify!
In case you don't know, prettier is a JavaScript formatter that works by compiling your code to an AST, and then pretty-printing the AST.
The result is good-looking code that is completely consistent no matter who wrote it. This solves the problem of programmers spending a lot of time manually moving around code in their editor and arguing about styles.
Fastify was introduced by Matteo Collina, Node.js core technical commitee member during Node Interactive 2017 in Vancouver.
Fastify is a new web framework inspired by Hapi, Restify and Express. Fastify is built as a general-purpose web framework, but it shines when building extremely fast HTTP APIs that use JSON as the data format.
..just to mention a few.
7. There are Amazing Crash Courses Available for Free
If you'd like to start learning Node, you can count on the community to deliver free, high-quality resources that can speed up your journey!
One particular author - Adnan Rahić - received a huge amount of praise for releasing crash courses for different topics on Medium. Here they are:
A crash course on testing with Node.js
A crash course on Serverless with Node.js
We hope Adnan will continue this series and create crash courses on other areas of Node as well!
8. Miscellaneous Topics are Covered as Well
What do we mean by miscellaneous topics? Well, those are topics that go beyond the "traditional" use cases of Node.js, where the authors describe something interesting & fun they built with it.
For example there were exceptional articles released that used the OpenCV library for either face recognition or just regular object recognition.
Chatbots were a hype topic too, and there was no shortage of blogposts describing how to build them using different languages.
Building A Simple AI Chatbot With Web Speech API And Node.js was one of the most praised articles. In this post Tomomi Imura walked us through building a chat-bot which can be controlled by voice. Yep, it's pretty cool.
We also recommend to check out Developing A Chatbot Using Microsoft's Bot Framework, LUIS And Node.js too, which is the first part of a series dealing with the topic.
What other fun projects did you see? Add them to the comments section!
9. The community can attend great conferences, like Node Interactive!
Node Interactive Vancouver was a major event which provided great opportunities for its attendees to meet with fellow developers & learn a lot about their favorite subject.
Fortunately every presentation is available on YouTube, so you can get up-to-date even if you couldn't participate this year (just like us.)
The 10 most watched right now are:
Node.js Performance and Highly Scalable Micro-Services by Chris Bailey
New DevTools Features for JavaScript by Yang Guo
The V8 Engine and Node.js by Franzi Hinkelmann
High Performance JS in V8 by Peter Marshall
The Node.js Event Loop: Not So Single Threaded by Bryan Hughes
Welcome and Node.js Update by Mark Hinkle
Take Your HTTP Server to Ludicrous Speed by Matteo Collina
WebAssembly and the Future of the Web by Athan Reines
High Performance Apps with JavaScript and Rust by Amir Yasin
TypeScript - A Love Tale with JavaScript by Bowden Kelly
Of course this list is just the tip of the iceberg, since there are 54 videos uploaded on the Node Foundation's YouTube channel, and most of them are really insightful and fun to watch.
10. Node.js is finally more sought after than Java!
Although Ryan Dahl recently stated in an interview that..
for a certain class of application, which is like, if you're building a server, I can't imagine using anything other than Go.
.. we have no reason to doubt the success of Node.js!
Mikeal Rogers, one of the core organizers of NodeConf, community manager & core contributor at the Node Foundation stated that Node.js will overtake Java within a year in a NewStack interview this summer!
We are now at about 8 million estimated users and still growing at about 100 percent a year. We haven’t passed Java in terms of users yet, but by this time next year at the current growth, we will surpass. - Mikeal.
Mikeal is not alone with his opinion. There is hard data available to prove that Node is becoming more sought after than Java.
According to the data gathered by builtinnode, the demand for Node.js developers has already surpassed the demand for Java devs by the summer of 2017 in the Who is Hiring section of Hacker News!
Since Node was already adopted and is being advocated by the greatest tech companies on Earth, there's no doubt that it will remain a leading technology for many years ahead!
We hope that the Node community will continue to thrive in 2018 as well, and produce a plethora of excellent tools & tutorials.
At RisingStack, we'll keep on advocating & educating Node in 2018 as well, for sure! If you're interested in the best content we produced in 2017, take a look!
Node.js vs Deno: What You Need to Know
Since its announcement, Deno has generated quite a lot of interest within the JavaScript community. As a JavaScript runtime designed by the creator of Node, you might expect there to be a lot of similarities between the two projects, and there are. However, there are also important differences, meaning you can’t just substitute one for the other.
This article will take a look at Deno in relation to its “older cousin” to help understand what they have in common, and what sets them apart. (If you want to get the skinny on Deno first, check out our recent introduction.)
Language Support
Both projects are JavaScript runtimes, allowing JavaScript code to be executed on a computer outside of a web browser. Let’s look at how they stack up in terms of language support.
Node.js
The current LTS release of Node (v12.18.1 as of writing) supports modern JavaScript syntax and features. It also supports about 75% of the ES2020 spec. ECMAScript modules are also supported, but are currently only classed as experimental: you need to use the .mjs file extension, or add the property "type": "module" to your project’s package.json file.
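For example, a minimal package.json opting a project into ECMAScript modules might look like this (field values are illustrative):

```json
{
  "name": "my-app",
  "type": "module",
  "main": "index.js"
}
```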
In order to run TypeScript (or any other language) on Node, the code needs to be compiled to JavaScript that the V8 engine can execute. There are several different ways to do this, with different pros and cons, so getting up and running means having to choose one of these and follow the necessary setup process.
Deno
I was unable to find any mention of the JavaScript spec supported by Deno, but as it also uses V8 under the hood I would assume a similar level of support as in Node. My own tests show that Deno supports ES2020 features like Promise.allSettled() and the globalThis keyword. ECMAScript modules are the default, with CommonJS modules not supported unless you use the Node compatibility library (more about this later).
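As a quick check of that ES2020 support, here's a hedged sketch exercising Promise.allSettled, which waits for every promise to settle and reports each outcome rather than rejecting at the first failure:

```javascript
// Map an array of promises to the status of each, never rejecting overall.
const statuses = (promises) =>
  Promise.allSettled(promises).then((results) => results.map((r) => r.status));

statuses([Promise.resolve("ok"), Promise.reject(new Error("boom"))]).then(
  (s) => console.log(s) // ["fulfilled", "rejected"]
);

console.log(typeof globalThis); // "object" in Deno, Node and browsers
```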
TypeScript is supported as a first-class language in Deno, meaning that it works out-of-the-box: no installing additional tools to transpile to JavaScript first. Of course, the V8 engine doesn’t support TypeScript natively, so Deno is still transpiling the code under the hood, but this is all seamless and transparent to you as a developer.
I also couldn’t find mention of which version of TypeScript Deno v1.0.1 uses, but it supports optional chaining and nullish coalescing (but not private class fields) which would peg it as TS 3.7.
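Both of those TS 3.7 features are easy to verify in a few lines; note that nullish coalescing (??) only falls back on null or undefined, so unlike || it won't discard legitimate falsy values such as 0:

```javascript
const config = { server: { port: 0 } };

// Optional chaining short-circuits to undefined instead of throwing
const host = config.server?.host ?? "localhost"; // "localhost"

// ?? keeps the falsy-but-valid 0, where || would have returned 8080
const port = config.server?.port ?? 8080; // 0
const wrong = config.server.port || 8080; // 8080 - probably not intended!

console.log(host, port, wrong);
```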
APIs
Deno and Node both expose their own APIs to developers, allowing us to write programs that can actually do useful things like read and write files, and send and receive network requests.
Node.js
When Node was first released, there was no built-in support for Promises. As a result of this, most of the APIs for asynchronous operations were written to take an error-first callback:
const fs = require('fs');

fs.readFile('readme.txt', (err, data) => {
  if (err) {
    // Handle the error
  }
  // Otherwise handle the data
});
Even though Node developers now have access to Promises and the async/await syntax, the APIs still expect callbacks in order to maintain backwards compatibility.
Deno
Deno’s API has been designed to take advantage of modern JavaScript features. All the asynchronous methods return Promises. Deno also supports top level await, meaning you can use await in your main script without having to wrap it in an async function.
try {
  const data = await Deno.readFile('readme.txt');
  // Handle the data
} catch (e) {
  // Handle the error
}
The development team also made the decision to use web standards where possible, which means they’ve implemented browser APIs where it’s practical to do so. Deno provides a global window object, and APIs such as addEventListener and fetch. Having access to fetch is particularly nice, as with Node you’d have to polyfill this or use a third-party library.
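With fetch available as a global, a small helper needs no third-party HTTP package. This hypothetical sketch (URL handling and error policy are illustrative) runs unchanged in Deno, and recent Node versions have since added a global fetch too:

```javascript
// Fetch a URL and parse the JSON body, failing on non-2xx responses.
const getJson = (url) =>
  fetch(url).then((response) => {
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return response.json();
  });

// e.g. getJson("https://api.example.com/todos/1").then(console.log);
console.log(typeof fetch); // "function" where the platform provides it
```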
The compatibility module
Deno provides a compatibility layer with the aim of allowing you to reuse existing Node packages. It’s not yet complete, but it does currently support loading CommonJS modules via require(), among other things.
Package Management
Package management is one area where Deno represents a radical departure from Node’s way of doing things. As it’s still early days for Deno, it remains to be seen if its approach will prove to be advantageous.
Node.js
As you might be aware, Node comes with its own package manager called npm, which is used to install and manage third-party packages. npm is mainly used with the online npm registry, where most of the available third-party packages are listed.
When you use npm to install a package into your project, a package.json file is used to specify the package name and acceptable version range. The package itself (plus any packages it depends on) are then downloaded into a node_modules folder inside your project.
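For example, a typical dependency entry looks like this (package name and version are illustrative); the caret range accepts any backwards-compatible release, i.e. ^4.17.1 matches 4.17.1 up to, but not including, 5.0.0:

```json
{
  "dependencies": {
    "express": "^4.17.1"
  }
}
```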
Deno
Deno does away with the need for a package manager altogether. Instead, packages are linked to directly via a URL:
import { Response } from "https://deno.land/[email protected]/http/server.ts";
On the first run of your code, Deno fetches and compiles all the dependencies. They are then cached on the file system, separately from your project, so subsequent runs are much faster.
Similar to npm’s package-lock.json file, Deno allows you to specify a lock file that will be used to ensure that only dependencies matching the exact versions you originally imported will be used.
Continue reading Node.js vs Deno: What You Need to Know on SitePoint.
by Nilson Jacques via SitePoint https://ift.tt/2BXUBGZ
Fixing memory leaks, TypeScript 3.8, and what's new in Angular 9
#476 — February 21, 2020
JavaScript Weekly
Fixing Memory Leaks in Web Applications — When the front-end became more dynamic, part of the bargain was that front-end developers would have to deal with “a new class of problems that don’t really exist in the server-rendered world”, including avoiding memory leaks on 101 different types of client device. Here’s an introduction to client-side memory leaks and how to begin investigating them.
Nolan Lawson
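A common source of such leaks is a listener registered on a long-lived target and never removed. The sketch below illustrates the pattern and its fix; the names (appEvents, mountComponent) are illustrative, not from the article.

```typescript
// A classic leak: a handler registered on a long-lived EventTarget keeps
// its closed-over state (bigBuffer) reachable until it is removed.
const appEvents = new EventTarget(); // lives for the whole app session

function mountComponent(): () => void {
  const bigBuffer = new Array(100_000).fill(0); // simulated heavy state
  const onUpdate = () => {
    void bigBuffer.length; // handler closes over bigBuffer
  };
  appEvents.addEventListener("update", onUpdate);
  // The fix: return a cleanup function and call it when the component unmounts.
  return () => appEvents.removeEventListener("update", onUpdate);
}

const cleanup = mountComponent();
cleanup(); // without this, bigBuffer stays reachable for the app's lifetime
```

Without the cleanup call, every mount adds a handler that keeps its closed-over state alive for the lifetime of appEvents.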
TypeScript 3.8 Released — The latest version of TypeScript (the JS ‘superset’ with static type support) is with us and introduces support for ES private fields, export * as .., top-level await, and more, all demonstrated in detail in this release post.
Daniel Rosenwasser (Microsoft)
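A brief sketch of the headline feature, ECMAScript private fields, which are enforced at runtime (unlike TypeScript's compile-time private modifier):

```typescript
// ECMAScript private fields use a # prefix and are enforced at runtime,
// unlike TypeScript's `private` keyword, which is erased at compile time.
class Counter {
  #count = 0; // inaccessible outside the class body — even via casts

  increment(): number {
    return ++this.#count;
  }
}

const c = new Counter();
c.increment();
c.increment();
console.log(c.increment()); // 3
// Accessing c.#count outside the class is a syntax error: #names only
// exist in-class. Also new in 3.8: `export * as utils from "./utils"`
// re-exports and top-level await in modules.
```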
Get Best in Class Error Reporting for Your JavaScript Apps — Time is money. Software bugs waste both. Save time with Bugsnag. Automatically detect and diagnose errors impacting your users. Get comprehensive diagnostic reports, know immediately which errors are worth fixing, and debug in minutes. Try it free.
Bugsnag sponsor
A Guide to console Commands — Yes, this topic has been done before, but this is truly the most thorough article I’ve seen on the topic. If console.log is as far as you’ve gone before, this is quite the journey.
Travis Almand
▶ 9 Talks from the WebAssembly Summit 2020 — WebAssembly is undoubtedly going to become a more significant technology this decade and these talks provide a glimpse at the cutting edge of WASM with Lin Clark’s talk providing perhaps the best overview of the space.
YouTube
▶ Wes Bos Demonstrates Cloudflare Workers — Workers is Cloudflare’s ‘run JavaScript code at the edge’ service (though it also supports Rust, C and C++ now). This is a good, practical introduction to a common use case and worth watching if you want to use JavaScript to process in-flight HTTP requests.
Wes Bos
⚡️ Quick Releases
react-redux 7.2.0 — Redux bindings for React apps.
Snowpack 1.4.0
Node 13.9.0
jquery.terminal 2.14.0 — Create terminal-like experiences in the browser.
Slugify 0.11 — Turn any string into a useful slug.
💻 Jobs
Lead Server-Side Developer (Sydney or Remote across AUS/NZ) — Build back-end frameworks and server-side software (Express.js + MongoDB + GraphQL) and write server-side code (JavaScript, Node.js).
Compono
Find a Dev Job Through Vettery — Vettery is completely free for job seekers. Make a profile, name your salary, and connect with hiring managers from top employers.
Vettery
ℹ️ If you're interested in running a job listing in JavaScript Weekly, there's more info here.
📘 Articles & Tutorials
What’s Actually New in Angular 9? — We featured the release of Angular 9 a couple of weeks ago – this post digests the release.
John Papa and Kapehe Jorgenson
Aborting a Fetch API Request — The Fetch API didn’t have the ability to abort a request in its early days (unlike XMLHttpRequest) but AbortController and AbortSignal can help you get the job done on all modern browsers.
Ganapati V S
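The core wiring looks like this; the fetch call is commented out and its URL is a placeholder, since the cancellation mechanics can be shown without a network request:

```typescript
// Create a controller and pass its signal to fetch; calling abort()
// rejects the pending request with an AbortError.
const controller = new AbortController();
const { signal } = controller;

// fetch("https://example.com/slow", { signal }).catch((err) => {
//   if (err.name === "AbortError") console.log("request cancelled");
// });

signal.addEventListener("abort", () => {
  console.log("abort event fired");
});
controller.abort();
console.log(signal.aborted); // true
```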
How To Use The HTML Drag-And-Drop API in React — How to use the useReducer hook for managing state in a React functional component.
Chidi Orji
The Ultimate Guide to Container Formats: Decrypting Video Technologies — This is a comprehensive guide to the definitions & processes that you must know when working with back-end video stacks.
Bitmovin Inc. sponsor
An Introduction to Set Objects — A brief introduction to a handy type that was introduced with ES6/ES2015.
Nick Scialli
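For a taste of the type: a Set stores unique values and silently ignores duplicates.

```typescript
const tags = new Set(["js", "ts", "js"]); // duplicate "js" is dropped
tags.add("deno");
console.log(tags.size);          // 3
console.log(tags.has("ts"));     // true
tags.delete("js");
console.log(Array.from(tags));   // ["ts", "deno"] — insertion order kept
```

A common idiom is de-duplicating an array with Array.from(new Set(arr)).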
Make Your Own DevTools? — How creating DevTools specific for your own app can be a way to improve your productivity. I could see this catching on!
Kent C Dodds
Introducing the Page Visibility API — Modern browsers generally handle page visibility on their own nowadays, but you can do a little more with the API, which has excellent browser support.
John Au-Yeung
Building an Image Gallery using PixiJS and WebGL — This goes beyond your typical image gallery with an interesting 3D effect. Lots of bits and pieces here to learn.
Luis Torres
Reactivity in Vue 2 vs Vue 3 — This goes into quite some depth though it’s only for Vue developers or anyone generally curious about how to implement reactivity in JavaScript.
Marina Mosti
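In essence, Vue 2 wraps each property with Object.defineProperty getters/setters, while Vue 3 wraps the whole object in a Proxy. Below is a greatly simplified Proxy sketch; the names reactive and onChange are illustrative, not Vue's actual API, and the real implementation additionally tracks which effects read which keys, and handles nesting, arrays, and collections.

```typescript
// Greatly simplified sketch of Proxy-based reactivity (Vue 3's approach).
function reactive<T extends object>(target: T, onChange: (key: string) => void): T {
  return new Proxy(target, {
    get(obj, key, receiver) {
      // Vue records "this effect depends on `key`" here.
      return Reflect.get(obj, key, receiver);
    },
    set(obj, key, value, receiver) {
      const ok = Reflect.set(obj, key, value, receiver);
      // Vue re-runs the effects that depend on `key` here.
      onChange(String(key));
      return ok;
    },
  });
}

const changes: string[] = [];
const state = reactive({ count: 0 }, (key) => changes.push(key));
state.count++;          // read via the get trap, write via the set trap
console.log(changes);   // ["count"]
```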
How to Make a Simple JavaScript Quiz — Although there are tools available to do this sort of thing, this raw step-by-step process might be a way to learn a variety of coding techniques.
Yaphi Berhanu & James Hibbard
Build a Game of Battleship with TypeScript & Spring Cloud. Free Course
Solace sponsor
Moving from React to Ember in 2020 — There’s not a lot of detail here but it’s interesting to see someone make this choice and the reasons for it. Despite a lack of React-level buzz, Ember remains a solid choice for building apps if you lean into its advantages.
John Rodkey
Thoughts and Best Practices on Building Apps with Vue and TypeScript
Stefan Bauer
Lazy Loading Components in Angular 9 — Have you ever wanted to lazy load an Angular component? Here is a technique you can use with Angular 9.
John Papa
What is Type Coercion in Vanilla JavaScript? — Not particularly for those already advanced with JavaScript, but this is a pretty detailed look at type coercion and includes lots of demos.
Chris Ferdinandi
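A few illustrative cases. Note that coercion is runtime JavaScript behavior, and TypeScript statically rejects some of these expressions, so the snippet routes values through any to observe the underlying JS semantics:

```typescript
// Coercion happens at runtime in JavaScript; TypeScript rejects several
// of these expressions at compile time, so we go through `any`.
const five: any = "5";
const one: any = "1";

console.log(1 + "2");        // "12"  — + concatenates when a string is involved
console.log(five - 1);       // 4     — - always coerces operands to numbers
console.log(Number(""));     // 0     — the empty string coerces to zero
console.log(Boolean("0"));   // true  — every non-empty string is truthy
console.log(1 == one);       // true  — loose equality coerces before comparing
console.log(1 === one);      // false — strict equality never coerces
```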
How to Build a Tinder-esque Carousel with Vanilla JS
Simone Manzi
🔧 Code & Tools
Paged.js: Formats HTML Content Suitably for Creating PDF Output — I think this project’s site could win an award for sheer elegance, but it’s not just elegant, there’s tons of well formatted documentation too. Or.. if you prefer, the GitHub repo.
Cabbage Tree Labs
Hex Engine: A Modern 2D Game Engine for the Browser — Still in early development. A TypeScript- and Canvas-based game engine designed to feel similar to React.
Lily Scott
A Much Faster Way to Debug Code Than with Breakpoints or console.log — Wallaby catches errors in your tests and code and displays them right in your editor as you type, making your development feedback loop more productive.
Wallaby.js sponsor
Serialize JavaScript 3.0: Serialize Objects to a Superset of JSON — Why? So you can serialize regular expressions, dates, functions, and (as of this week) Infinity, as well as the usual things JSON can handle.
Yahoo
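The motivation is easy to demonstrate with plain JSON.stringify, which silently loses exactly these value types:

```typescript
// Plain JSON.stringify drops or mangles several common value types,
// which is why a superset format is useful.
console.log(JSON.stringify(/ab+c/i));         // "{}"   — RegExps become empty objects
console.log(JSON.stringify({ f: () => 1 }));  // "{}"   — functions are dropped
console.log(JSON.stringify(Infinity));        // "null"
const roundTripped = JSON.parse(JSON.stringify(new Date(0)));
console.log(typeof roundTripped);             // "string" — Dates don't survive a round trip
```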
Destiny: A 'Prettier' for File Structures — Scans a folder’s contents, creating a graph based on how the JS/TS files import each other, moving files as needed, then builds a “prettified” file structure. Not sure I’m a fan but it’s an interesting idea.
Ben Awad
Chardin.js: Simple Overlay Instructions for Your Pages/Apps — It’s not every week we get to announce a library’s new release after six years, but Pablo has taken over this classic library. Demo.
Pablo Fernandez
The Official Redux Template for Create React App v1.0 — There’s now an official Redux template for Create React App to quickly set up a React app with Redux Toolkit and a Redux store ready to go.
Mark Erikson
Micromodal: A Tiny JavaScript Library for Creating Accessible Modal Dialogs
Indrashish Ghosh
Panolens.js: A Panorama Viewer Control — A striking full-page demo is on display here, or you can jump straight to GitHub.
Ray Chen
Nano Events: A Tiny (78 Bytes) Event Emitter Library
Andrey Sitnik
via JavaScript Weekly https://ift.tt/32dWesu