#better-npm-publish
codebriefly · 2 months ago
New Post has been published on https://codebriefly.com/whats-new-in-angular-20-key-features-and-more/
What's New in Angular 20: Key Features, Differences from Angular 19, and Major Benefits
​Angular 20, released in May 2025, marks a significant advancement in the Angular framework, introducing performance enhancements, developer-centric features, and architectural refinements. This post delves into the new features of Angular 20, contrasts them with Angular 19, and outlines the major benefits of upgrading.​
Table of Contents
Key Features in Angular 20
1. Enhanced Ivy Compiler
2. Improved Developer Experience
3. Better Integration with PaaS
4. New Components and Libraries
5. Enhanced Security Features
6. Improved Testing and Debugging
Differences Between Angular 19 and Angular 20
Major Benefits of Angular 20
Upgrading to Angular 20
Final Thought
Key Features in Angular 20
1. Enhanced Ivy Compiler
Angular 20 continues to optimize the Ivy compiler, resulting in faster load times and reduced memory consumption. These improvements are particularly beneficial for applications deployed in Platform-as-a-Service (PaaS) environments.​
2. Improved Developer Experience
The Angular CLI has been updated with new commands and options, streamlining the development process. Notably, the ng generate command now supports more templates and configurations, facilitating quicker project scaffolding.​
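For instance, typical scaffolding commands look like this (the names are placeholders; the available schematics depend on your CLI version):

ng generate component user-profile
ng generate service auth
ng generate pipe truncate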
3. Better Integration with PaaS
Angular 20 offers improved integration with popular PaaS providers like Heroku, AWS Elastic Beanstalk, and Google App Engine. The new Angular Deploy tool simplifies the deployment process to these platforms.​
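As a rough sketch of what CLI-driven deployment can look like, assuming you add a deploy builder for your chosen platform (the package name below is only one example):

ng add @angular/fire   # adds a deploy builder (example target)
ng deploy              # builds and deploys using the configured builder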
4. New Components and Libraries
The release introduces new Material Design components, enhancing UI development. Additionally, the Angular Component Dev Kit (CDK) has been expanded with new tools and utilities, aiding in the creation of custom, performant, and accessible components.​
5. Enhanced Security Features
Angular 20 includes built-in protections against common web vulnerabilities like Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). The framework now supports Content Security Policy (CSP), allowing developers to define and enforce security policies effectively.​
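For example, a policy can be delivered as an HTTP response header; the directives below are purely illustrative and not an Angular default:

Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: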
6. Improved Testing and Debugging
Testing utilities have been enhanced, with improvements to Angular TestBed and better integration with modern test tooling, making it easier to write and run tests.
Differences Between Angular 19 and Angular 20
Standalone Components: Default behavior (19) vs. continued support with enhancements (20)
Reactivity: Introduction of the linkedSignal and resource() APIs (19) vs. further optimizations in reactivity (20)
TypeScript Support: Up to TypeScript 5.6 (19) vs. improved TypeScript support with better type checking (20)
CLI Enhancements: AI-driven suggestions and automation (19) vs. new commands and options for streamlined development (20)
Security: AutoCSP for content security policies (19) vs. built-in protections against XSS and CSRF, with CSP support (20)
Testing Utilities: Introduction of new testing tools (19) vs. enhanced TestBed and improved tooling integration (20)
Major Benefits of Angular 20
Performance Optimization: The refined Ivy compiler and improved reactivity lead to faster load times and efficient memory usage.​
Enhanced Developer Productivity: Updated CLI commands and better TypeScript support streamline the development workflow.​
Seamless Deployment: Improved integration with PaaS providers and the Angular Deploy tool simplify the deployment process.​
Robust Security: Built-in protections against common vulnerabilities and CSP support enhance application security.​
Improved Testing: Enhanced testing utilities facilitate easier and more reliable application testing.​
Upgrading to Angular 20
To upgrade your Angular application to version 20, follow these steps:
Update the Angular CLI globally:
npm install -g @angular/cli
Update the Angular CLI in your project:
ng update @angular/cli
Update Angular core and dependencies:
ng update @angular/core
Verify Application Functionality: Run your application and ensure all functionalities work as expected.
Final Thought
Angular 20 brings substantial improvements in performance, security, and developer experience. Upgrading to this version ensures your applications are built with the latest advancements, providing a robust foundation for future development.​
If you need assistance with the upgrade process or have any questions, feel free to ask!
Keep learning & stay safe 😉
You may like:
Testing and Debugging Angular 19 Apps
Performance Optimization and Best Practices in Angular 19
State Management and Data Handling in Angular 19
souhaillaghchimdev · 2 months ago
Developing Open Source Programming Libraries
Open source libraries are essential tools that drive innovation and collaboration in the programming world. They help developers save time, encourage knowledge sharing, and improve software quality. If you've ever thought about giving back to the developer community, building your own open source library is a fantastic way to start.
What is a Programming Library?
A programming library is a collection of reusable code, functions, or classes that help developers perform common tasks without rewriting code from scratch. Examples include libraries for handling dates, making HTTP requests, or performing complex mathematical operations.
Why Build an Open Source Library?
Contribute to the community: Help other developers solve problems and build better software.
Enhance your skills: Learn software design, testing, and documentation.
Build your reputation: Demonstrate your knowledge and gain recognition in the dev community.
Get feedback: Collaborate with developers from around the world and improve your code.
Steps to Build an Open Source Library
Identify a Problem: Find a common pain point or a repetitive task that you can simplify with code.
Plan Your Library: Outline features, structure, and language-specific conventions.
Write Clean Code: Use modular, readable, and well-documented code.
Include Tests: Unit and integration tests ensure your library works as expected.
Write Documentation: Explain how to install, use, and contribute to your library.
Choose a License: Pick an open source license (like MIT, GPL, Apache) to define how others can use your code.
Publish Your Library: Share it on GitHub and package it for ecosystems like npm (JavaScript), PyPI (Python), or crates.io (Rust).
Tools You Might Use
Git & GitHub: For version control and collaboration.
CI/CD Tools: Like GitHub Actions for automating tests and deployments.
Package Managers: npm, pip, Maven, etc., depending on your language.
Documentation Tools: JSDoc, Sphinx, or MkDocs to generate professional docs.
Best Practices
Keep your code simple and focused on solving one problem well.
Write thorough documentation and usage examples.
Be responsive to issues and pull requests.
Encourage contributions and create a welcoming community.
Use semantic versioning (SemVer) for clear version management.
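For instance, npm can handle the SemVer bookkeeping for you (standard npm commands; the version numbers are only illustrative):

npm version patch   # 1.2.3 -> 1.2.4, bug fixes
npm version minor   # 1.2.3 -> 1.3.0, backwards-compatible features
npm version major   # 1.2.3 -> 2.0.0, breaking changes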
Example: A Simple JavaScript Utility Library
Here's a snippet of a function that could go in a utility library:

// utils.js
export function capitalize(str) {
  return str.charAt(0).toUpperCase() + str.slice(1);
}
You can package this as an npm module and publish it with clear usage instructions and tests.
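As a sketch, the package.json for such a module might look like this (the package name and metadata are placeholders):

{
  "name": "my-string-utils",
  "version": "1.0.0",
  "description": "Tiny string helpers",
  "type": "module",
  "main": "utils.js",
  "files": ["utils.js"],
  "license": "MIT"
}

With that in place, publishing takes two commands, assuming you have an npm account and the name is still available:

npm login
npm publish --access public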
Conclusion
Building an open source library is a great way to level up as a programmer while making a real impact. Whether it's a simple utility or a full-featured framework, your library can help others while showcasing your skills. Start small, stay consistent, and join the world of open source contributors!
jcmarchi · 3 months ago
Crafting Strong DX With Astro Components and TypeScript
New Post has been published on https://thedigitalinsider.com/crafting-strong-dx-with-astro-components-and-typescript/
Crafting Strong DX With Astro Components and TypeScript
I’m a big fan of Astro’s focus on developer experience (DX) and the onboarding of new developers. While the basic DX is strong, I can easily make a convoluted system that is hard to onboard my own developers to. I don’t want that to happen.
If I have multiple developers working on a project, I want them to know exactly what to expect from every component that they have at their disposal. This goes double for myself in the future when I’ve forgotten how to work with my own system!
To do that, a developer could go read each component and get a strong grasp of it before using one, but that feels like the onboarding would be incredibly slow. A better way would be to set up the interface so that as the developer is using the component, they have the right knowledge immediately available. Beyond that, it would bake in some defaults that don’t allow developers to make costly mistakes and alerts them to what those mistakes are before pushing code!
Enter, of course, TypeScript. Astro comes with TypeScript set up out of the box. You don’t have to use it, but since it’s there, let’s talk about how to use it to craft a stronger DX for our development teams.
Watch
I’ve also recorded a video version of this article that you can watch if that’s your jam. Check it out on YouTube for chapters and closed captioning.
Setup
In this demo, we’re going to use a basic Astro project. To get this started, run the following command in your terminal and choose the “Minimal” template.
npm create astro@latest
This will create a project with an index route and a very simple “Welcome” component. For clarity, I recommend removing the <Welcome /> component from the route to have a clean starting point for your project.
To add a bit of design, I’d recommend setting up Tailwind for Astro (though, you’re welcome to style your component however you would like including a style block in the component).
npx astro add tailwind
Once this is complete, you’re ready to write your first component.
Creating the basic Heading component
Let’s start by defining exactly what options we want to provide in our developer experience.
For this component, we want to let developers choose from any HTML heading level (H1-H6). We also want them to be able to choose a specific font size and font weight — it may seem obvious now, but we don’t want people choosing a specific heading level for the weight and font size, so we separate those concerns.
Finally, we want to make sure that any additional HTML attributes can be passed through to our component. There are few things worse than having a component and then not being able to do basic functionality later.
Using Dynamic tags to create the HTML element
Let’s start by creating a simple component that allows the user to dynamically choose the HTML element they want to use. Create a new component at ./src/components/Heading.astro.
---
// ./src/component/Heading.astro
const { as } = Astro.props;
const As = as;
---
<As>
  <slot />
</As>
To use a prop as a dynamic element name, we need the variable to start with a capital letter. We can define this as part of our naming convention and make the developer always capitalize this prop in their use, but that feels inconsistent with how most naming works within props. Instead, let’s keep our focus on the DX, and take that burden on for ourselves.
In order to dynamically register an HTML element in our component, the variable must start with a capital letter. We can convert that in the frontmatter of our component. We then wrap all the children of our component in the <As> component by using Astro’s built-in <slot /> component.
Now, we can use this component in our index route and render any HTML element we want. Import the component at the top of the file, and then add <h1> and <h2> elements to the route.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading as="h1">Hello!</Heading>
  <Heading as="h2">Hello world</Heading>
</Layout>
This will render them correctly on the page and is a great start.
Adding more custom props as a developer interface
Let’s clean up the element choosing by bringing it inline to our props destructuring, and then add in additional props for weight, size, and any additional HTML attributes.
To start, let’s bring the custom element selector into the destructuring of the Astro.props object. At the same time, let’s set a sensible default so that if a developer forgets to pass this prop, they still will get a heading.
---
// ./src/component/Heading.astro
const { as: As = "h2" } = Astro.props;
---
<As>
  <slot />
</As>
Next, we’ll get weight and size. Here’s our next design choice for our component system: do we make our developers know the class names they need to use or do we provide a generic set of sizes and do the mapping ourselves? Since we’re building a system, I think it’s important to move away from class names and into a more declarative setup. This will also future-proof our system by allowing us to change out the underlying styling and class system without affecting the DX.
Not only do we future-proof it, but we also get around a limitation of Tailwind by doing this. Tailwind, as it turns out, can’t handle dynamically-created class strings, so by mapping them we solve an immediate issue as well.
In this case, our sizes will go from small (sm) to six times the size (6xl) and our weights will go from “light” to “bold”.
Let’s start by adjusting our frontmatter. We need to get these props off the Astro.props object and create a couple objects that we can use to map our interface to the proper class structure.
---
// ./src/component/Heading.astro
const weights = {
  "bold": "font-bold",
  "semibold": "font-semibold",
  "medium": "font-medium",
  "light": "font-light"
}
const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  "xl": "text-xl",
  "lg": "text-lg",
  "md": "text-md",
  "sm": "text-sm"
}
const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---
Depending on your use case, this amount of sizes and weights might be overkill. The great thing about crafting your own component system is that you get to choose and the only limitations are the ones you set for yourself.
From here, we can then set the classes on our component. While we could add them in a standard class attribute, I find using Astro’s built-in class:list directive to be the cleaner way to programmatically set classes in a component like this. The directive takes an array of classes that can be strings, arrays themselves, objects, or variables. In this case, we’ll select the correct size and weight from our map objects in the frontmatter.
---
// ./src/component/Heading.astro
const weights = {
  bold: "font-bold",
  semibold: "font-semibold",
  medium: "font-medium",
  light: "font-light",
};
const sizes = {
  "6xl": "text-6xl",
  "5xl": "text-5xl",
  "4xl": "text-4xl",
  "3xl": "text-3xl",
  "2xl": "text-2xl",
  xl: "text-xl",
  lg: "text-lg",
  md: "text-md",
  sm: "text-sm",
};
const { as: As = "h2", weight = "medium", size = "2xl" } = Astro.props;
---
<As class:list={[
  sizes[size],
  weights[weight]
]}>
  <slot />
</As>
Your front-end should automatically shift a little in this update. Now your font weight will be slightly thicker and the classes should be applied in your developer tools.
From here, add the props to your index route, and find the right configuration for your app.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading as="h1" size="6xl" weight="light">Hello!</Heading>
  <Heading as="h3" size="xl" weight="bold">Hello world</Heading>
</Layout>
Our custom props are finished, but currently, we can’t use any default HTML attributes, so let’s fix that.
Adding HTML attributes to the component
We don’t know what sorts of attributes our developers will want to add, so let’s make sure they can add any additional ones they need.
To do that, we can spread any other prop being passed to our component, and then add them to the rendered component.
---
// ./src/component/Heading.astro
const weights = {
  // etc.
};
const sizes = {
  // etc.
};
const { as: As = "h2", weight = "medium", size = "md", ...attrs } = Astro.props;
---
<As class:list={[
  sizes[size],
  weights[weight]
]} {...attrs}>
  <slot />
</As>
From here, we can add any arbitrary attributes to our element.
---
// ./src/pages/index.astro
import Layout from '../layouts/Layout.astro';
import Heading from '../components/Heading.astro';
---
<Layout>
  <Heading id="my-id" as="h1" size="6xl" weight="light">Hello!</Heading>
  <Heading class="text-blue-500" as="h3" size="xl" weight="bold">Hello world</Heading>
</Layout>
I’d like to take a moment to truly appreciate one aspect of this code. Our <h1>, we add an id attribute. No big deal. Our <h3>, though, we’re adding an additional class. My original assumption when creating this was that this would conflict with the class:list set in our component. Astro takes that worry away. When the class is passed and added to the component, Astro knows to merge the class prop with the class:list directive and automatically makes it work. One less line of code!
In many ways, I like to consider these additional attributes as “escape hatches” in our component library. Sure, we want our developers to use our tools exactly as intended, but sometimes, it’s important to add new attributes or push our design system’s boundaries. For this, we allow them to add their own attributes, and it can create a powerful mix.
It looks done, but are we?
At this point, if you’re following along, it might feel like we’re done, but we have two issues with our code right now: (1) our component has “red squiggles” in our code editor and (2) our developers can make a BIG mistake if they choose.
The red squiggles come from type errors in our component. Astro gives us TypeScript and linting by default, and sizes and weights can’t be of type: any. Not a big deal, but concerning depending on your deployment settings.
The other issue is that our developers don’t have to choose a heading element for their heading. I’m all for escape hatches, but only if they don’t break the accessibility and SEO of my site.
Imagine if a developer used this with a div instead of an h1 on the page. What would happen? We don’t have to imagine; make the change and see.
It looks identical, but now there’s no <h1> element on the page. Our semantic structure is broken, and that’s bad news for many reasons. Let’s use typing to help our developers make the best decisions and know what options are available for each prop.
Adding types to the component
To set up our types, first we want to make sure we handle any HTML attributes that come through. Astro, again, has our backs and has the typing we need to make this work. We can import the right HTML attribute types from Astro’s typing package. Import the type and then we can extend that type for our own props. In our example, we’ll select the h1 types, since that should cover most anything we need for our headings.
Inside the Props interface, we’ll also add our first custom type. We’ll specify that the as prop must be one of a set of strings, instead of just a basic string type. In this case, we want it to be h1–h6 and nothing else.
---
// ./src/component/Heading.astro
import type { HTMLAttributes } from 'astro/types';

interface Props extends HTMLAttributes<'h1'> {
  as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
}

//... The rest of the file
---
After adding this, you’ll note that in your index route, the <h1> component should now have a red underline for the as="div" property. When you hover over it, it will let you know that the as type does not allow for div and it will show you a list of acceptable strings.
If you delete the div, you should also now have the ability to see a list of what’s available as you try to add the string.
While it’s not a big deal for the element selection, knowing what’s available is a much bigger deal to the rest of the props, since those are much more custom.
Let’s extend the custom typing to show all the available options. We also denote these items as optional by using ?: before defining the type.
While we could define each of these with the same type functionality as our as type, that doesn’t keep this future proofed. If we add a new size or weight, we’d have to make sure to update our type. To solve this, we can use a fun trick in TypeScript: keyof typeof.
There are two helper functions in TypeScript that will help us convert our weights and sizes object maps into string literal types:
typeof: This helper takes an object and converts it to a type. For instance, typeof weights would return the type { bold: string, semibold: string, ...etc }
keyof: This helper function takes a type and returns a list of string literals from that type’s keys. For instance, keyof { bold: string, semibold: string, ...etc } would return "bold" | "semibold" | ...etc, which is exactly what we want for both weights and sizes.
---
// ./src/component/Heading.astro
import type { HTMLAttributes } from 'astro/types';

interface Props extends HTMLAttributes<'h1'> {
  as: "h1" | "h2" | "h3" | "h4" | "h5" | "h6";
  weight?: keyof typeof weights;
  size?: keyof typeof sizes;
}

// ... The rest of the file
Now, when we want to add a size or weight, we get a dropdown list in our code editor showing exactly what’s available on the type. If something is selected, outside the list, it will show an error in the code editor helping the developer know what they missed.
While none of this is necessary in the creation of Astro components, the fact that it’s built in and there’s no additional tooling to set up means that using it is very easy to opt into.
I’m by no means a TypeScript expert, but getting this set up for each component takes only a few additional minutes and can save a lot of time for developers down the line (not to mention, it makes onboarding developers to your system much easier).
thinksmartsoft · 6 years ago
Best Tips for New Drupal Developers
Drupal is a free and open-source CMS written in PHP and distributed under the GNU General Public License. It was founded and is developed by the Drupal community. Drupal runs on UNIX-like systems and Windows. It works as a content management framework, a content management system, and blog software. Drupal can build everything from personal blogs to enterprise applications.
It also provides thousands of modules and designs that let you build the website of your dreams. As a leading provider of Drupal web development services, we have been in the business for several years and cater to clients around the globe. Our services are highly satisfactory and profit-driven, and they will unquestionably add value to your business.
Best Tips For New Drupal Developers Are
INSTALLATION OF DRUPAL BY HELP OF PROFILES
Drupal arrives in ‘core’ format, with the latest version downloadable from drupal.org. If the site has to meet a specific need, such as a political campaign or news website, use one of the distributions or installation profiles that developers have spent hundreds of hours of coding to create. They come with pre-designed themes, modules, and ready-to-use functionality that help you make an ideal site instead of downloading and configuring multiple modules on your own.
IMPORTANT USER
Take good care to record the email address, username, and password of the first user. This account serves as the primary user, or site owner. Keep an eye on the site and keep it up to date.
BE FAMILIAR WITH THE BASIC STRUCTURE OF URL FOR THE CONTENT
After the necessary installation is done, you can create content by navigating directly to the relevant URL on your website.
Drupal website content is known as a node. A node can be anything, such as a page, an image, a story, or an announcement; in other words, a node is an individual content unit.
SEARCH ENGINE AS A FRIEND
Right from the start you may face installation errors; this may be because your local Drupal installation differs from the one installed on your remote hosting environment. Whenever there is a problem like this, take the help of Google: copy and paste the specific error message that is shown, wrapped in double quotes ("") so that the search engine shows exact results.
TROUBLESHOOT WHITE SCREEN OF DEATH
Sometimes while navigating a Drupal website you notice nothing except a pale white screen, termed the "white screen of death." It means Drupal has hit a PHP error. Many things can cause this; it can even be triggered by a missing semicolon. Whenever such a problem occurs, enable PHP error reporting to find the source of the issue.
BACK UP OF DATABASE
Backing up the database is necessary, as it contains your site’s content and configuration, so find a safe location to store the backup. For small business websites, you can even email yourself the SQL dump file and store it.
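A quick sketch of how such a backup might be taken, assuming Drush is installed (the file path and database credentials are placeholders):

drush sql-dump --result-file=../backups/site-backup.sql
# or, without Drush:
mysqldump -u dbuser -p drupal_db > site-backup.sql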
CLEAN CACHE DATA WHEN SOMETHING IS WRONG
Whenever a page is requested, Drupal creates a cache of the content. When your website behaves strangely, clear the cache.
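If Drush is available, clearing the cache is a single command (the exact command depends on your Drupal and Drush versions); otherwise use the Performance page in the admin UI:

# Drupal 8 and later
drush cache:rebuild   # short alias: drush cr
# Drupal 7
drush cc all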
USERS ROLE AND PERMISSIONS
Once you have a grasp of the modules you are using, you have to decide which permissions to assign to each kind of user. The default user roles are as follows:
Anonymous user
Authenticated user
Administrator
UNDERSTANDING BASIC THEME OF DEVELOPING CONCEPTS
There are numerous themes available as free or paid downloads, or they can be custom designs created by web designers.
USE OF CONTACT FORM
Most clients require some kind of web form, usually known as a contact form, so their site users can message them. The Drupal web development services we offer will assist you in accessing the built-in contact form; it is better to enable it and make any necessary changes before deploying it to your Drupal site.
MANAGEMENT OF YOUR NAVIGATION MENU
Drupal comes with three types of menu blocks:
 Navigation links - the main menu, linked and managed dynamically by Drupal.
 Primary links - major sections of the site, such as the tabs located at the top of the page.
 Secondary links - an additional set of links for items such as legal notices, contact details, and other less important navigation elements.
LOVE FOR THE VIEWS
By using Views, you get terrific control over the otherwise tedious task of displaying your information. With the Views editor you can query your nodes, display them in various styles such as grids, lists, or tables, and arrange them in an orderly way.
HAVE A PARTNERSHIP WITH A HOSTING PROVIDER YOU TRUST
If your hosting provider doesn’t have experience hosting Drupal, choose a host with a good grip on technical support and a responsible team that understands your work and concerns, and those of your clients as well.
Node.js has become a popular and famous platform in the last few years. It is easy to get started with a Node.js project, but once you get past the basic hello-world app, finding the best structure for your code and dealing with errors can sometimes become a nightmare.
And that kind of nightmare makes the difference between a robust, well-produced application and a disastrous launch.
Let’s have a look at some Node.js traps
Start the project with npm init
Many people are familiar with npm as the way of installing dependencies, but it can do much more than this.
Set up .npmrc
If you have used npm before, you have probably used the --save flag, which updates package.json. Thanks to this, when developers clone or copy a project, they can be sure they get the right dependencies.
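A small sketch of a project-level .npmrc that makes this behaviour explicit (save and save-exact are standard npm config keys):

# .npmrc at the project root
# record installed packages in package.json
save=true
# pin exact versions instead of ^ ranges
save-exact=true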
For Drupal web development services, our team of certified experts comprises Drupal developers and solution architects who take a cost-effective, agile approach to creating enterprise-level applications and websites. We know how to build Drupal sites that help companies earn money, save time, and reach customers.
We help you develop solutions for e-commerce websites using Drupal development services, building your e-commerce or any other business website, and also provide solutions relating to various delivery modes, payments, and tax calculation to accomplish your tasks in a better way.
As a well-known Drupal web development company, we provide professional, scalable, and robust Drupal development services for publishing, hi-tech enterprise, and e-commerce. Besides, our team of Drupal developers has extensive experience creating websites using headless Drupal, in which content can be consumed by different applications and reused based on the circumstances. We give you unmatched Drupal web development services so you can experience the digital world.
xnetics · 2 years ago
VS Code Extensions As Malware
VS Code is popular – according to StackOverflow 75% of developers use it. As a raw code editor it isn’t particularly impressive, but add to it the huge number of extensions that are available and you can assemble something workable and sometimes even something good. I use it because I can configure it to work with a range of languages in a range of situations on a range of platforms. Personally I’d prefer it if the core extensions were part of a configurable core program, but this is not how things are. Finding extensions is a big problem in terms of knowing who to trust. For most of the time I try to limit my use to well-known extensions, preferably authored by companies I know. Searching the 40K extensions for one that does the job and isn’t just abandonedware or halfstartedware is difficult. Now it seems it is also dangerous.
The problem is that extensions run with the same privileges as the user and there is no sandbox to keep the extension walled up in a prison. Once you realize this you can see how devastating an evil extension might be.
The researchers from Aqua Team Nautilus decided to see if the threat was real. They created an extension that copied the look and feel of the Prettier code formatter and entered it into the Marketplace – the official source of extensions. What they created looks identical to the real entry until you notice that the URL is pretier – i.e. it’s a case of typosquatting. Typosquatting is where you make use of user’s inability to type accurately 100% of the time to get them to visit web pages which differ from the real thing by a spelling or punctuation error. As a dyslexic programmer you can see that I’m personally very vulnerable to this problem. And while I take extra care I don’t think I’d ever spot it.
The problem is made worse by the fact that the extension name doesn’t need to be unique – it’s a display name. Other details are also easy to duplicate, including the project details and publisher. Harder to duplicate is the Unique Identifier which, yes, has to be unique, but again typosquatting comes to the rescue and esbenp.prettier-vscode becomes espenp.prettier-vscode – which is identical from my dyslexic point of view. Of course, the number of installs and ratings can’t be spoofed, but as people download the faux extension they both go up.
The researchers then go on to point out that Microsoft’s attempts at verifying extensions aren’t particularly convincing. For example, the blue Verified Author tick simply means that the author owns a domain – any domain. From here the attacker can opt to show the display name complete with the blue tick for the author’s name.
So did it work? Were real programmers, some of whom are not dyslexic and hence not overly vulnerable to typosquatting, fooled?
Yes they were.
“In just under 48 hours, we got more than a thousand installs by active developers from all around the world! Now, imagine a real attacker (which would give the extension much more time to be active thus gain more credibility), with a real malicious extension, installed on many developers compromising many organizations. The impact of this is critical.”
The researchers also point out that extensions make use of Node.js NPM as a package manager and any problems here will trickle down. They also point out that the same problems could occur with Visual Studio and Azure DevOps, but didn’t pursue this angle.
The part of the story that worries me is that extensions are not sandboxed. I go to a lot of trouble to avoid using software that is far better isolated from my development machine’s environment. It seems I have to look more carefully in future.
stevewilliams1 · 3 years ago
WordPress with React to Create Headless CMS for the Web Application
WordPress offers many fantastic features, and one of them is the REST API. WordPress enables web developers to produce exciting plugins and themes and gives them access to the WordPress CMS to power third-party applications. These are the amazing benefits of WordPress development. In this blog, let’s take a brief look at headless WordPress.
What is Headless WordPress?
Headless WordPress refers to websites that utilize WordPress primarily for content management, with some other custom frontend stack to actually display that content to a site visitor.
A site built with headless WordPress has many advantages, one of the primary benefits of this approach is decoupling content editing teams and developers.
Advantages of Headless WordPress CMS:
WordPress is a flexible open-source platform that can be used to build any website. By using WordPress as a headless CMS, you can build the front end of the web application using any web technology while still managing your content in one of the most well-liked CMSs.
Benefits of Headless WordPress
Faster Performance
WordPress websites powered by decoupled or statically generated frontends are incredibly responsive, with millisecond-fast load times and prefetched delivery at the edge.
Improved Security
Static-Site Generators acting as a frontend for WordPress have no active web servers and no reachable database, thus presenting a smaller attack surface and this approach prevents malicious requests, DDoS attacks, and accidental exposure.
Greater Flexibility
Frontends can integrate WordPress content into complex, organization-wide websites which may combine WordPress content with content from other CMSs and web services.
Lighter and Easier Redesigns
The website is smaller and simpler when you use WordPress as a headless CMS. You can remodel some of its components without impacting the website’s front-end appearance and user experience.
Publishing Content Over Several Platforms
This enables users to publish content across several platforms, including PCs, tablets, and mobile phones. The website’s reach will grow as a result, and it will ensure that younger audiences are better served.
Disadvantages of Headless WordPress:
Although there are many advantages to using Headless WordPress, it is not a surefire option for the website due to various drawbacks associated with using a headless CMS. 
Need for advanced coding expertise
Challenges in publishing
Loss of access to crucial plug-ins
How to Install React on WordPress:
React is developed and maintained by Facebook and it is one of the most widely used JavaScript frameworks for developing front-ends and is used in single-page apps. Once WordPress is configured in your system, creating apps using React is simple. 
Install Text editing software - Visual Studio Code.
Version control with Git
To create a web app using React, follow the steps
Install the packages for API calls next, and then launch a text editor to access the folder.
NodeJS and NPM
Once the application has been launched using the proper command, you are ready to begin building a web app. 
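As a minimal sketch of wiring React to the WordPress REST API (the site URL and component name below are placeholders, not part of any specific project):

import { useEffect, useState } from "react";

// Fetch posts from the standard WordPress REST API endpoint
// (replace example.com with your own WordPress site URL).
export default function Posts() {
  const [posts, setPosts] = useState([]);

  useEffect(() => {
    fetch("https://example.com/wp-json/wp/v2/posts")
      .then((res) => res.json())
      .then(setPosts)
      .catch(console.error);
  }, []);

  // Render each post title; WordPress returns titles as rendered HTML.
  return (
    <ul>
      {posts.map((post) => (
        <li
          key={post.id}
          dangerouslySetInnerHTML={{ __html: post.title.rendered }}
        />
      ))}
    </ul>
  );
}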
React-based WordPress backend with the editor
WordPress has always been a relatively inclusive platform, allowing both programmers and non-programmers (such as bloggers and others) to build themes and plugins and launch websites.
It is unreasonable to require everyone to learn React in order to construct a block; that is the main drawback, even though the React-based editor improves the user experience.
If you find these procedures challenging, please have the setup done by a professional; you can completely configure your headless CMS once it is ready to be used. Let’s get connected if you are looking to hire a WordPress developer or have any similar website requirements.
tonkiprecision · 3 years ago
Position of thumbnail in videolightbox
Install the package:
npm install vue-it-bigger
Then import it in your project at your entry point (main.js normally):
import Vue from 'vue'
import LightBox from 'vue-it-bigger'
Import the CSS style:
require('vue-it-bigger/dist/')
Media items accept options such as:
caption: 'caption to display.' (optional)
srcset: '.' (optional, for displaying responsive images)
autoplay: true (optional: autoplay video when the lightbox opens)
Other options include: whether to show the lightbox or not at the beginning, the index of the image that you want to start at, the time to stop at an image before moving on to the next image, and a flag to set to `true` to avoid scrolling views behind the lightbox.
You can simply view App.vue to see how to use vue-it-bigger.
All interface elements have a background for better visibility.
Moved thumbnails to the top of the screen.
Moved caption bar and image counter off the media to the bottom of the screen.
When opening the lightbox the media doesn't flicker.
Lightbox opens and closes with a short fade.
Can skip to next / previous media programatically.
All of the graphics (previous, next and close buttons) can be customized via slots.
Can show an HTML enabled caption under each image or video.
Optional thumbnail strip with all of the gallery's media.
Unobtrusive interface that disappears after a few seconds, reappears on mouse activity.
Step 4 - Add the video embed (pop-up Vimeo videos) inside your own page. You can paste it in any place on your page where you want to add the video popup.
* Export your LightBox gallery using the Video LightBox app into any test folder on a local drive.
* Open the generated index.html file in any text editor.
* Copy all code for Video LightBox from the HEAD and BODY tags and paste it on your page in the HEAD tag and in the place where you want to have a gallery (inside the BODY tag).
codebriefly · 2 months ago
New Post has been published on https://codebriefly.com/building-and-deploying-angular-19-apps/
Building and Deploying Angular 19 Apps
Efficiently building and deploying Angular 19 applications is crucial for delivering high-performance, production-ready web applications. In this blog, we will cover the complete process of building and deploying Angular 19 apps, including best practices and optimization tips.
Table of Contents
Why Building and Deploying Matters
Preparing Your Angular 19 App for Production
Building Angular 19 App
Key Optimizations in Production Build:
Configuration Example:
Deploying Angular 19 App
Deploying on Firebase Hosting
Deploying on AWS S3 and CloudFront
Automating Deployment with CI/CD
Example with GitHub Actions
Best Practices for Building and Deploying Angular 19 Apps
Final Thoughts
Why Building and Deploying Matters
Building and deploying are the final steps of the development lifecycle. Building compiles your Angular project into static files, while deploying makes it accessible to users on a server. Proper optimization and configuration ensure faster load times and better performance.
Preparing Your Angular 19 App for Production
Before building the application, make sure to:
Update Angular CLI: Keep your Angular CLI up to date.
npm install -g @angular/cli
Optimize Production Build: Enable AOT compilation and minification.
Environment Configuration: Use the correct environment variables for production.
Building Angular 19 App
To create a production build, run the following command:
ng build --configuration=production
This command generates optimized files in the dist/ folder.
Key Optimizations in Production Build:
AOT Compilation: Reduces bundle size by compiling templates during the build.
Tree Shaking: Removes unused modules and functions.
Minification: Compresses HTML, CSS, and JavaScript files.
Source Map Exclusion: Disables source maps for production builds to improve security and reduce file size.
Configuration Example:
Modify the angular.json file to customize production settings:
"configurations": "production": "optimization": true, "outputHashing": "all", "sourceMap": false, "namedChunks": false, "extractCss": true, "aot": true, "fileReplacements": [ "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" ]
    Deploying Angular 19 App
Deployment options for Angular apps include:
Static Web Servers (e.g., NGINX, Apache)
Cloud Platforms (e.g., AWS S3, Firebase Hosting)
Docker Containers
Serverless Platforms (e.g., AWS Lambda)
Deploying on Firebase Hosting
Install Firebase CLI:
npm install -g firebase-tools
Login to Firebase:
firebase login
Initialize Firebase Project:
firebase init hosting
Deploy the App:
firebase deploy
Deploying on AWS S3 and CloudFront
Build the Project:
ng build --configuration=production
Upload to S3:
aws s3 sync ./dist/my-app s3://my-angular-app
Configure CloudFront Distribution: Set the S3 bucket as the origin.
Automating Deployment with CI/CD
Setting up a CI/CD pipeline ensures seamless updates and faster deployments.
Example with GitHub Actions
Create a .github/workflows/deploy.yml file:
name: Deploy Angular App
on: [push]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '18'
      - run: npm install
      - run: npm run build -- --configuration=production
      - name: Deploy to S3
        run: aws s3 sync ./dist/my-app s3://my-angular-app --delete
Best Practices for Building and Deploying Angular 19 Apps
Optimize for Production: Always use AOT and minification.
Use CI/CD Pipelines: Automate the build and deployment process.
Monitor Performance: Utilize tools like Lighthouse to analyze performance.
Secure the Application: Enable HTTPS and configure secure headers.
Cache Busting: Use hashed filenames to avoid caching issues.
Containerize with Docker: Simplifies deployments and scales easily.
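For the Docker option, here is a rough sketch of a multi-stage build (the Node and NGINX image tags and the dist/my-app output path are assumptions for a typical Angular project):

# Stage 1: build the Angular app
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build -- --configuration=production

# Stage 2: serve the static files with NGINX
FROM nginx:alpine
# Adjust the dist path to match your project's output folder
COPY --from=build /app/dist/my-app /usr/share/nginx/html
EXPOSE 80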
Final Thoughts
Building and deploying Angular 19 applications efficiently can significantly enhance performance and maintainability. Following best practices and leveraging cloud hosting services ensure that your app is robust, scalable, and fast. Start building your next Angular project with confidence!
Keep learning & stay safe 😉
You may like:
Testing and Debugging Angular 19 Apps
Performance Optimization and Best Practices in Angular 19
UI/UX with Angular Material in Angular 19
jcmarchi · 3 months ago
Automated Visual Regression Testing With Playwright
New Post has been published on https://thedigitalinsider.com/automated-visual-regression-testing-with-playwright/
Automated Visual Regression Testing With Playwright
Comparing visual artifacts can be a powerful, if fickle, approach to automated testing. Playwright makes this seem simple for websites, but the details might take a little finessing.
Recent downtime prompted me to scratch an itch that had been plaguing me for a while: The style sheet of a website I maintain has grown just a little unwieldy as we’ve been adding code while exploring new features. Now that we have a better idea of the requirements, it’s time for internal CSS refactoring to pay down some of our technical debt, taking advantage of modern CSS features (like using CSS nesting for more obvious structure). More importantly, a cleaner foundation should make it easier to introduce that dark mode feature we’re sorely lacking so we can finally respect users’ preferred color scheme.
However, being of the apprehensive persuasion, I was reluctant to make large changes for fear of unwittingly introducing bugs. I needed something to guard against visual regressions while refactoring — except that means snapshot testing, which is notoriously slow and brittle.
In this context, snapshot testing means taking screenshots to establish a reliable baseline against which we can compare future results. As we’ll see, those artifacts are influenced by a multitude of factors that might not always be fully controllable (e.g. timing, variable hardware resources, or randomized content). We also have to maintain state between test runs, i.e. save those screenshots, which complicates the setup and means our test code alone doesn’t fully describe expectations.
Having procrastinated without a more agreeable solution revealing itself, I finally set out to create what I assumed would be a quick spike. After all, this wouldn’t be part of the regular test suite; just a one-off utility for this particular refactoring task.
Fortunately, I had vague recollections of past research and quickly rediscovered Playwright’s built-in visual comparison feature. Because I try to select dependencies carefully, I was glad to see that Playwright seems not to rely on many external packages.
Setup
The recommended setup with npm init playwright@latest does a decent job, but my minimalist taste had me set everything up from scratch instead. This do-it-yourself approach also helped me understand how the different pieces fit together.
Given that I expect snapshot testing to only be used on rare occasions, I wanted to isolate everything in a dedicated subdirectory, called test/visual; that will be our working directory from here on out. We’ll start with package.json to declare our dependencies, adding a few helper scripts (spoiler!) while we’re at it:
"scripts": "test": "playwright test", "report": "playwright show-report", "update": "playwright test --update-snapshots", "reset": "rm -r ./playwright-report ./test-results ./viz.test.js-snapshots , "devDependencies": "@playwright/test": "^1.49.1"
If you don’t want node_modules hidden in some subdirectory but also don’t want to burden the root project with this rarely-used dependency, you might resort to manually invoking npm install --no-save @playwright/test in the root directory when needed.
With that in place, npm install downloads Playwright. Afterwards, npx playwright install downloads a range of headless browsers. (We’ll use npm here, but you might prefer a different package manager and task runner.)
We define our test environment via playwright.config.js with about a dozen basic Playwright settings:
import { defineConfig, devices } from "@playwright/test";

let BROWSERS = ["Desktop Firefox", "Desktop Chrome", "Desktop Safari"];
let BASE_URL = "http://localhost:8000";
let SERVER = "cd ../../dist && python3 -m http.server";
let IS_CI = !!process.env.CI;

export default defineConfig({
    testDir: "./",
    fullyParallel: true,
    forbidOnly: IS_CI,
    retries: 2,
    workers: IS_CI ? 1 : undefined,
    reporter: "html",
    webServer: {
        command: SERVER,
        url: BASE_URL,
        reuseExistingServer: !IS_CI
    },
    use: {
        baseURL: BASE_URL,
        trace: "on-first-retry"
    },
    projects: BROWSERS.map(ua => ({
        name: ua.toLowerCase().replaceAll(" ", "-"),
        use: { ...devices[ua] }
    }))
});
Here we expect our static website to already reside within the root directory’s dist folder and to be served at localhost:8000 (see SERVER; I prefer Python there because it’s widely available). I’ve included multiple browsers for illustration purposes. Still, we might reduce that number to speed things up (thus our simple BROWSERS list, which we then map to Playwright’s more elaborate projects data structure). Similarly, continuous integration is YAGNI for my particular scenario, so that whole IS_CI dance could be discarded.
Capture and compare
Let’s turn to the actual tests, starting with a minimal sample.test.js file:
import { test, expect } from "@playwright/test";

test("home page", async ({ page }) => {
    await page.goto("/");
    await expect(page).toHaveScreenshot();
});
npm test executes this little test suite (based on file-name conventions). The initial run always fails because it first needs to create baseline snapshots against which subsequent runs compare their results. Invoking npm test once more should report a passing test.
Changing our site, e.g. by recklessly messing with build artifacts in dist, should make the test fail again. Such failures will offer various options to compare expected and actual visuals:
We can also inspect those baseline snapshots directly: Playwright creates a folder for screenshots named after the test file (sample.test.js-snapshots in this case), with file names derived from the respective test’s title (e.g. home-page-desktop-firefox.png).
Generating tests
Getting back to our original motivation, what we want is a test for every page. Instead of arduously writing and maintaining repetitive tests, we’ll create a simple web crawler for our website and have tests generated automatically; one for each URL we’ve identified.
Playwright’s global setup enables us to perform preparatory work before test discovery begins: Determine those URLs and write them to a file. Afterward, we can dynamically generate our tests at runtime.
While there are other ways to pass data between the setup and test-discovery phases, having a file on disk makes it easy to modify the list of URLs before test runs (e.g. temporarily ignoring irrelevant pages).
Site map
The first step is to extend playwright.config.js by inserting globalSetup and exporting two of our configuration values:
export let BROWSERS = ["Desktop Firefox", "Desktop Chrome", "Desktop Safari"];
export let BASE_URL = "http://localhost:8000";
// etc.

export default defineConfig({
    // etc.
    globalSetup: require.resolve("./setup.js")
});
Although we’re using ES modules here, we can still rely on CommonJS-specific APIs like require.resolve and __dirname. It appears there’s some Babel transpilation happening in the background, so what’s actually being executed is probably CommonJS? Such nuances sometimes confuse me because it isn’t always obvious what’s being executed where.
We can now reuse those exported values within a newly created setup.js, which spins up a headless browser to crawl our site (just because that’s easier here than using a separate HTML parser):
import { BASE_URL, BROWSERS } from "./playwright.config.js";
import { createSiteMap, readSiteMap } from "./sitemap.js";
import playwright from "@playwright/test";

export default async function globalSetup(config) {
    // only create site map if it doesn't already exist
    try {
        readSiteMap();
        return;
    } catch(err) {}

    // launch browser and initiate crawler
    let browser = playwright.devices[BROWSERS[0]].defaultBrowserType;
    browser = await playwright[browser].launch();
    let page = await browser.newPage();
    await createSiteMap(BASE_URL, page);
    await browser.close();
}
This is fairly boring glue code; the actual crawling is happening within sitemap.js:
createSiteMap determines URLs and writes them to disk.
readSiteMap merely reads any previously created site map from disk. This will be our foundation for dynamically generating tests. (We’ll see later why this needs to be synchronous.)
Fortunately, the website in question provides a comprehensive index of all pages, so my crawler only needs to collect unique local URLs from that index page:
function extractLocalLinks(baseURL) {
    let urls = new Set();
    let offset = baseURL.length;
    for(let { href } of document.links) {
        if(href.startsWith(baseURL)) {
            let path = href.slice(offset);
            urls.add(path);
        }
    }
    return Array.from(urls);
}
Wrapping that in a more boring glue code gives us our sitemap.js:
import { readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

let ENTRY_POINT = "/topics";
let SITEMAP = join(__dirname, "./sitemap.json");

export async function createSiteMap(baseURL, page) {
    await page.goto(baseURL + ENTRY_POINT);
    let urls = await page.evaluate(extractLocalLinks, baseURL);
    let data = JSON.stringify(urls, null, 4);
    writeFileSync(SITEMAP, data, { encoding: "utf-8" });
}

export function readSiteMap() {
    try {
        var data = readFileSync(SITEMAP, { encoding: "utf-8" });
    } catch(err) {
        if(err.code === "ENOENT") {
            throw new Error("missing site map");
        }
        throw err;
    }
    return JSON.parse(data);
}

function extractLocalLinks(baseURL) {
    // etc.
}
The interesting bit here is that extractLocalLinks is evaluated within the browser context — thus we can rely on DOM APIs, notably document.links — while the rest is executed within the Playwright environment (i.e. Node).
Tests
Now that we have our list of URLs, we basically just need a test file with a simple loop to dynamically generate corresponding tests:
for(let url of readSiteMap()) {
    test(`page at ${url}`, async ({ page }) => {
        await page.goto(url);
        await expect(page).toHaveScreenshot();
    });
}
This is why readSiteMap had to be synchronous above: Playwright doesn’t currently support top-level await within test files.
In practice, we’ll want better error reporting for when the site map doesn’t exist yet. Let’s call our actual test file viz.test.js:
import { readSiteMap } from "./sitemap.js";
import { test, expect } from "@playwright/test";

let sitemap = [];
try {
    sitemap = readSiteMap();
} catch(err) {
    test("site map", ({ page }) => {
        throw new Error("missing site map");
    });
}

for(let url of sitemap) {
    test(`page at ${url}`, async ({ page }) => {
        await page.goto(url);
        await expect(page).toHaveScreenshot();
    });
}
Getting here was a bit of a journey, but we’re pretty much done… unless we have to deal with reality, which typically takes a bit more tweaking.
Exceptions
Because visual testing is inherently flaky, we sometimes need to compensate via special casing. Playwright lets us inject custom CSS, which is often the easiest and most effective approach. Tweaking viz.test.js…
// etc.
import { join } from "node:path";

let OPTIONS = {
    stylePath: join(__dirname, "./viz.tweaks.css")
};

// etc.
await expect(page).toHaveScreenshot(OPTIONS);
// etc.
… allows us to define exceptions in viz.tweaks.css:
/* suppress state */
main a:visited {
    color: var(--color-link);
}

/* suppress randomness */
iframe[src$="/articles/signals-reactivity/demo.html"] {
    visibility: hidden;
}

/* suppress flakiness */
body:has(h1 a[href="/wip/unicode-symbols/"]) main tbody > tr:last-child > td:first-child {
    font-size: 0;
    visibility: hidden;
}
:has() strikes again!
Page vs. viewport
At this point, everything seemed hunky-dory to me, until I realized that my tests didn’t actually fail after I had changed some styling. That’s not good! What I hadn’t taken into account is that .toHaveScreenshot only captures the viewport rather than the entire page. We can rectify that by further extending playwright.config.js.
export let WIDTH = 800;
export let HEIGHT = WIDTH;
// etc.

projects: BROWSERS.map(ua => ({
    name: ua.toLowerCase().replaceAll(" ", "-"),
    use: {
        ...devices[ua],
        viewport: {
            width: WIDTH,
            height: HEIGHT
        }
    }
}))
…and then by adjusting viz.test.js‘s test-generating loop:
import { WIDTH, HEIGHT } from "./playwright.config.js";

// etc.

for(let url of sitemap) {
    test(`page at ${url}`, async ({ page }) => {
        checkSnapshot(url, page);
    });
}

async function checkSnapshot(url, page) {
    // determine page height with default viewport
    await page.setViewportSize({ width: WIDTH, height: HEIGHT });
    await page.goto(url);
    await page.waitForLoadState("networkidle");
    let height = await page.evaluate(getFullHeight);

    // resize viewport before snapshotting
    await page.setViewportSize({ width: WIDTH, height: Math.ceil(height) });
    await page.waitForLoadState("networkidle");
    await expect(page).toHaveScreenshot(OPTIONS);
}

function getFullHeight() {
    return document.documentElement.getBoundingClientRect().height;
}
Note that we’ve also introduced a waiting condition, holding until there’s no network traffic for a while in a crude attempt to account for stuff like lazy-loading images.
Be aware that capturing the entire page is more resource-intensive and doesn’t always work reliably: You might have to deal with layout shifts or run into timeouts for long or asset-heavy pages. In other words: This risks exacerbating flakiness.
Conclusion
So much for that quick spike. While it took more effort than expected (I believe that’s called “software development”), this might actually solve my original problem now (not a common feature of software these days). Of course, shaving this yak still leaves me itchy, as I have yet to do the actual work of scratching CSS without breaking anything. Then comes the real challenge: Retrofitting dark mode to an existing website. I just might need more downtime.
0 notes
muenchkevin · 4 years ago
Text
Another popular npm package infected with malware
In an audacious incident, threat actors hijacked the account of the developer of a widely used JavaScript library, UAParser.js, replacing the legitimate code with a malicious version laced with malware and trojans.
The library’s developer Faisal Salman noticed something was off when his email was flooded by spam messages.
“I believe someone was hijacking my npm account and published some compromised packages (0.7.29, 0.8.0, 1.0.0) which will probably install malware,” was Salman’s first reaction as he yanked the library and asked users to revert to a previous release.
UAParser.js is used by the likes of Facebook, Apple, Amazon, Microsoft, IBM, and a lot more, and clocks between 6-7 million downloads every week. 
Attacking developers
While threat actors have previously targeted public repositories to push malicious software and malware, those attacks have largely been restricted to typosquatting or dependency hijacking.
These are attacks where the authors of the malicious libraries hope to take advantage of downstream developers accidentally installing their malware-riddled library by misspelling the name of the original library. In fact, just last week, Sonatype researchers shared details about their efforts to rid npm of such malicious libraries.
Incidentally, one of the recent malevolent libraries Sonatype helped remove last week, named Klow(n), was found impersonating UAParser.js, in what was labeled a “weak brandjacking attempt.”
However, hijacking a developer’s account to replace genuine code with poisoned code is a lot more serious, especially when the target is as popular as UAParser.js.
According to The Record, analysis of the malicious library revealed that it downloaded scripts from a remote server, including a cryptominer and an information-stealing trojan that could harvest credentials from the operating system and web browsers, potentially leading to all kinds of identity theft.
Soon after he pulled the offending library, Salman uploaded new, clean releases and urged users to update.
The incident even led the US Cybersecurity and Infrastructure Security Agency (CISA) to publish a security alert, owing to the library’s popularity.
Source: https://www.techradar.com/news/another-popular-npm-package-infected-with-malware/
0 notes
npmjs · 6 years ago
Text
AppSec POV on Dependency Management
It’s tempting to assume that all packages in the npm registry are safe to use––and, for the vast majority of them, that’s true. The npm security team and the JavaScript community at large exercise a high degree of vigilance over the hygiene of the massive shared code library that has made JavaScript the most popular and powerful software platform.
The vast majority of packages are safe… but, some aren’t. And, some packages that used to be safe can become compromised. Even in a safe neighborhood, it’s best to lock your car.
In the following post, we look at good practices and behaviors from an application security expert’s point of view: things you should do whenever you start a project, as well as on a regular cadence throughout the life of your projects.
The Dependency Iceberg
It’s no secret that open source JavaScript modules tend to be highly… modular. It’s not uncommon for a single declared dependency to include hundreds of transitive dependencies. The dependencies you specifically declare are just the tip of the iceberg, so to speak. It’s the 95% of the code that lies invisibly below the water line that can rip through your hull if you’re not careful.
Tumblr media
Fig. 1. Iceberg; not to scale
What can be done, then, to steer safely clear of potential danger? The answers depend on where you are in the lifecycle of your project.
At the beginning of your project and any time you add a new dependency, choose your dependencies carefully. As an ongoing part of development throughout the life of your project, update your dependencies responsibly.
Choose Carefully
Just like anything you download from the Internet, use care when choosing your dependencies. npm, Inc. and the JavaScript community at large continuously work to maintain registry hygiene, but it’s not a good practice to assume that everything in the registry is suitable for use. As a developer, it’s your responsibility to take due care in choosing your dependencies.
Use the package rating metrics and social cues available on the npm package page. High scores for popularity, quality, and maintenance are strong signals that a package is suitable for use.
Tumblr media
Fig. 2. package quality metrics
Review the version history of the package to ensure a healthy release cadence has been established by the package maintainers. A slowing or stalled release cadence or a shift in project personnel may be a sign of maintainer fatigue. Maintainer fatigue can make it easier for unscrupulous actors to take control of projects through social engineering. When maintainers have poured years of heart, soul, and sweat into a project for no money and little community support, an offer to switch control to a new party may be too tempting to invite thorough vetting before handing over the keys. While not common, custody transitions have resulted in malicious code entering once-trusted libraries and could certainly happen again.
Review npm Advisories and Use npm audit
When you install a package from the public registry, npm will report any security vulnerabilities:
Tumblr media
Use npm audit to check the vulnerability status of your candidate dependency set:
Tumblr media
If there’s a vulnerability, read the advisory to judge whether or not it’s a show-stopper and if there are patched versions available.
Update Responsibly
Pin Dependencies as Narrowly as Possible
When you install a dependency without specifying a version, npm installs the latest version and sets the dependency’s entry in package.json to ^x.y.z, which will automatically update the dependency to the latest minor version within the scope of the specified major version whenever you perform a fresh install or npm update.
The minor-version range provided by this default behavior strikes a balance between safety and reproducibility on the one hand and the benefits of bug fixes on the other. How widely or narrowly to pin dependencies is by no means a settled issue, but from a security perspective, the narrower the better.
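For illustration, a dependencies block might look like the following sketch (the package names and versions here are made up). The caret range floats to the newest minor release within major version 1, while the exact version never drifts on a fresh install, at the cost of requiring deliberate updates:

{
  "dependencies": {
    "some-widget-lib": "^1.3.0",
    "some-parser": "2.4.1"
  }
}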
Also, commit your package-lock.json file to source control. This will ensure that all transitive dependencies are pinned, as well.
See the npm documentation for more details on package locking.
Update Tempo
As your project matures, establish a steady tempo of updating your dependencies. If you update too slowly, you may miss important security updates and expose yourself to a widening window of vulnerability. If you update too aggressively, you don’t give the ecosystem’s immune system time to react to potential malware. Aim for the goldilocks zone when establishing your update tempo.
Don’t automatically install random stuff from the Internet. Before committing to a dependency update, check to see if there are any security advisories for it. In addition to showing a history of security advisories for all packages, the advisories page on npmjs.com supports searching for advisories on particular packages, e.g.:
https://www.npmjs.com/advisories?search=eslint-utils
You can also pull a package down for further inspection without installing and/or install without running install scripts, then perform npm audit.
To install without running package scripts, run:
npm install --ignore-scripts
You can do this in a test/dummy project to avoid accidentally installing an unexpected package into a working project.
To run an audit without installing anything, run:
npm install --package-lock-only
then:
npm audit
What We Do Behind The Scenes
Ultimately, the practices of choosing, using, and updating open source software responsibly can only be done by the dependent application developer. But, there is much activity happening behind the scenes at npm to keep the registry safe.
We continually analyze incoming packages for potential risks. We watch install scripts and network activity, among other things, looking for suspect behavior, and we perform delta analysis on new versions of packages. We guard against spam or junk packages and screen for typosquatting attempts to make sure that registry users don’t install unintended software. We have the world’s largest (and growing) corpus of historical JavaScript malware to research against. This continuous practice of defense against the dark arts not only maintains registry hygiene, but spins out new and better tools for the entire community to use to improve security.
Perhaps the most important thing we do, though, is to respond quickly to vulnerability reports from the community and alert the maintainers when there’s an issue so a patch can be issued. Like a neighborhood watch program, the vigilance of the entire community is critical to the continued safety and usability of our shared resources.
Open source software contributions do not come only in the form of code commits. If you find a vulnerability in a published package, you can do your part by reporting it to us so others can benefit from your research. We take immediate action on such reports to investigate the nature and extent of the vulnerability as well as to notify maintainers to give them time to fix the issue before disclosing it to the public and removing any vulnerable packages from the registry as necessary.
You can report vulnerabilities here:
https://www.npmjs.com/advisories/report
or by emailing [email protected]
Do You Have a Headache Yet?
npm Enterprise can help. Learn how npm Enterprise automatically ensures security and compliance ›
4 notes · View notes
amandaallen · 4 years ago
Text
Introduction to Azure Pipelines
Technology adoption is an ongoing process whose main aim is to reduce human effort. People strive for results that enhance productivity and ease the process, and the right tools make that far easier to achieve. This is where Microsoft’s Azure DevOps SaaS comes in.
Microsoft Azure DevOps came into existence in 2018, but it has been in this industry far longer than that: its origins can be traced to Visual Studio Team System, launched in 2006.
What is Azure DevOps?
Azure DevOps is a comprehensive tool that provides different services across the software development lifecycle. A few advantages of Azure DevOps Services worth focusing on:
Azure boards allow agile planning, power BI visualization, other standard reporting tools, and tracking work items.
Azure pipelines deal with Continuous Integration and Continuous Deployment with the help of Kubernetes and containers.
Azure Repos offers cloud-hosted private source code repositories.
Azure Artifacts provides package management support to various public and private sources like npm, Maven, NuGet, and Python.
Azure Test Plans offers integrated planning and exploratory testing solutions.
A fully featured and mature product, Microsoft Azure DevOps aims to help businesses manage many of these tasks at once.
What are Azure Pipelines?
Azure Pipelines is an automated service that helps developers build, test, and deploy their code to various platforms. It is a continuous delivery tool, comparable to open-source options such as CodeShip or Jenkins. The main purpose of Azure Pipelines is to remove the need for human intervention: changes are built and deployed automatically. Where there is human involvement, there is a window for errors; with automation, everything runs consistently.
Azure Pipeline can be classified as:
Source control
Build tools
Package creation
Configuration management
Monitoring
Azure DevOps Pipelines can be used with applications written in Java, Go, .NET, Python, XCode, Node.js, and C++. In addition, Azure Pipelines works with Azure Repos, Bitbucket, GitHub, Subversion, and other source providers. It also handles delivery by testing code and deploying it to suitable targets.
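To make this concrete, here is a minimal, hypothetical azure-pipelines.yml for a Node.js project; the branch name, Node version, and npm scripts are assumptions for the sketch, not part of the original article:

# minimal sketch of an azure-pipelines.yml for a Node.js project
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '16.x'
    displayName: 'Install Node.js'
  - script: npm ci
    displayName: 'Install dependencies'
  - script: npm test
    displayName: 'Run tests'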
Azure DevOps Pipeline Terminology
Continuous Integration
Continuous Integration detects issues in the early stages of the development process, when they are easiest to fix. Developers can also cross-check the code under test and detect bugs or issues early. The basic advantages comprise the following:
Small tweaks simplify the merging of codes.
Feasible for the team to check what each one is dealing with.
Detect errors and resolve them.
Constant merging of codes and Testing.
Eases the Integration process for better productivity.
Continuous Delivery
Continuous Delivery enables the developers to deliver the latest features, configure changes, and resolve bugs faster. Routine deliveries are also provided by CD pipelines according to the requirement. The main advantages are:
Reduced release risks
Faster bug fixes
Feasible delivery
Continuous Testing
Continuous Testing refers to the procedure of managing continuous tests. No matter if the app is on the cloud or on-premise, it is convenient to automate the build-deploy-test workflows, select technologies, and test changes faster and efficiently. Continuous Testing assists in detecting issues during the development procedure. Moreover, CT and TFS keep a regular check on the app.
Pillars of Azure DevOps
To start and function, the Azure DevOps pipeline needs a trigger.
A pipeline is made up of one or more stages, which can map to different environments.
Each stage contains one or more jobs.
Each job runs on a specific agent.
A step can be a task or a script.
A task is a pre-packaged script that performs an action.
A run publishes groups of files, called artifacts.
Packages format
Publish Maven, NuGet or npm packages so others can consume them. You can even rely on various package management tools.
Necessities to use Azure Pipelines
An Azure DevOps organization
To have the required source code stored in a version control system
Features of Azure DevOps
Language friendly
Build, test, etc., and runs parallel on macOS, Windows, and Linux.
Kubernetes
Easily develops and deploys Containers and Kubernetes to hosts.
Extensible
Explore and implement tasks and tests to Slack extensions.
Cloud Deployment
Execute continuous delivery of the software to the cloud.
Benefits of Azure DevOps

Speed
DevOps enables teams to release updates faster.
Rapid Delivery
Azure DevOps immediately deploys the cloud.
Reliability
You can stay updated about real-time performances.
Scalability
It assists in managing development efficiently.
Efficient collaboration
Azure DevOps permits us to collaborate efficiently.
Security
Automated compliance policies define and track compliance.
Artifacts in Azure Pipeline
Artifacts are the packaged outputs of a build: files containing machine instructions rather than human-readable source code. Azure Artifacts offers safe, fast binary packages that can be consumed with ease. A feed is a container for packages that assists in publishing and consumption.
Cost of Azure DevOps
Azure DevOps pricing covers two kinds of costs: individual services and user licenses.
With individual services, users get one Microsoft-hosted CI/CD offering and can opt for up to 2 GB of Azure Artifacts storage.
User licenses come in two plans: a Basic plan, and a Basic plan with Test Plans. The Basic plan comprises core features and is free for the first five users; after that, it costs approximately $6 per user every month. The latter costs about $52 per user every month from the first user onward.
Winding-up
It is important to get the hang of Azure Pipelines before adopting Azure DevOps services more broadly. The platform applies a range of optimizations and divides workflows into well-defined stages. You can also hire reliable developers to manage your Azure projects without hassle.
0 notes
simplythetest · 4 years ago
Text
Making Poetry: Why I Like Poetry over Pipenv
Once upon a time, Python did not have a decent command-line package manager like the vaunted npm for NodeJS.
That was then. Now there are good options in the Python world for package managers.
Recently, I've been able to try out both Pipenv and Poetry for some projects of mine. I feel like I've given both a try, and I prefer Poetry over Pipenv.
What's Good, Overall
Both provide essentially the same good functionalities such as
creating and managing a lock file for deterministic builds,
allowing for pinning module versions, as well as just taking the latest version by default, and
having a good CLI and workflow for working with Python projects.
Pipenv - The Good Parts
Pipenv was the first package manager tool that I tried of these two. It definitely appealed to me, since it promises to combine pip (for dependency management) and virtualenv (for environment management). Hence the name.
Pipenv is really nice to use "off the shelf". If you are used to a Python workflow where you create a virtual environment first, then install dependencies, then start developing, Pipenv adds some efficiency. Instead of creating a requirements.txt file, you instead create a Pipfile file and list dependencies there. You can also run pipenv install <module-name> to add packages from command line and to your Pipfile as needed. These are good features.
As well, you can either open a virtual environment in place according to the Pipfile configuration using pipenv shell or run a script added to the Pipfile by running pipenv run <script-name>. These are both handy utilities. Sometimes it's helpful to poke around and debug inside the virtual environment itself, and defining a script that will be executed regularly can be helpful. A script is a Python-based command. Some example of scripts defined in the Pipfile look like the following,
[scripts]
build = "python setup.py install"
lint = "flake8 ."
unit-test = "pytest test/unit/"
which can then be called using a command like
pipenv run build
pipenv run lint
pipenv run unit-test
This is definitely helpful for projects that have a few well-defined functionalities that need to be executed regularly. As of this writing, there doesn't appear to be any kind of auto-complete or suggestion features, which would be a nice touch given that scripts need to be defined in the Pipfile.
Poetry - The Good Parts
As a project, Poetry does seem a bit more polished and buttoned up on first glance. The initial landing page and documentation is clean and modern. Definitely a good sign of things to come.
Poetry does make use of a configuration file called pyproject.toml. This file contains information such as dependencies and their versions, as well as information around PyPI publishing. You can optionally define scripts that can be executed, similar to how Pipenv handles scripts, but Poetry also allows for specific command line calls to be made directly. This is actually one of the two killer features for Poetry for me.
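For reference, a minimal pyproject.toml might look something like this sketch; the project name, author, and version constraints are invented for the example:

[tool.poetry]
name = "my-project"
version = "0.1.0"
description = "An example project"
authors = ["Jane Doe <jane@example.com>"]

[tool.poetry.dependencies]
python = "^3.9"
requests = "^2.28"

[tool.poetry.dev-dependencies]
flake8 = "*"
pytest = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"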
For example, suppose I have a lint command defined that will lint the entire project, but I want to specifically to lint just the tests directory. In poetry I can accomplish this by using the command line directly like
poetry run flake8 tests/
Poetry does allow adding dependencies from the command line using poetry add <module-name>, or specific versions using something like poetry add <module-name>@1.2.3.
As mentioned, Poetry does have built-in functionality to publish packages in addition to installing and building packages. This can be done directly by calling the simply named poetry publish. This is helpful for projects with multiple folks who need to publish, or just to simplify the existing workflows for publishing to PyPi.
poetry publish
Easy breezy lemon squeezy.
Why I like Poetry Better
While this isn't an exhaustive comparison, I think I prefer Poetry over Pipenv for one small thing and one big thing.
The small thing is package publishing. I have consulted the same section of the same Medium article for how to publish to PyPI for a while now. I almost always forget some step. Poetry packages this process up neatly (pun intended) which makes the publishing process a bit smoother. You can also configure the pyproject.toml with credentials for team workflows and build pipelines if needed.
The bigger reason is how Poetry handles virtual environments.
Poetry almost fully encapsulates virtual environments. You almost don't even know you're using one, for whatever tasks that you end up executing. Dependency installation, development, building and publishing is all done via the poetry <commands> CLI. Where virtual environments are kept and how they are managed is basically invisible to the end user, including in the cases where different Python versions are used. Create the pyproject.toml and off you go.
Pipenv does not take this approach, and does expose some of the virtual environment(s) to end users. This feels natural at first if you're comfortable with virtual environment usage, but eventually it just gets cumbersome.
For example, consider the pipenv shell command. This command opens a virtual environment instance, which is a typical Python workflow. Using the shell command works great, until it doesn't. Pipenv manages virtual environments itself but will also try to use existing local virtual environments as well. If there is an existing virtual environment directory in the project, Pipenv sometimes errors if it cannot cleanly create or select the "default" virtual environment Pipenv created. This can lead to less than ideal situations. As well, if Pipenv is managing a virtual environment, resetting dependencies or even deleting the virtual environment and starting over (a hack I use sometimes with virtual environments) is difficult or tricky.
Another issue that arises out of this lack of abstraction is running one-off or non-standard script commands. Suppose I'm back at my scenario above where I want to lint only the tests directory. In Pipenv the options I have are
create a new script in the Pipfile just for this task (annoying),
manually update the pipenv run lint script for my one use case (annoying and possibly easy to check-in to version control, causing shenanigans), or
run the command via pipenv shell (see above for problems in this case).
Poetry avoids these situations by abstracting the entire virtual environment concept away. There is no poetry shell command or analog, since the usage of a virtual environment is completely hidden. These edge cases aren't all that edgy really, and Poetry is well thought out from this perspective.
I do think Pipenv is a decent tool, but it is better suited to smaller Python projects or relatively static projects where the same scripts and commands are called often without any modification or updates.
Both tools have their place, but sometimes what you really need is just a hint of encapsulation, as a treat.
1 note · View note
thinktanker · 4 years ago
Text
Very Important Things to know about Node.js 15
Tumblr media
We have all heard of Node.js, which was first introduced in 2009. Since then, many versions of the platform have been released, and the latest is Node.js 15, which replaces Node.js 14. Long-term support will, however, remain available for Node.js 14. Innumerable developers and web development organizations use this back-end, open-source, cross-platform JavaScript runtime. You can hire Node.js development professionals to leverage the benefits offered by the new version.
The best thing about Node.js is its event-driven design combined with asynchronous I/O. As a result, the web application development process becomes simpler and more streamlined. Many leading brands are adopting the new version of Node.js, which brings various improvements and changes with better features and attributes. In this article, we will discuss some interesting improvements and features of Node.js 15.
Interesting new features of Node.js 15
Node.js 15 comes with some brilliant new features. Some prominent ones among them include:
Abort controller
An experimental implementation of AbortController ships in Node.js 15. It is effective for canceling certain promise-based APIs. AbortController is a global utility class modeled on the corresponding Web API. With its help, developers can terminate one or more pending requests prematurely whenever required.
Cancellation via AbortController had been discussed for a long time, and now it has been implemented. As the feature stabilizes, the list of APIs that support aborting will surely expand.
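A minimal sketch of the pattern, assuming Node.js 15+ where AbortController is available as an experimental global:

// Create a controller and hand its signal to whatever work may need canceling.
const controller = new AbortController();
const { signal } = controller;

signal.addEventListener("abort", () => {
    console.log("operation was aborted");
});

// Later, something decides the pending work is no longer needed:
controller.abort();
console.log(signal.aborted); // true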
NPM 7
When it comes to Node.js 15, the most prominent change is the inclusion of npm 7. npm is the package manager of JavaScript and, if you did not know, backs one of the largest software registries anywhere. npm 7 comes with unique features, among which workspaces deserve special mention: workspaces support creating and managing multiple npm packages within a single file system. There have also been significant internal changes in npm 7 that enhance maintainability and reliability.
Some features of NPM 7 include:
Default peer dependencies
Previously, peer dependencies had to be installed manually to resolve various issues. In npm 7, this is improved: peer dependencies are installed automatically by default.
Workspaces
Workspaces help in managing multiple packages from a single top-level root package. A much-awaited feature, it arrives alongside the new package-lock.json format, support for the automatic installation of peer dependencies, and support for reading the yarn.lock file. A minimal root configuration is sketched below.
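A hypothetical root package.json using npm 7 workspaces; the project name and folder layout are assumptions:

{
  "name": "my-monorepo",
  "version": "1.0.0",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}

Running npm install at the root then installs the dependencies of every package under packages/ and links the workspaces into a single node_modules tree.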
New language features with the help of v8.6
Node.js 15 ships with the V8 JavaScript engine updated from 8.4 to 8.6, bringing several important highlights and performance upgrades. V8 is the primary JavaScript engine on which Node.js runs, and it determines which JavaScript language features are available to developers. Some of the new language features in this version reduce the amount of code that needs to be written and make that code more readable.
N-API Version 7
Changes in Node.js 15 make it easier to build, distribute, and maintain native modules, otherwise known as add-ons. Red Hat and IBM are dedicated, ongoing contributors to N-API. Node.js 15 ships N-API version 7, which adds extra methods for working with ArrayBuffers.
One popular approach to consuming N-API is node-addon-api, which currently sees more than 2 million downloads every week, a respectable figure for sure. Crucially, with N-API version 7, stability can be maintained across Node.js versions and, more importantly, across different compiler levels. To make the most of such features, it helps to hire the right Node.js development company.
Throw out unhandled rejections
Node.js 15 brings one notable change: the handling of unhandled rejected promises. In previous versions, a warning was emitted if a rejected promise was not handled explicitly. In Node.js 15, the default mode for unhandled rejections changes to raising an uncaught exception, which terminates the application.
An error is now thrown instead of a warning, which might prove to be a nasty surprise in production if Node.js gets updated without awareness of this change. There is a global handler for the unhandledRejection event; by adding it, you can capture unhandled rejections and decide how you want to continue. A handler along those lines logs the event and keeps the application running, like all previous Node.js versions.
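A minimal sketch of such a global handler; the logging and process-exit policy are up to you:

// Capture rejections that no .catch() handled and decide how to proceed.
process.on("unhandledRejection", (reason, promise) => {
    console.warn("Unhandled rejection:", reason);
    // log, report to monitoring, or exit here as appropriate
});

// Without the handler above, this would crash a Node.js 15 process.
Promise.reject(new Error("boom"));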
Experimental support for QUIC protocol (HTTP/3)
Node.js 15 adds experimental support for the QUIC protocol, a UDP-based transport protocol for HTTP/3 designed by Google, with built-in security via TLS 1.3. It supports flow control, multiplexing, connection migration, and error correction, and it aims to reduce latency compared with TCP. In Node.js 15 this support is experimental and enabled through the QUIC build configuration flag.
Experimental diagnostics channel module
Another exciting feature of Node.js 15 is diagnostics_channel, a new experimental module. This module enables the publish-subscribe pattern: developers can use it to publish arbitrary data to a named channel, which other modules or applications can then consume. The module is deliberately generic so that it can be used in myriad ways, and it can be imported and used immediately.
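A minimal sketch of the experimental module; the channel name and message shape are invented for the example:

// A consumer (an APM agent, say) subscribes to a named channel…
const diagnostics_channel = require("diagnostics_channel");

const channel = diagnostics_channel.channel("my-app:requests");

channel.subscribe((message, name) => {
    console.log(`[${name}]`, message);
});

// …and the publisher broadcasts arbitrary data when something noteworthy happens.
if (channel.hasSubscribers) {
    channel.publish({ url: "/users", durationMs: 42 });
}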
It is expected that more Node.js 15 patches and minor releases will come out in April 2021.
However, after that a new version of Node.js will be seen in the form of Node.js 16. This will be the next LTS release. Till then, developers will be happy using Node.js 15 for its amazing features.
Source:
https://www.thinktanker.io/blog/very-important-things-to-know-about-node-js-15.html
0 notes
psychicanchortimemachine · 4 years ago
Text
TOP 10 OF ANGULAR 10
Angular version 10 was launched on the twenty-fourth of June, 2020, this time starting with a beta release, which implies that we are nearing the final launch of the newest version of the Google-developed, TypeScript-based framework. Version 10.0.0 is a major release that spans the framework, Angular Material, and the CLI. It has been only four months since version 9.0 of Angular was launched. Let's explore the new features of Angular 10…
Here’s a list of top ten features of the newly updated Angular 10
1. Language Service and Localization
The language-service-specific compiler allows multiple typecheck files by using the project interface, which creates ScriptInfos as necessary. Autocompletion has also been removed from within HTML entities (such as &, <, etc.) in order to safeguard the core functionality of Angular LS, since it held questionable value and came at a performance cost. One of the finest capabilities of Angular 10 is support for merging multiple translation files, where previous versions could only load one file per locale. Clients can now specify multiple files per locale, and the translations from those files are merged by message ID. In practice, this means you want to put the files in order of importance, with fallback translations later.
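As a rough sketch of what this might look like in angular.json (the locale name and file paths are invented, and the exact shape should be checked against the Angular i18n docs):

"i18n": {
  "sourceLocale": "en-US",
  "locales": {
    "fr": {
      "translation": [
        "src/locale/messages.fr.xlf",
        "src/locale/messages.fr.fallback.xlf"
      ]
    }
  }
}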
2. Router
The CanLoad guard can now return a UrlTree in Angular 10. A CanLoad guard returning a UrlTree cancels the current navigation and redirects, which matches the behaviour already available in CanActivate guards. This does not affect preloading: routes with a CanLoad guard are not preloaded, and the guards are not executed as part of preloading.

3. Service Workers and Bug Fixes

In previous versions of Angular, Vary headers were taken into account when retrieving resources from the cache, but this often prevented cached assets from being retrieved at all and led to unpredictable behaviour because of inconsistent or buggy implementations in various browsers. In Angular 10, Vary headers are ignored when retrieving resources from the ServiceWorker caches, which can result in resources being retrieved even when their headers differ. If your application needs to differentiate its responses based on request headers, make sure the Angular ServiceWorker is configured to avoid caching the affected resources. This release also includes a number of bug fixes, among them the compiler avoiding undefined expressions in a holey array, the core avoiding a migration error when a nonexistent symbol is imported, and proper identification of modules affected by overrides in TestBed.
4. Warnings about CommonJS imports

Starting with version 10, the CLI warns you when your build pulls in a CommonJS package, since such packages can result in larger, slower applications. If you have started seeing these warnings for your dependencies, let the dependency's maintainer know that you would prefer an ECMAScript module (ESM) bundle.

Converting pre-Ivy code: all pre-Ivy dependencies from npm ought to be converted to Ivy dependencies, which is supposed to happen as a precursor to running ngtsc on the application. All subsequent compilation and linking operations are then made against the transformed versions of the dependencies.
5.Optional Stricter Settings
Angular version 10 offers a stricter project setup when creating a new workspace with ng new:

ng new --strict

Once this flag is enabled, it initializes the new project with a handful of new settings that improve maintainability, help catch bugs ahead of time, and allow the CLI to perform advanced optimizations on your app. Enabling this flag also configures the app as side-effect free to enable more advanced tree-shaking.
What the flag does?
Enables strict mode in TypeScript
Turns template type checking to Strict
Default bundle budgets have been reduced by ~75%
Configures linting rules to prevent declarations of type any
Configures your app as side-effect free to enable more advanced tree-shaking
6. New browser configuration
The browser configuration for new projects has been updated to exclude older and less-used browsers. This has the side effect of disabling ES5 builds by default for new projects. To enable ES5 builds and differential loading for browsers that require them (such as IE or UC Browser), simply add the browsers you need to support in the .browserslistrc file.

7. TypeScript 3.9, TSLib 2.0 & TSLint v6
Angular 10 features TypeScript 3.9, whereas the previous version supported TypeScript 3.6, 3.7, and 3.8. TypeScript is a language that builds on JavaScript by adding syntax for type declarations and annotations, which the TypeScript compiler uses to type-check our code. This in turn produces clean, readable JavaScript that runs on lots of different runtimes. TypeScript also enables rich editing functionality across editors. With TypeScript 3.9, the team has worked on performance, polish, and stability. Apart from error-checking, this version powers things like completions and quick fixes, and speeds up the compiler and the editing experience.
8.Deprecations
The Angular Package Format no longer includes ESM5 or FESM5 bundles, saving 119 MB of download and install time for Angular packages and libraries. These formats are no longer needed, as any downleveling to support ES5 is done at the end of the build process. Support for older browsers, including IE 9, IE 10, and Internet Explorer Mobile, is deprecated.
9. Flags and logic
The logic for formatting day periods that cross midnight has been updated, so a time is now matched to a day period that extends past midnight. Applications that use formatDate() or DatePipe with the b and B format codes are likely to be affected by this change. Another point under this section: any resolver that returns EMPTY will now cancel navigation. To allow navigation to continue, resolvers must emit some value, for example by using defaultIfEmpty(...). The Angular packages on npm no longer include certain jsdoc comments needed to support the Closure Compiler's advanced optimizations. Support for Closure Compiler in applications has been experimental and broken for some time. Those who use Closure Compiler are likely better off consuming Angular packages built from sources directly rather than consuming the versions published on npm. As a temporary workaround, users can keep their existing build pipeline with the Closure flag --compilation_level=SIMPLE. This flag ensures that the build pipeline produces buildable, runnable artifacts, at the cost of an increased payload size, because advanced optimizations are disabled.
10. New Date Range Picker
Angular Material now includes a brand new date range picker. To use it, you can use the mat-date-range-input and mat-date-range-picker components, as sketched below.
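A minimal template sketch, assuming Angular Material 10 with the datepicker module and a date adapter (e.g. MatNativeDateModule) already imported; the labels and placeholders are invented:

<mat-form-field appearance="fill">
  <mat-label>Enter a date range</mat-label>
  <mat-date-range-input [rangePicker]="picker">
    <input matStartDate placeholder="Start date">
    <input matEndDate placeholder="End date">
  </mat-date-range-input>
  <mat-datepicker-toggle matSuffix [for]="picker"></mat-datepicker-toggle>
  <mat-date-range-picker #picker></mat-date-range-picker>
</mat-form-field>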
We will be happy to answer your questions on designing, developing, and deploying comprehensive enterprise web, mobile apps and customized software solutions that best fit your organization needs. As a reputed Software Solutions Developer we have expertise in providing dedicated remote and outsourced technical resources for software services at very nominal cost. Besides experts in full stacks We also build web solutions, mobile apps and work on system integration, performance enhancement, cloud migrations and big data analytics. Don’t hesitate to
get in touch with us!
0 notes
npmjs · 6 years ago
Text
Supporting Open Source Maintainers
Part of npm, Inc.’s mission is to ensure the sustainability of the Open Source JavaScript ecosystem, and without fair compensation for developers, sustainability is impossible in the long term. For both practical and ethical reasons, those who consistently contribute to the open source commons should be compensated.
Over the past couple of years, we’ve observed a number of models emerging that enable a path towards sustainability for Open Source maintainers. Most notably: OpenCollective & GitHub Sponsors.  We at npm are in full support of both these initiatives, and intend to collaborate further with these organizations.
We believe the challenge in the JavaScript community is three-fold:
1. Any funding platform must strike the proper balance between making it easy to fund a publisher, without being intrusive or breaking the development lifecycle.
2. The size and depth of dependency graphs in the npm registry mean that funding high-visibility projects is not sufficient, if their dependencies are also not supported. This is an interconnected ecosystem, and just rewarding the stars will not solve the problem.
3. Despite a significant dependence on Open Source, and a widespread understanding of the business benefits of financial sustainability of the Open Source commons, large enterprise consumers are not engaged in a meaningful way for Open Source work to be quantified and measured.
As a result, past experiments in this area have typically been overly disruptive, inadequately distributed, or ultimately ineffective. 
npm, Inc. is uniquely positioned to address these challenges and ensure a fair and collaborative approach to funding Open Source maintainers.
1. npm is integral to the software development lifecycle of JavaScript developers everywhere.  We can make it easy for consumers to fund publishers who are in need of support, without resorting to hacks or workarounds that disrupt the development workflows.
2. npm has clear visibility into which dependencies throughout the tree are used by an application, even if the author of that application is only aware of the top-level dependencies. We can distribute funding support fairly to those who may be overlooked.
3. npm has clear visibility into the extent to which a given enterprise uses a set of dependencies. We are already engaged with many of the largest consumers of Open Source JavaScript. Most of them want to do the right thing, and we can help them understand what that is.
We are excited to announce that it is our intention to finalize and launch an Open Source funding platform by the end of 2019. Over the past couple months there has been a great deal of definition and work done by our engineering team to improve and grow our underlying registry systems, to make launching programs such as this possible. We have also made improvements to our policies to better address the gaps and improve our ability to continue our mission to ensure the sustainability of the Open Source JavaScript ecosystem.
Now we are ready to invite the community’s most active contributors and the biggest enterprise consumers of public open source code to a working group to finalize the platform's definition.
Next week, Ahmad, Isaac, and I will be reaching out in order to get the right expertise around the table, with the goal of sharing the framework by late September. If you are interested in participating, especially if you are part of an organization that is a large net consumer of packages and you are looking to fund contributors, please reach out to us and we will try to get you involved.
We know this has been a long time coming. And, the time is now!
If you have questions or comments, please send mail to [email protected].
4 notes · View notes