#keymetrics
digitaljahnavi · 4 months
How to Analyze the Performance of a Digital Marketing Agency
Website Traffic and User Engagement
User engagement and website traffic are key metrics. They show how effectively your agency drives traffic and how visitors interact with your content. (A sketch that computes several of these metrics from raw session data follows this list.)
Unique Visitors: Counts the number of distinct people who visit your website.
Page Views: Tracks the total number of pages viewed.
Bounce Rate: The percentage of visitors who leave after viewing only one page.
Average Session Duration: The average amount of time visitors spend on your website per session.
Pages Per Session: The average number of pages a visitor views in a single session.
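As a rough illustration, here is a minimal TypeScript sketch that derives these metrics from raw session records. The Session shape and the sample data are hypothetical - they are not tied to any particular analytics tool.

// Hypothetical session record: one row per visit.
interface Session {
  visitorId: string;
  pagesViewed: number;
  durationSeconds: number;
}

const sessions: Session[] = [
  { visitorId: "a", pagesViewed: 1, durationSeconds: 12 },
  { visitorId: "b", pagesViewed: 4, durationSeconds: 210 },
  { visitorId: "a", pagesViewed: 2, durationSeconds: 95 },
];

// Unique visitors: distinct visitor IDs across all sessions.
const uniqueVisitors = new Set(sessions.map(s => s.visitorId)).size;

// Page views: total pages seen across all sessions.
const pageViews = sessions.reduce((sum, s) => sum + s.pagesViewed, 0);

// Bounce rate: share of sessions that viewed exactly one page.
const bounceRate = sessions.filter(s => s.pagesViewed === 1).length / sessions.length;

// Average session duration (seconds) and pages per session.
const avgSessionDuration = sessions.reduce((sum, s) => sum + s.durationSeconds, 0) / sessions.length;
const pagesPerSession = pageViews / sessions.length;

console.log({ uniqueVisitors, pageViews, bounceRate, avgSessionDuration, pagesPerSession });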
digitalzaa · 10 months
Social Media Success: Navigating Key Metrics for Effective Brand Building
Social media has become a powerful tool in the ever-changing field of digital marketing, helping businesses engage with their customers, increase brand recognition, and drive growth. The secret to social media success, however, is not just producing interesting content; it is also understanding and using the key metrics that show how well your brand-building efforts are working.
This section examines the details of social media analytics and the ways businesses can use them to build their brands effectively.
Understanding the Landscape: The Role of Social Media in Brand Building
Social media platforms have developed into digital marketplaces where businesses can interact directly with consumers. With billions of users across platforms, social media offers brands a unique opportunity to establish and strengthen their identity. Successful brand building on social media depends on maintaining a consistent, authentic presence, developing genuine relationships, and offering value to the audience.
Key Metrics for Social Media Success
To measure the impact of your brand-building efforts on social media, it's crucial to track and analyze key metrics. Here are some of the most important metrics to consider:
Engagement Rate:
Engagement rate measures how much interaction your content receives: likes, comments, shares, and clicks.
A high engagement rate suggests that people are actively interacting with your content, which can boost brand awareness and loyalty.
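As a minimal sketch (with made-up interaction counts), the engagement rate can be computed like this in TypeScript. Note that tools differ on the denominator: some divide by followers, others by reach or impressions.

// Hypothetical per-post interaction counts.
const post = { likes: 120, comments: 14, shares: 9, clicks: 31 };
const followers = 5000;

// Engagement rate: total interactions divided by audience size.
const interactions = post.likes + post.comments + post.shares + post.clicks;
const engagementRate = (interactions / followers) * 100;

console.log(`Engagement rate: ${engagementRate.toFixed(2)}%`); // 3.48%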
Reach and Impressions:
Reach is the number of unique users who have seen your content, while impressions count the total number of times it has been displayed.
Monitoring reach and impressions gives you insight into your brand's visibility and your content's potential influence on a wider audience.
Follower Growth:
Follower growth tracks the increase or decrease in your social media audience over time.
A growing follower base suggests that your brand is gaining popularity and attracting a larger audience, while a decline may indicate the need for strategy adjustments.
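As a quick sketch with hypothetical numbers, follower growth over a period reduces to a simple percentage change:

// Hypothetical follower counts at the start and end of a reporting period.
const followersStart = 4200;
const followersEnd = 4557;

// Growth rate: net change relative to the starting audience.
const growthRate = ((followersEnd - followersStart) / followersStart) * 100;

console.log(`Follower growth: ${growthRate.toFixed(1)}%`); // 8.5%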
Click-Through Rate (CTR):
CTR measures the percentage of people who click on a link in your post, taking them to another destination.
A higher CTR indicates that your content is compelling and encourages users to take action, whether it's visiting your website, signing up for a newsletter, or making a purchase.
Conversion Rate:
Conversion rate tracks the percentage of users who complete a desired action, such as filling out a form or making a purchase.
Knowing your conversion rate makes it easier to assess how well your social media marketing efforts generate concrete outcomes for your business.
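Here is a minimal sketch of both ratios with hypothetical funnel counts; the numbers are placeholders, not benchmarks.

// Hypothetical funnel counts for one campaign post.
const impressions = 20000;
const linkClicks = 480;
const conversions = 36; // e.g. completed sign-up forms

// CTR: share of impressions that produced a click.
const ctr = (linkClicks / impressions) * 100;

// Conversion rate: share of clicks that completed the desired action.
const conversionRate = (conversions / linkClicks) * 100;

console.log(`CTR: ${ctr.toFixed(2)}%, conversion rate: ${conversionRate.toFixed(2)}%`);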
Sentiment Analysis:
Sentiment analysis gauges the overall sentiment of social media mentions related to your brand, whether positive, negative, or neutral.
Monitoring sentiment helps you assess the public perception of your brand and allows for timely responses to maintain a positive reputation.
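Real sentiment analysis relies on NLP models; the toy sketch below only illustrates the net-sentiment idea, assuming mentions have already been labeled by such a model.

// Hypothetical brand mentions, pre-labeled with a sentiment.
type Sentiment = "positive" | "negative" | "neutral";
const mentions: Sentiment[] = ["positive", "positive", "neutral", "negative", "positive", "neutral"];

const count = (s: Sentiment) => mentions.filter(m => m === s).length;

// Net sentiment: positive share minus negative share.
const netSentiment = ((count("positive") - count("negative")) / mentions.length) * 100;

console.log(`Net sentiment: ${netSentiment.toFixed(0)}%`); // 33%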
Social Listening:
Social listening involves monitoring social media platforms for mentions of your brand or relevant keywords to gather insights into customer preferences and industry trends.
By actively listening to conversations, you can adapt your brand strategy, address customer concerns, and stay ahead of emerging trends.
Strategies for Leveraging Social Media Metrics
Set Clear Objectives:
Define specific and measurable goals for your social media campaigns, whether it's increasing brand awareness, driving website traffic, or boosting sales. Align your metrics with these objectives to track progress effectively.
Regularly Analyze Data:
Consistently monitor and analyze social media metrics to identify patterns, trends, and areas for improvement. Regular data analysis provides valuable insights into what resonates with your audience and informs future content strategies.
A/B Testing:
Experiment with different content formats, posting times, and messaging to identify the most effective strategies. A/B testing allows you to optimize your approach based on real-time data and audience preferences.
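A minimal sketch of comparing two variants by CTR follows; the variant names and counts are invented, and a real test would also check statistical significance before declaring a winner.

// Hypothetical results for two post variants shown to similar audiences.
interface Variant { name: string; impressions: number; clicks: number; }
const variants: Variant[] = [
  { name: "A: photo + short caption", impressions: 10000, clicks: 210 },
  { name: "B: video + question", impressions: 10000, clicks: 285 },
];

// Compare click-through rates across variants.
for (const v of variants) {
  const ctr = (v.clicks / v.impressions) * 100;
  console.log(`${v.name}: CTR ${ctr.toFixed(2)}%`);
}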
Audience Segmentation:
Understand your audience demographics and segment them based on factors such as age, location, and interests. Tailor your content to specific segments to enhance relevance and engagement.
Social Media Advertising:
Utilize paid advertising options on social media platforms to extend your reach and target specific audience segments. Track the performance of paid campaigns using metrics like click-through rate and conversion rate.
Engage with Your Audience:
Respond promptly to mentions, messages, and comments. Interacting with your audience builds brand loyalty and fosters a sense of community. Take note of customer feedback and apply it to refine your brand strategy.
Stay Updated on Platform Algorithms:
Social media platforms frequently update their algorithms, affecting the visibility of content. Stay informed about these changes and adapt your strategy accordingly to ensure optimal performance.
Conclusion: 
In the dynamic world of social media, success isn't one-size-fits-all. It requires a strategic, learning-focused approach to navigate metrics and understand your audience. By leveraging key metrics, businesses can measure brand impact and refine strategies for lasting success. 
There is much more to learn about maximizing social media impact, with an emphasis on engagement, reach, and audience dynamics. Remember, it's not just about numbers; it's about creating meaningful connections that strengthen your brand.
Click here for actionable insights to elevate your online presence and take the next step toward impactful social media success.
ltslean · 1 year
Continuous Improvement Metrics that are worth measuring
Measuring continuous improvement metrics can help organisations improve processes, enhance product quality, reduce costs, and foster a culture of innovation.
Read More: https://balancedscorecard.ltslean.com/software-blogs-details/continuous-improvement-metrics-that-are-worth-measuring
babyawacs · 2 years
@doescience @raptorscreech @iaeaorg Can we publish key metrics like explainability, precision, and predictability from their best ball models, before (new electron, new atom) vs. now, with the recent updates? I don't think people understand, especially the predictability. Wow. @science @sciam @all I am Christian KISS BabyAWACS – Raw Independent Sophistication #THINKTANK + #INTEL #HELLHOLE…
dig-iga · 3 years
There are two major changes coming with Mail Privacy Protection (MPP) in the upcoming iOS 15 and macOS updates.
easycloudbooks · 2 years
CA One of the important #keymetrics for #accountingfirms to keep track of their growth is annual #RecurringFees and this should be easy to track with recurring #feescontract which helps you in #autoinvoicing for contracted services. #CA #CS #CMA #ICAI https://t.co/9sMBKqDnKD
— EasyCloudBooks (@Easycloudbooks) Apr 30, 2022
from Twitter https://twitter.com/Easycloudbooks
webtrack360 · 4 months
Discover the key metrics and essential analytics needed to effectively measure customer engagement. Learn how to leverage data to enhance your marketing strategies and drive better customer experiences. Read more: https://webtrack360.com/customer-engagement-essential-analytics-and-key-metrics/
keeponcashflow · 6 years
A deeper look into one of your #favorite #realestate topics... #netoperatingincome #noi Whether #investing with others or by yourself, this is a #keymetric for #realestateinvestors to know. #personaldevelopment #keeponcashflow #blog #mindset #mindsetiseverything #dothework #multifamilyrealestate #apartmentinvesting #longdistanceinvesting #barcelona #charlotte #columbus (at Barcelona, Spain) https://www.instagram.com/p/Bptq1Yjl5Z1/?utm_source=ig_tumblr_share&igshid=14dd2l30wl1n1
macronimous · 6 years
Here are the #keyMetrics and templates you need to create a #PPC report https://t.co/z8NseIsKpA pic.twitter.com/gVpsmjTxEa
— Macronimous.com (@macronimous) August 23, 2018
from Twitter https://twitter.com/macronimous August 23, 2018 at 09:10PM via IFTTT
iyarpage · 7 years
Writing fast and safe native Node.js modules with Rust
TL;DR - Use Rust instead of C++ to write native Node.js modules!
RisingStack faced a shocking event last year: we reached the maximum speed that Node.js had to offer at the time, while our server costs went through the roof. To increase the performance of our application (and decrease our costs), we decided to completely rewrite it and migrate our system to a different infrastructure - which was a lot of work, needless to say.
I figured out later that we could have just implemented a native module instead!
Back then, we weren’t aware that there was a better method to solve our performance issue. Just a few weeks ago I found out that another option could have been available. That’s when I picked up Rust instead of C++ to implement a native module. I figured out that it is a great choice thanks to the safety and ease of use it provides.
In this Rust tutorial, I’m going to walk you through the steps of writing a modern, fast and safe native module.
The Problem with our Node.js Server Speed
Our issue began in late 2016, when we were working on Trace, our Node.js monitoring product, which was merged with Keymetrics in October 2017.
Like every other tech startup at the time, we were running our services on Heroku to save on infrastructure costs and maintenance. We were building a microservice architecture, which meant that our services communicated heavily over HTTP(S).
This is where the tricky part comes in: we wanted to communicate securely between the services, but Heroku did not offer private networking, so we had to implement our own solution. We looked into a few options for authentication, and the one we eventually settled on was HTTP signatures.
To explain it briefly: HTTP signatures are based on public-key cryptography. To create an HTTP signature, you take the parts of a request - the URL, the body, and the headers - and sign them with your private key. You can then give your public key to anyone who receives your signed requests so they can validate them. A minimal sketch of this flow follows below.
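As a rough illustration of that flow, here is a minimal TypeScript sketch using Node's built-in crypto module. The string-to-sign, host name, and body are made-up placeholders, and the canonical form is simplified - a real implementation would follow the HTTP Signatures specification.

import { createSign, createVerify, generateKeyPairSync } from "crypto";

// Toy key pair; a real service loads its private key from secure storage.
const { privateKey, publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Simplified string-to-sign built from parts of the request (placeholders).
const stringToSign = [
  "POST /api/v1/events",
  "host: internal.example.com",
  '{"event":"ping"}',
].join("\n");

// Sender: sign the request parts with the private key.
const signature = createSign("RSA-SHA256").update(stringToSign).sign(privateKey, "base64");

// Receiver: rebuild the same string and verify it with the sender's public key.
const valid = createVerify("RSA-SHA256").update(stringToSign).verify(publicKey, signature, "base64");
console.log(valid); // true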
Time passed, and we noticed that CPU utilization went through the roof in most of our HTTP server processes. We suspected the obvious reason - if you're doing crypto, that's just how it goes.
However, after doing some serious profiling with the v8-profiler we figured out that it actually wasn't the crypto! It was the URL parsing that took the most CPU time. Why? Because to do the authentication, we had to parse the URL to validate request signatures.
To solve this issue, we decided to leave Heroku (which we wanted to do for other reasons too) and create a Google Cloud infrastructure with Kubernetes & internal networking - instead of optimizing our URL parsing.
The reason for writing this story/tutorial is that just a few weeks ago I realized that we could have optimized URL parsing in another way - by writing a native library with Rust.
Naive developer going native - the need for a Rust module
It shouldn't be that hard to write native code, right?
Here at RisingStack, we've always said that we want to use the right tool for the job. To do so, we’re always doing research to create better software, including some on C++ native modules when necessary.
Shameless plug: I've written a blogpost about my learning journey on native Node.js modules too. Take a look!
Back then I thought that in most cases C++ was the right way to write fast and efficient software. However, now that we have modern tooling at our disposal (in this example, Rust), we can use it to write more efficient, safe, and fast code with much less effort than it ever required.
Let's get back to our initial problem: parsing a URL shouldn't be that hard, right? It contains a protocol, host, query parameters...
(See the URL-structure diagram in the Node.js documentation.)
That looks pretty complex. After reading through the URL standard I figured out that I don't want to implement it myself, so I started to look for alternatives.
I thought that surely I'm not the only person who wants to parse URLs. Browsers probably have already solved this issue, so I checked out chromium's solution: google-url. While that implementation can be easily called from Node.js using the N-API, I have a few reasons not to do so:
Updates: when I just copy-paste some code from the internet, I immediately get a feeling of danger. People have been doing it for a long time, and there are many reasons it hasn't worked out well. There is just no easy way to update a huge block of code sitting in my repository.
Safety: a person with not so much C++ experience cannot validate that the code is right, but we'll eventually have to run it on our servers. C++ has a steep learning curve, and it takes a long time to master it.
Security: we all heard about exploitable C++ code that is out there, which I'd rather avoid because I have no way to audit it myself. Using well maintained open-source modules gives me enough confidence to not worry about security.
So I'd much prefer a more approachable language, with an easy to use update mechanism and modern tooling: Rust!
A few words about Rust
Rust allows us to write fast and efficient code.
All of the Rust projects are managed with cargo - think about it as npm for Rust. Project dependencies can be installed with cargo, and there is a registry full of packages waiting for you to use.
I found a library which we can use in this example - rust-url, so shout out to the Servo team for their work.
We’re going to use Rust FFI too! We had already covered using Rust FFI with Node.js in a previous blogpost two years ago. Since then quite a lot has changed in the Rust ecosystem.
We have a supposedly working library (rust-url), so let's try to build it!
How do I build a Rust app?
After following instructions on https://rustup.rs, we can have a working rustc compiler, but all we should care about now is cargo. I don't want to go into much detail about how it works, so please check out our previous Rust blogpost if you're interested.
Creating a new Rust Project
Creating a new Rust project is as simple as cargo new --lib <projectname>.
You can check out all of the code in my example repository http://ift.tt/2zX5YeZ
To use the Rust library that we have, we can just list it as a dependency in our Cargo.toml
[package]
name = "ffi"
version = "1.0.0"
authors = ["Peter Czibik <[email protected]>"]

[dependencies]
url = "1.6"
There is no short (built-in) form for adding a dependency as there is with npm install - you have to add it manually. However, there is a crate called cargo-edit that adds similar functionality.
Rust FFI
To be able to use Rust modules from Node.js, we can use the FFI provided by Rust. FFI is short for Foreign Function Interface: a mechanism by which a program written in one programming language can call routines or use services written in another.
To be able to link to our library we have to add two things to Cargo.toml
[lib]
crate-type = ["dylib"]

[dependencies]
libc = "0.2"
url = "1.6"
We have to declare that our library is a dynamic library. A file ending with the extension .dylib is a dynamic library: it's a library that's loaded at runtime instead of at compile time.
We will also have to link our program against libc. libc is the standard library for the C programming language, as specified in the ANSI C standard.
The libc crate is a Rust library with native bindings to the types and functions commonly found on various systems, including libc. This allows us to use C types from our Rust code, which we will have to do if we'd like to accept or return anything from our Rust functions. :)
Our code is fairly simple - I'm using the url and libc crate with the extern crate keyword. To expose this to the outer world through FFI, it is important to mark our function as pub extern. Our function takes a c_char pointer which represents the String types coming from Node.js.
We need to mark our conversion as unsafe. A block of code that is prefixed with the unsafe keyword is used to permit calling unsafe functions or dereferencing raw pointers within a safe function.
Rust uses the Option<T> type to represent a value that can be empty. Think of it as a value that can be null or undefined in your JavaScript. You can (and should) explicitly check every time you try to access a value that can be null. There are a few ways to address this in Rust, but this time I'm going with the simplest method: unwrap, which will simply throw an error (panic, in Rust terms) if the value is not present.
When the URL parsing is done, we have to convert the result to a CString that can be passed back to JavaScript.
extern crate libc;
extern crate url;

use std::ffi::{CStr, CString};
use url::Url;

#[no_mangle]
pub extern "C" fn get_query(arg1: *const libc::c_char) -> *const libc::c_char {
    // Borrow the raw C string coming from Node.js and convert it to a &str.
    let s1 = unsafe { CStr::from_ptr(arg1) };
    let str1 = s1.to_str().unwrap();
    // Parse the URL and hand its query string back as a raw C string.
    let parsed_url = Url::parse(str1).unwrap();
    CString::new(parsed_url.query().unwrap().as_bytes()).unwrap().into_raw()
}
To build this Rust code, you can use cargo build --release command. Before compilation, make sure you add the url library to your list of dependencies in Cargo.toml for this project too!
We can use the ffi Node.js package to create a module that exposes the Rust code.
const path = require('path');
const ffi = require('ffi');

const library_name = path.resolve(__dirname, './target/release/libffi');
const api = ffi.Library(library_name, {
  get_query: ['string', ['string']]
});

module.exports = {
  getQuery: api.get_query
};
The naming convention is lib*, where * is the name of your library, for the .dylib file that cargo build --release builds.
This is great; we have working Rust code that we called from Node.js! It works, but you can already see that we had to do a bunch of conversions between the types, which can add some overhead to our function calls. There should be a much better way to integrate our code with JavaScript.
Meet Neon
Rust bindings for writing safe and fast native Node.js modules.
Neon allows us to use JavaScript types in our Rust code. To create a new Neon project, we can use its own CLI. Run npm install neon-cli --global to install it.
neon new <projectname> will create a new neon project with zero configuration.
With our neon project done, we can rewrite the code from above as the following:
#[macro_use]
extern crate neon;
extern crate url;

use url::Url;
use neon::vm::{Call, JsResult};
use neon::js::{JsString, JsObject};

fn get_query(call: Call) -> JsResult<JsString> {
    let scope = call.scope;
    let url = call.arguments.require(scope, 0)?.check::<JsString>()?.value();
    let parsed_url = Url::parse(&url).unwrap();
    Ok(JsString::new(scope, parsed_url.query().unwrap()).unwrap())
}

register_module!(m, {
    m.export("getQuery", get_query)
});
The new types that we're using at the top - JsString, Call, and JsResult - are wrappers for JavaScript types that allow us to hook into the JavaScript VM and execute code on top of it. The Scope allows us to bind our new variables to existing JavaScript scopes, so our variables can be garbage collected.
This is much like writing native Node.js modules in C++ which I've explained in a previous blogpost.
Notice the #[macro_use] attribute that enables the register_module! macro, which lets us create modules just like module.exports does in Node.js.
The only tricky part here is accessing arguments:
let url = call.arguments.require(scope, 0)?.check::<JsString>()?.value();
We have to accept all kinds of arguments (as any other JavaScript function does), so we cannot be sure whether the function was called with a single argument, several, or none at all. That is why we have to check that the first element exists.
Other than that change, we can get rid of most of the serialization and just use Js types directly.
Now let's try to run them!
If you downloaded my example first, you have to go into the ffi folder and do a cargo build --release and then into the neon folder and (with previously globally installed neon-cli) run neon build.
If you're ready, you can use Node.js to generate a new list of URLs with the faker library.
Run the node generateUrls.js command, which will place a urls.json file in your folder that our tests will read and try to parse. When that is ready, you can run the "benchmarks" with node urlParser.js. If everything was successful, you should see the benchmark results. (Screenshot omitted.)
This test was done with 100 randomly generated URLs, and our app parsed each of them only once to give a result. If you'd like to benchmark parsing, increase the number of parse runs (tryCount in urlParser.js) or the number of URLs generated (urlLength in urlGenerator.js).
The winner in my benchmark is the Rust Neon version, but as the length of the array increases, V8 gets more room to optimize and the JavaScript version closes the gap; eventually, it surpasses the Rust Neon implementation.
This was just a simple example, so of course there is still much for us to learn in this field. We can further optimize this calculation in the future, potentially using concurrency libraries provided by crates like rayon.
Implementing Rust modules in Node.js
Hopefully, you've also learned something today about implementing Rust modules in Node.js along with me, and you can benefit from a new tool in your toolchain from now on. I wanted to demonstrate that while this is possible (and fun), it is not a silver bullet that will solve all of the performance problems.
Just keep in mind that knowing Rust may come in handy in certain situations.
If you have any questions or comments, let me know in the section below - I’ll be here to answer them!
babyawacs · 3 years
How is the new-atom, new-electron licensing business going? Make sure dinos are safe if they're in it. Let's supercharge the USA and UK economies and the customers of new-atom. Publish, notary-stamped, which impossible things were done, which things were supercharged, or how many and at which value - the issue: a few key metrics that show the value of the license stamps //// the germans would prime them and that propaganda shitballing day and night…
dothtml5 · 8 years
IT technology trends and the changing worldview of engineers as seen at Developers Summit (Web担当者Forum)
New book: SEO Certification Official Textbook
Challenges and countermeasures for designers joining design systems (could)
Hatena Bookmark Plus to end service on April 5 (INTERNET Watch)
Yarn: will the package manager from Facebook become the standard that replaces npm? (WPJ)
Worth knowing! An introduction to npm for new web designers (フェレット)
How to find what is blocking Node.js performance optimization (POSTD)
Introduction to Vue.js (02): dynamically changing an element's class attribute (Fumio Nonaka)
"dataviz.tools", a collection of tools, knowledge, and resources for data visualization (かちびと)
Keymetrics, a monitoring tool for Node.js infrastructure, raises $2 million (TechCrunch)
A roundup of how the layouts, navigation, sliders, and forms you see these days are implemented (コリス)
What you should know before switching careers to engineering with no experience (TechAcademy)
Cool front-end design with diagonal backgrounds (sho012b | Qiita)
joemktg · 8 years
6 Google-Approved Tools that Take Your Analytics Data to New Heights - SEMrush
1. Leadfeeder reveals the companies that your site's anonymous visitors work for
2. ReportGarden automatically creates PPC performance reports with Google Analytics data
3. KeyMetric Call Analytics merges phone sales conversion data with Google Analytics
4. SkyGlue makes event tracking smarter and easier
5. UserReport supercharges your demographics data with surveys
6. Navegg provides better segmentation for your retargeting campaigns
marketingIO's insight:
Visualize your Marketing Stack. marketingIO will analyze your marketing technology and deliver a visual of your MarTech Stack. Free. Go here: http://go.marketingio.com/stack_analysis 
 marketingIO: MarTech for B2B Marketers. Strategy, Software, Services, Support, Staffing.  Go here to submit RFQs: http://www.marketingio.com #MarTech #DigitalMarketing