#decided to edit it in my converter-program instead of trying to write it manually in the json-files
Explore tagged Tumblr posts
racke7 · 1 month ago
Text
And apparently the API is using only the most recent stats? So if you want earlier stats you need to manually edit each stat that's been changed since whatever gen you wanted.
Yay. Fun.
Figured out a way to make the API work for me. Currently writing it into a file up to Gen5. We'll see how that goes.
Nevermind. It finished whilst writing this. And it did work. Yay.
2 notes
themikewheelers · 8 years ago
Note
tell me about your photoshop troubles. i have had the same in the past (specifically #THE AMOUNT OF VIRUSES I'VE HAD ON MY COMPUTER) and it's good to know someone else has them too 😂😂
Okay so it started last week. I've been "Photoshopping" things for years now but I've always used knockoff programs and never bothered to get the actual thing. So I decide to make an edit and I want to try using actual Photoshop this time, so FIRST I go to download the episodes but the fucking download program I got,,,,, it's in GERMAN. So I'm opening google translate and attempting to do this download, then it just crashes and it doesn't work. So then I get a rec from my friend for a good ENGLISH WORKING SAFE program so I get that and download it and the episodes. Day 1 is over. Next day I decide to download photoshop and fucking,,,,,, it's in SPANISH. Now I've been taking Spanish for 3 years now so it's to the point where I'm like "Okay I know that archivo means file so I'm just gonna stick with this" so I run downstairs and I pick up that old Spanish English dictionary I bought 3 years ago for my first Spanish class and haven't opened since, and I'm looking for the freaking import button, and the program itself DOESNT HAVE AN IMPORT BUTTON (I even clicked on that thing that was like 'show all menu options' or something like that, it still wasn't there, no import button) so THAT concludes my experience with Español Photoshop. The rest of Day 2 is downloading as many different photoshop links as I can find, just trying to find ONE that worked. I swear to God and Jesus I'm not exaggerating when I say I downloaded hundreds of things, I can open my laptop and check for the exact number but I swear it was at least 150 different things. Out of all of those, 6 of them worked. Number one was the Spanish one, and then the next one? In Italian. It worked perfectly but it was in Italian and this was one of the earlier downloads, I'm still full of hope, not just desperate for anything that says "Photoshop" on it, so I delete. Next one was Portuguese, and the one after THAT was Japanese. All of these claimed to be in English and btw I learned from my series of photoshop adventures that you can't change the language, it's connected to where you downloaded from and you have to pay to change it. So at this point I'm PRETTY far into the list of like 150 downloads, and out of the 4 that had worked so far, they were all in different languages. So understandably I'm getting very frustrated and I keep downloading things, and then finally, oh thank the lord finally, it's in English and it's working. Tears of joy running down my face-- and then my laptop just shuts off. I wait for it to power back on and then I get a little notification, it had found a virus and had to remove it. So I check and of course, English Photoshop was the virus it got rid of. So then, maybe 15 downloads later, that's when things got interesting. I find another working download, but the language of it is what's weird. It's half in English, half in Spanish. I don't ever want to look at another foreign word again. Sorry to my Spanish teacher, but this was like my 12th hour doing this and why are there so many languages to download photoshop in???? I was so exhausted, I couldn't do it anymore. I didn't want to settle at Half English though, so I delete it and keep looking. I kept looking for a while, until finally giving up. Six downloads had worked, and I couldn't use any of them. Now I think this is a good time to tell you that I'm currently having finals week, and instead of studying, THIS was what I've been doing (and it definitely didn't do anything to ease my anxiety levels). 
So I go and study for a few hours, figuring I'm done with Photoshop. But that's not who I am, when I spend so much time trying to do something, I'm gonna do it. Not out of determination, but it comes to a point where I'm doing it out of spite, just to say that I finally did it. So it's around 3am and I'm like "Okay I'm gonna download a few more and then I'm going to sleep" so I start downloading some and then around the third one, it's in English and it works. You can imagine my tears of joy. So I turn on FRIENDS and go the hell to sleep. Now it's day 3, I have the downloaded episodes and I have the working English photoshop, now all I have to do is make the edit, right? Wrong. I go to put the video into photoshop, and it's saying it's the wrong file type and it's not supported by Photoshop. So at first I'm like "okay that's understandable, I did just download them" so my FIRST problem is downloading a file converter. I'm not going to get into the story, but it's pretty similar to how it took to download Photoshop, hundreds of downloads, the only ones that worked are in foreign languages, and THIS was the day where I get the most notifications from my computer saying it located viruses. Now after a long long long time downloading tons of file converters, I finally find one to use. So I go to convert the files and then I try and put the converted files into photoshop and whoooop it didn't work. Now the rest of Day 3 consisted of me converting the files into HUNDREDS of different types. I converted them into basic things, things Photoshop claimed it supported, things all the people with similar problems online claimed they needed to convert into, things I hadn't even heard of. In the end, after around 15 hours of converting files, I FINALLY found something that photoshop will take. Now onto Day 4, this should be the easy day, now I just have the make the edit. Well my very first problem is that now I'm opening photoshop and I realize something, this has been going on so long I can't even remember what my idea to edit was. So then I'm just like "whatever I'll gif a random scene and put a meme on it, I gotta do something" So I go to make the gif, when I find out THE FREAKING PHOTOSHOP I DOWNLOADED DOESNT HAVE THE ANIMATION TIMELINE. I don't know why it didn't have it, but it didn't. So I end up going online and downloading the timeline separately, basically using it kinda like a psd. So I get the timeline and I want to make the gif and remove some frames, but for whatever reason my Photoshop doesn't let me select multiple frames at once, so I have to go through like 500 frames and manually delete each of them to get what I want. Now I think now is the time in this story to tell y'all that I'm not just stupid or anything. I know how to use Photoshop, I've been using it since I was in the sixth grade, I used to be in the digital editing club at my school, I know how to use photoshop, it's just this program is so wacky nothing is working. So manually I delete like 450 frames and after that everything with making the gif seems to be going well, up until it's time to add the text. I start typing and the font I'm using? Times New Roman. But instead these random symbols are showing up and it won't let me change it. So I just delete everything and start over and then I get up to adding text again, and this time symbols aren't showing up, instead just whatever letter I press, it just puts in a dash. 
So I'm tryna write the subtitles and instead what's coming up is "----------------" Now, after several several several redos it FINALLY starts writing letters, and there are a few more minor issues but for the most part everything is running smoothly, I finally make the gifs. So to conclude, this gifset took 4 days, 6 languages, hundreds of downloads, God-knows how many viruses, and an entire ocean of tears, but I finally made the gifs. Here's the link. God obviously determined that my making of this was a sin, because he definitely did everything in his power to make it stop, but dammit this was one of the most frustrating experiences of my life >>> enjoy http://themikewheelers.tumblr.com/post/156287152865
6 notes
iyarpage · 8 years ago
Text
Writing fast and safe native Node.js modules with Rust
TL;DR - Use Rust instead of C++ to write native Node.js modules!
RisingStack faced a shocking event last year: we reached the maximum speed that Node.js had to offer at the time, while our server costs went through the roof. To increase the performance of our application (and decrease our costs), we decided to completely rewrite it and migrate our system to a different infrastructure - which was a lot of work, needless to say.
I figured out later that we could have just implemented a native module instead!
Back then, we weren’t aware that there was a better method to solve our performance issue. Just a few weeks ago I found out that another option could have been available. That’s when I picked up Rust instead of C++ to implement a native module. I figured out that it is a great choice thanks to the safety and ease of use it provides.
In this Rust tutorial, I’m going to walk you through the steps of writing a modern, fast and safe native module.
The Problem with our Node.js Server Speed
Our issue began in late 2016, when we were working on Trace, our Node.js monitoring product, which was merged with Keymetrics in October 2017.
Like every other tech startup at the time, we were running our services on Heroku to save on infrastructure and maintenance costs. We were building a microservice architecture, which meant that our services communicated a lot over HTTP(S).
This is where the tricky part comes in: we wanted to communicate securely between the services, but Heroku did not offer private networking, so we had to implement our own solution. Therefore, we looked into a few solutions for authentication, and the one we eventually settled on was HTTP signatures.
To explain it briefly: HTTP signatures are based on public-key cryptography. To create an HTTP signature, you take all parts of a request - the URL, the body and the headers - and sign them with your private key. Then, you can give your public key to those who will receive your signed requests so they can validate them.
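To make that concrete, here is a minimal stand-alone sketch of the sign/verify idea. This is purely my own illustration - it assumes a recent version of the ring crate and Ed25519 keys, which is not necessarily what we used in the actual HTTP signature implementation.

extern crate ring;

use ring::rand;
use ring::signature::{self, KeyPair};

fn main() {
    // The "request" we want to sign: in a real HTTP signature this would be
    // assembled from the URL, the body and the headers.
    let message = b"GET /api/v1/metrics?from=0&to=10";

    // Generate a key pair (in practice the private key comes from configuration).
    let rng = rand::SystemRandom::new();
    let pkcs8 = signature::Ed25519KeyPair::generate_pkcs8(&rng).unwrap();
    let key_pair = signature::Ed25519KeyPair::from_pkcs8(pkcs8.as_ref()).unwrap();

    // Sender: sign the request data with the private key.
    let sig = key_pair.sign(message);

    // Receiver: verify the signature using only the sender's public key.
    let public_key =
        signature::UnparsedPublicKey::new(&signature::ED25519, key_pair.public_key().as_ref());
    assert!(public_key.verify(message, sig.as_ref()).is_ok());
}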
Time passed, and we noticed that CPU utilization went through the roof in most of our HTTP server processes. We suspected the obvious reason: if you're doing crypto, heavy CPU usage comes with the territory.
However, after doing some serious profiling with the v8-profiler we figured out that it actually wasn't the crypto! It was the URL parsing that took the most CPU time. Why? Because to do the authentication, we had to parse the URL to validate request signatures.
To solve this issue, we decided to leave Heroku (which we wanted to do for other reasons too) and create a Google Cloud infrastructure with Kubernetes & internal networking - instead of optimizing our URL parsing.
The reason for writing this story/tutorial is that just a few weeks ago I realized that we could have optimized URL parsing in another way - by writing a native library with Rust.
Naive developer going native - the need for a Rust module
It shouldn't be that hard to write native code, right?
Here at RisingStack, we've always said that we want to use the right tool for the job. To do so, we’re always doing research to create better software, including some on C++ native modules when necessary.
Shameless plug: I've written a blogpost about my learning journey on native Node.js modules too. Take a look!
Back then I thought that in most cases C++ was the right way to write fast and efficient software. However, now that we have modern tooling at our disposal (in this example, Rust), we can write more efficient, safe and fast code with much less effort than it ever required before.
Let's get back to our initial problem: parsing a URL shouldn't be that hard, right? It contains a protocol, host, query parameters...
(URL structure diagram - source: the Node.js documentation)
That looks pretty complex. After reading through the URL standard I figured out that I don't want to implement it myself, so I started to look for alternatives.
I thought that surely I'm not the only person who wants to parse URLs. Browsers probably have already solved this issue, so I checked out chromium's solution: google-url. While that implementation can be easily called from Node.js using the N-API, I have a few reasons not to do so:
Updates: when I just copy-paste some code from the internet, I immediately get a sense of danger. People have been doing it for a long time, and there are plenty of examples of it not working out so well. There is just no easy way of updating a huge block of code that is sitting in my repository.
Safety: a person with not so much C++ experience cannot validate that the code is right, but we'll eventually have to run it on our servers. C++ has a steep learning curve, and it takes a long time to master it.
Security: we all heard about exploitable C++ code that is out there, which I'd rather avoid because I have no way to audit it myself. Using well maintained open-source modules gives me enough confidence to not worry about security.
So I'd much prefer a more approachable language, with an easy to use update mechanism and modern tooling: Rust!
A few words about Rust
Rust allows us to write fast and efficient code.
All Rust projects are managed with cargo - think of it as npm for Rust. Project dependencies can be installed with cargo, and there is a registry full of packages waiting for you to use.
I found a library which we can use in this example - rust-url, so shout out to the Servo team for their work.
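As a quick taste of what rust-url can do - this is just my own minimal example, not code from the project - parsing a URL and reading its parts looks like this:

extern crate url;

use url::Url;

fn main() {
    // Parse once, then read the individual components.
    let parsed = Url::parse("https://example.com:8080/users?page=2#top").unwrap();

    println!("scheme: {}", parsed.scheme());      // https
    println!("host:   {:?}", parsed.host_str());  // Some("example.com")
    println!("port:   {:?}", parsed.port());      // Some(8080)
    println!("path:   {}", parsed.path());        // /users
    println!("query:  {:?}", parsed.query());     // Some("page=2")
}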
We’re going to use Rust FFI too! We had already covered using Rust FFI with Node.js in a previous blogpost two years ago. Since then quite a lot has changed in the Rust ecosystem.
We have a supposedly working library (rust-url), so let's try to build it!
How do I build a Rust app?
After following the instructions on https://rustup.rs, we'll have a working rustc compiler, but all we need to care about now is cargo. I don't want to go into much detail about how it works, so please check out our previous Rust blogpost if you're interested.
Creating a new Rust Project
Creating a new Rust project is as simple as cargo new --lib <projectname>.
You can check out all of the code in my example repository http://ift.tt/2zX5YeZ
To use the Rust library that we have, we can just list it as a dependency in our Cargo.toml
[package] name = "ffi" version = "1.0.0" authors = ["Peter Czibik <[email protected]>"] [dependencies] url = "1.6"
There is no short (built-in) command for adding a dependency the way npm install does - you have to add it manually yourself. However, there is a crate called cargo-edit that adds similar functionality.
Rust FFI
To be able to use Rust modules from Node.js, we can use the FFI provided by Rust. FFI is short for Foreign Function Interface: a mechanism by which a program written in one programming language can call routines or make use of services written in another.
To be able to link to our library we have to add two things to Cargo.toml
[lib] crate-type = ["dylib"] [dependencies] libc = "0.2" url = "1.6"
We have to declare that our library is a dynamic library - one that's loaded at runtime instead of being linked at compile time. On macOS the build output ends with the .dylib extension (on Linux it's .so, on Windows .dll).
We will also have to link our program against libc. libc is the standard library for the C programming language, as specified in the ANSI C standard.
The libc crate is a Rust library with native bindings to the types and functions commonly found on various systems, including libc. This allows us to use C types from our Rust code, which we will have to do if we'd like to accept or return anything from our Rust functions. :)
Our code is fairly simple - I'm pulling in the url and libc crates with the extern crate keyword. To expose a function to the outside world through FFI, it is important to mark it as pub extern. Our function takes a c_char pointer, which represents the string coming from Node.js.
We need to mark our conversion as unsafe. A block of code prefixed with the unsafe keyword permits calling unsafe functions or dereferencing raw pointers from within a safe function.
Rust uses the Option<T> type to represent a value that can be empty. Think of it as a value that can be null or undefined in your JavaScript. You can (and should) explicitly check every time you try to access a value that can be null. There are a few ways to address this in Rust, but this time I'm going with the simplest method: unwrap which will simply throw an error (panic in Rust terms) if the value is not present.
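Here is a tiny stand-alone sketch of the difference (my own example, not from the project): unwrap is fine for a quick experiment, while an explicit match handles the empty case without panicking.

fn main() {
    // query() on a parsed URL returns an Option; modeled here with a plain Option value.
    let maybe_query: Option<&str> = Some("page=1");

    // unwrap() panics if the value is None.
    println!("query: {}", maybe_query.unwrap());

    // An explicit match (or if let / unwrap_or) handles both cases safely.
    match maybe_query {
        Some(q) => println!("query: {}", q),
        None => println!("no query string present"),
    }
}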
When the URL parsing is done, we have to convert the result to a CString that can be passed back to JavaScript.
extern crate libc;
extern crate url;

use std::ffi::{CStr, CString};
use url::Url;

#[no_mangle]
pub extern "C" fn get_query(arg1: *const libc::c_char) -> *const libc::c_char {
    // Wrap the raw C pointer coming from Node.js in a CStr (unsafe: we trust the caller).
    let s1 = unsafe { CStr::from_ptr(arg1) };
    // Convert it to a Rust &str, panicking on invalid UTF-8.
    let str1 = s1.to_str().unwrap();
    // Parse the URL, panicking if it is invalid.
    let parsed_url = Url::parse(str1).unwrap();
    // Return the query string as a raw C string that JavaScript can read.
    CString::new(parsed_url.query().unwrap().as_bytes()).unwrap().into_raw()
}
To build this Rust code, you can use the cargo build --release command. Before compiling, make sure the url library is listed in this project's Cargo.toml dependencies too!
We can use the ffi Node.js package to create a module that exposes the Rust code.
const path = require('path');
const ffi = require('ffi');

const library_name = path.resolve(__dirname, './target/release/libffi');
const api = ffi.Library(library_name, {
  get_query: ['string', ['string']]
});

module.exports = {
  getQuery: api.get_query
};
The .dylib file that cargo build --release produces follows the lib* naming convention, where * is the name of your library.
This is great: we have working Rust code that we can call from Node.js! It works, but you can already see that we had to do a bunch of conversions between types, which adds a bit of overhead to our function calls. There should be a much better way to integrate our code with JavaScript.
Meet Neon
Rust bindings for writing safe and fast native Node.js modules.
Neon allows us to use JavaScript types in our Rust code. To create a new Neon project, we can use its own CLI. Use npm install neon-cli --global to install it.
neon new <projectname> will create a new neon project with zero configuration.
With our neon project created, we can rewrite the code from above as follows:
#[macro_use]
extern crate neon;
extern crate url;

use url::Url;
use neon::vm::{Call, JsResult};
use neon::js::{JsString, JsObject};

fn get_query(call: Call) -> JsResult<JsString> {
    let scope = call.scope;
    // Grab the first JavaScript argument and check that it is a string.
    let url = call.arguments.require(scope, 0)?.check::<JsString>()?.value();
    let parsed_url = Url::parse(&url).unwrap();
    // Hand a new JsString back to the JavaScript VM.
    Ok(JsString::new(scope, parsed_url.query().unwrap()).unwrap())
}

register_module!(m, {
    m.export("getQuery", get_query)
});
The new types we're using at the top - JsString, Call and JsResult - are wrappers for JavaScript types that allow us to hook into the JavaScript VM and execute code on top of it. The Scope lets us bind our new variables to existing JavaScript scopes, so they can be garbage collected.
This is much like writing native Node.js modules in C++ which I've explained in a previous blogpost.
Notice the #[macro_use] attribute that lets us use the register_module! macro, which lets us export functions just like Node.js module.exports does.
The only tricky part here is accessing arguments:
let url = call.arguments.require(scope, 0)?.check::<JsString>()?.value();
We have to accept all kinds of arguments (as any JavaScript function does), so we cannot be sure whether the function was called with one argument, several, or none at all. That is why we have to check that the first argument exists.
Other than that change, we can get rid of most of the serialization and just use Js types directly.
Now let's try to run them!
If you downloaded my example, first go into the ffi folder and run cargo build --release, then go into the neon folder and (with neon-cli installed globally beforehand) run neon build.
If you're ready, you can use Node.js to generate a new list of URLs with the faker library.
Run the node generateUrls.js command, which will place a urls.json file in your folder that our tests will read and try to parse. When that is ready, you can run the "benchmarks" with node urlParser.js. If everything was successful, you should see something like this:
This test was done with 100 URLs (randomly generated) and our app parsed them only once to give a result. If you'd like to benchmark parsing, increase the number (tryCount in urlParser.js) of URLs or the number of times (urlLength in urlGenerator.js).
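If you just want a rough feel for the Rust side on its own, here is a small timing sketch of my own (it is not the urlParser.js benchmark from the repository, and the URLs are made up):

extern crate url;

use std::time::Instant;
use url::Url;

fn main() {
    // Generate some synthetic URLs, similar in spirit to what the generator script produces.
    let urls: Vec<String> = (0..100)
        .map(|i| format!("https://example.com/users/{}?page={}&q=test", i, i))
        .collect();

    let start = Instant::now();
    for u in &urls {
        let parsed = Url::parse(u).expect("valid URL");
        let _query = parsed.query();
    }
    println!("parsed {} URLs in {:?}", urls.len(), start.elapsed());
}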
You can see that the winner in my benchmark is the Rust neon version, but as the length of the array increases, V8 gets more opportunity to optimize, and the JavaScript version closes the gap. Eventually it surpasses the Rust neon implementation.
This was just a simple example, so of course there is still much for us to learn in this field.
We can further optimize this calculation in the future, potentially using concurrency libraries provided by crates like rayon.
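As a hint of what that could look like - a hedged sketch of my own, not something the example repository contains - rayon turns the sequential iteration into a data-parallel one with very little extra code:

extern crate rayon;
extern crate url;

use rayon::prelude::*;
use url::Url;

fn main() {
    let urls: Vec<String> = (0..100)
        .map(|i| format!("https://example.com/users/{}?page={}", i, i))
        .collect();

    // par_iter() spreads the parsing work across rayon's global thread pool.
    let queries: Vec<Option<String>> = urls
        .par_iter()
        .map(|u| Url::parse(u).ok().and_then(|p| p.query().map(String::from)))
        .collect();

    println!("extracted {} query strings", queries.iter().filter(|q| q.is_some()).count());
}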
Implementing Rust modules in Node.js
Hopefully, you've also learned something today about implementing Rust modules in Node.js along with me, and you can benefit from a new tool in your toolchain from now on. I wanted to demonstrate that while this is possible (and fun), it is not a silver bullet that will solve all of the performance problems.
Just keep in mind that knowing Rust may come in handy in certain situations.
If you have any questions or comments, let me know in the section below - I’ll be here to answer them!
Writing fast and safe native Node.js modules with Rust published first on http://ift.tt/2fA8nUr
0 notes
hydrus · 8 years ago
Text
Version 282
youtube
windows
zip
exe
os x
app
tar.gz
linux
tar.gz
source
tar.gz
I had a great week. I fixed some things and added regex tag parsing to import folders.
flash fixed and debug exes
I accidentally broke flash embed windows for Windows users last week. It seems the new version of the program I use to make the Windows build was not linking something correctly, so many users were unable to play flash without a stream of errorspam, and some users could not even boot.
I have rolled back my PyInstaller to the old version and everything flash seems to be working again. I will be testing flash embeds every week from now on to make sure this doesn't slip through again.
Also: I think I have figured a way to include the debug executables in the regular builds. They are just called client_debug and server_debug. If you cannot boot the client in future, please try running the debug exe and screenshotting the additional error information it provides. (I'll write a .txt file in the base directory next week or so to explain this further)
faster thumbnails
This is just a little thing really, but I 'smoothed' out the thumbnail rendering pipeline a bit. The client should generate some thumbnails much faster, particularly when they are first viewed.
pixiv broken
It seems that Pixiv recentishly changed how their gallery pages work (they now load in a stream as you scroll down, via some javascript or json thing). Unfortunately, the existing hydrus pixiv parser cannot deal with this and as a result was not parsing these results (I think it was just getting a handful in the 'popular' row and then quitting).
I have hidden the pixiv downloader links in the client and paused any existing pixiv subscriptions. You'll also get a little popup about it on update.
I would like to say that the new hydrus downloading engine will be able to handle the new system, but I have not looked into it deep enough to be certain. I hope it can, and will look into it more when it finally comes time for us to rebuild the existing parsers in the new engine.
Pixiv do have an official json API, but it is an OAuth thing, which is an entirely different kettle of fish for our purposes.
EDIT: Someone just told me that artist pages are still using the old system, so I will revisit this next week to make sure and reactivate them if they are all ok.
filename tagging options
As has been long-planned, I have extracted the regex and other 'filename' tagging options you can set after the 'import files' dialog into their own object and applied them to import folders!
If you are into import folders or the regex parsing, please check it out under the manage import folders dialog--a new simple listctrl replaces the old '.txt parsing' button. You can't set 'tags for some files' or the '#'-based tags (because those require manual input), but you should be able to set anything else on a per-service basis.
I have tested this and it all seems to work correctly, but it was a complicated and semi-big rewrite, so please let me know if you run into trouble. Maybe try it out on a small example before you go full on!
string converter
This is an advanced thing for people interested in making scripts and downloaders with the parsing engine.
I realised this week that the login manager and downloader overhaul actually have a lot of overlap in the work that needs to be done, particularly in the parsing engine. Rather than finish the login and domain managers first and then move on to the downloader overhaul, I have decided to work on all three at the same time, making small improvements as the new parsing engine can do more things.
So, this week, I finished a new 'StringConverter' class, that does simple string transformations--adding or removing text, encoding to hex, reversing, that sort of thing. I added it to the html formulas in the existing scripting system and expect to put it in a couple of other places for when you are in the situation of, 'I can parse this data, but I don't need the first six characters.' I don't expect the existing file lookup scripts need this too much, but feel free to play around with it. I think I'll add a 'regex' transformation type in future.
I also added it to the 'file identifier' section of the initial file lookup request. Now, rather than the file's hash bytes being encoded to hex or base64 from the dropdown, it occurs through a string converter. There has been a persistent issue of wanting to generate an 'md5:[md5_hash]' argument, and I believe this will now fix it. I do not know the exact specifics here so I won't write the script myself, but I'm happy to work with people on the discord or email or whatever to figure out a solution here.
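To make the idea concrete, the conversion itself is tiny - hex-encode the hash bytes, then prepend the prefix. (A sketch of my own in Rust purely for illustration; hydrus is written in Python and this is not hydrus code.)

fn md5_argument(hash_bytes: &[u8]) -> String {
    // Encode each byte as two lowercase hex characters, then prepend the prefix.
    let hex: String = hash_bytes.iter().map(|b| format!("{:02x}", b)).collect();
    format!("md5:{}", hex)
}

fn main() {
    let fake_hash = [0xd4, 0x1d, 0x8c, 0xd9]; // stand-in bytes, not a real md5
    println!("{}", md5_argument(&fake_hash)); // md5:d41d8cd9
}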
full list
rolled back to an older version of pyinstaller that does not seem to have the embedded flash window problems windows users experienced
added an error handler to wx.lib.flashwin import--if it fails to import, the client will print the error to the log and thereafter show 'open externally' buttons instead of the embedded flash window
created a 'filename tagging options' object to handle the instance-non-specific auto-tagging parts of the filename tagging dialog
moved all the appropriate tag generation code to this new object
extracted the simple and advanced panels in the filename tagging dialog to a separate panel
wrote a new wrapper panel to edit filename tagging options objects directly
cleaned up some tag generation code--it'll now all be siblinged/parented/censored the same way
import folders now use the filename tagging options object, harmonising this tag parsing code
edit import folder gui now has an add/edit/delete listctrl to manage the new tag_service->filename_tagging_object relationship (this replaces the old .txt management button and summary text)
finished the StringConverter class in the new parsing engine. it applies a series of transformations to text
wrote a panel to edit string converters
wrote a panel to edit string converters' individual transformations
updated html formulas to use string converters instead of the old cull_and_add system
html formula edit panels can now edit their new string converters
file lookup scripts now 'stringconvert' their file identifier strings--this should allow the 'md5:md5_hash' fix we've been talking about (i.e. by first encoding to hex and then prepending 'md5:')
the help->about window now shows the client's default temp directory
thumbnail regeneration--particularly full-size regen and resized png gen--should be much faster in the client
the debug exes are now included in the windows build
the debug exes are now included in the non-windows builds
the test exe is no longer included in the windows install (can't remember it ever being useful, and it is like 10MB now)
unfortunately, a recentish change to how pixiv serves gallery page results has broken the hydrus pixiv downloader. this will have to wait for the downloader overhaul, and even then, it might be _slightly_ tricky. pixiv downloader entries are now hidden, and existing pixiv subscriptions will be paused on update
the thumbnail select submenu now clarifies 'all (all in x)' if all are in the inbox or archive
you can now archive/delete filter from the thumbnail menu on a single file
lowered the http session inactivity timeout even more, to 45 minutes
fixed a couple of instances of the new subscription dialogs being unable to mass-retry failures
ffmpeg parsing errors now print more info about broken file path
some daemons will be snappier about shutting down on application shutdown
took out the sometimes invalid 'booting' phrase in the during disk cache population status report
the client will now warn you about page instability at 165 pages after one user reported instability at 175. 200 is still the strict limit.
downloader pages and subscriptions will fail more gracefully if their downloaders cannot be loaded (e.g. they used a since-deleted booru)
fixed listctrl panel button enabled/disabled status when the child listctrl starts empty
the new listctrl can now select data in a cleverer way
misc fixes
misc refactoring
next week
I want to get the 'StringMatch' object finished, and maybe the accompanying 'URLMatch' done as well. These will verify that strings and urls match certain rules, and will help with login and domain and downloader verification and veto states.
0 notes
randomconnections · 8 years ago
Text
Why and Where, Revisited
As our departure date quickly approaches I’m reminded as to why I started this blog in the first place. It was 2004 and we were getting ready to drive from Greenville, SC to Samish Island, WA to visit Laura’s mom. We wanted a way for her to track our travels, and RandomConnections was born.
The first iteration of this website was a rough affair. I wrote the programming myself in ASP using an MS-Access back end. It did what I wanted it to do – allow me to edit and post from a web browser instead of having to use a web editor like Dreamweaver or FrontPage (gag.) Back in 2004 that was a big deal.
The first iteration of the site had the travel blog tucked under a separate link. That year was the 200th anniversary of the Lewis and Clark Expedition, so we were roughly following that trek. We had a map we could update (manually) and a way to post small photos.
This was in the days before Twitter and Facebook. Microblogging wasn’t a thing. (Is that term even still used?) My blog posts tended to be fairly short, and might even fit within a single tweet now.
One of the first posts was entitled “Why and Where.” I thought it might be interesting to look at the reasons for our first trip. The post was short enough that I’m quoting it in its entirety below.
It all starts with China – dishes, not the place.  Laura’s mom has given her some china that had belonged to her grandmother, and rather than ship the dishes, we decided to drive out and get them.  Her mom lives on an Island in the Puget Sound about two hours north of Seattle, so this is going to be a coast-to-coast trip.  We are both taking off two weeks to make a meandering drive out west, roughly following the Lewis and Clark trail for part of the trip.  We don’t actually have a set itinerary, but we know that we want to be on Samish Island for about four days, we know when we can leave, and we know when we have to get back.
Our original intent was to rent an SUV and camp along the way. That plan changed when we got our convertible and thought it would be the perfect vehicle for the trip.  Of course, that meant trading camping gear for a place to store the roof when it’s down.
So I log onto MapQuest, plug in our address, plug in mom-in-law’s address, and come up with a route that’s 2871.21 miles long, and would take 43 hours 47 minutes if driving straight through.  Oddly enough, the route roughly follows the Lewis and Clark Trail, and takes us near some interesting sites – Badlands, Rushmore, Yellowstone, and Grand Tetons.
The trip back will be a bit less meandering.
This time we don’t have the luxury of making all those stops at National Parks and scenic places. We will be traveling caravan-style with two cars, two cats, and two kayaks. The two cats are the limiting factor. We can’t leave them alone to go explore. That also limits where we can stay the night. We have to find pet-friendly hotels.
Originally we had planned to pull a small trailer behind the Subaru. While making inquiries at U-Haul we found that they have shipping boxes about the size of the trailer we planned to rent that they would deliver to our destination. The cost worked out to be about the same as renting a trailer for that time and distance. That was a no-brainer. We no longer have the hassles of a trailer and it saves wear and tear on the car.
So what does one pack for a year away?
I think back to another extended trip. In 1991 Laura had another year-long sabbatical from Furman. We moved to Tucson so that she could do research at the University of Arizona. For that trip we hired movers and took just about everything – furniture, pots, pans, dishes, etc., etc. We actually took too much stuff. For that trip we were moving into an unfurnished apartment and would need most of those things. A couple of Furman students would be renting our house while we were away.
This sabbatical is different. The house we’re moving to now belongs to Laura and her sister. It’s furnished and has everything we need. We really only need clothes, personal effects, and toys to entertain us on the long winter nights. Everything else stays. My sister Glynda will be living in our house for the year so she will need the furnishings, etc.
Even so, the process of deciding what to take and what to leave behind has been interesting. Some things I look at and think, “I don’t want to haul that across country. I’ll just get another one out there if I need it.” Some rejected items I look at and wonder if I should just get rid of them if I can do without them for a full year.
This sabbatical is different from the last year-long sojourn in that we aren’t locked into a schedule. In Tucson Laura kept regular work hours. I worked part time at a reading clinic and later at an elementary school. I had regular rehearsals with the Arizona Repertory Singers. This year Laura will be working on completing papers based on her research and I’ll be writing my book on South Carolina ghost towns. I checked out a couple of singing groups in the Skagit Valley, but decided not to get tied down with regular rehearsals. We’ll take some time to travel and visit family on the west coast. One of our biggest jobs will be to go through the house and see what needs to be done to settle her parents’ estate.
As for the upcoming drive, it will take us about seven days. We could do it in less time, but we’re trying to keep the daily drive time down. We’re also allowing one day to drive through Yellowstone. I’m sure I’ll be blogging along the way. I have a GoPro set up as a dash cam and hope to make a time-lapse of the complete trip. It will be more tedious than our 2004 trek, but certainly less arduous than the original Lewis and Clark trip. One Facebook commenter also pointed out that this route roughly follows the path of totality in reverse for the recent solar eclipse.
As we come down to the wire we’re tossing stuff into suitcases and fretting over what we might have forgotten and left undone. However, we’re just about past that point. The trip is upon us. Day after tomorrow, we head west.
from WordPress http://ift.tt/2vsKujL via IFTTT
0 notes
lewiskdavid90 · 8 years ago
Text
80% off #Python Network Programming – Part 2: Multivendor Environment – $10
Automate Network Tasks Fast and Easy Using The Information, Tools, Libraries and Python Scripts Included In This Course!
Intermediate Level – 10.5 hours, 189 lectures
Average rating 4.6/5 (41 ratings. Instead of using a simple lifetime average, Udemy calculates a course’s star rating by considering a number of different factors such as the number of ratings, the age of ratings, and the likelihood of fraudulent ratings.)
Course requirements:
You should have a great desire to learn network programming and do it in a hands-on fashion, without having to watch countless lectures filled with slides and theory
You should already be familiar with networking concepts like: SSH, Telnet, ACLs (Access Control Lists), VMs (Virtual Machines)
You should already be familiar with Python syntax: functions, if, for, while, string and file operations etc. If you have no experience whatsoever in Python, then I encourage you to also check out my “Python Network Programming – Build 7 Apps” course, which teaches Python from scratch and then dives into using it on network devices, by building some great network apps.
I will provide a fully configured virtual machine (Ubuntu 15.10, improved version of the VM in Part 1 of the course) with all the necessary software (Python Modules, Network Tools, dependencies etc.). This way you can save time and start programming network devices as soon as possible
For teaching network programming concepts and tools on vendors like Cisco (IOS), Juniper (JunOS and ScreenOS), HP (ProCurve OS) and Avaya (ERS OS), I will use the Ubuntu 15.10 VM and physical devices in my lab, since these vendors do not provide virtualized environments for emulating their router/switch operating systems
For teaching network programming concepts and tools on other vendors, like Cisco (IOS XE and IOS XR) and Arista (vEOS), I will use virtual machines that they provide on their official websites and I will guide you through properly downloading, installing and setting them up on your own computer, into VirtualBox
For running the Ubuntu 15.10 Virtual Machine I provide, dedicating 2 GB of RAM will be more than enough. For running the Arista vEOS VM, you should allocate another 1 GB of RAM, throughout the Arista section of the course. For emulating Cisco IOS XE and IOS XR, you will need 2+ GB additional RAM memory.
I am going to use only free software throughout the course: VirtualBox, Linux, free VMs etc.
Course description:
* Course Went Live: April 2016. Last Update: December 2016 *
* Full Apps, Quizzes, Cheat Sheets and Final Project are included *
Before you read what I have to say, see what my students are saying about my courses:
“What an incredible value and learning experience!” by Sean A.
“Excellent material. Kudos to a great instructor with a vast level of creativity.” by Anthony W.
“I can say this man is going on smoothly and perfectly, explaining in the most empirical/foundational way.” by Kolapo A.
Are you a network professional who wants to start automating network tasks and manage network devices from various vendors?
Are you looking to become a better network engineer and start learning network programming?
Or maybe you’re seeking a raise or even a career change?
Are you here after going through my “Python Network Programming – Part 1: Build 7 Python Apps” course, which became a bestseller on Udemy and other elearning platforms?
Join thousands of successful students who have decided to upgrade their networking skills and boost their careers using this Python Network Programming course series!
Thousands of satisfied students are enrolled in my Python Network Programming Courses across the Web
Last week over 300 students have joined my courses
The average rating is above 4.5 / 5 from hundreds of reviews
Update Dec 17, 2016: Added: Section 22: What’s Next?
Update Oct 12, 2016: Added: Video lecture: Asking For Help
“Highly Recommended!” by Noor
This course covers many networking vendors and network operating systems:
Cisco IOS
Cisco IOS XE
Cisco IOS XR
Juniper JunOS
Juniper (Netscreen) ScreenOS
Arista EOS
HP ProCurve OS
Avaya ERS OS
Save Time and Money by Writing Your Own Python Programs to Automate Daily Network Tasks. Grow Your Networking Career With Network Programming Skills! 
Bonus, You Might Even Get a Raise or a Better Job with These New Skills!
This Python Network Programming course is aimed at network professionals having little or no experience in network automation and a great desire to use Python and a lot of very smart Python modules to program their network devices and tasks. This hands-on Python Network Programming training walks you through lots of scenarios, networking vendors and useful tools to help you become the network engineer of 2017.
What’s so special about this course?
Well, let me tell you what’s this course all about:
Learning to use amazing Python libraries and tools to manage network devices easily and without writing too much code.
Connecting remotely to network devices and read/extract/display/store command output, using interactive Python scripts, on: Cisco IOS, Cisco IOS XE, Cisco IOS XR, Juniper JunOS, Juniper (Netscreen) ScreenOS, Arista EOS, HP ProCurve OS and Avaya ERS OS.
Configuring (one or more) network devices remotely, using interactive Python scripts, either by specifying the commands at the prompt OR asking the script to read them from one or more text files, on: Cisco IOS, Cisco IOS XE, Cisco IOS XR, Juniper JunOS, Juniper (Netscreen) ScreenOS, Arista EOS, HP ProCurve OS and Avaya ERS OS.
Executing configuration commands on multiple network devices from multiple vendors simultaneously.
Working with a device inventory in XML/JSON/CSV format and network metadata, while easily counting your registered network devices, getting device information by IP address, searching and listing device information, checking IP reachability and SSH availability and much more, using a simple but efficient tool in the Python interpreter or Linux shell.
Managing network device credentials in a centralized, encrypted file.
Managing access lists and converting access lists from one format to another using a very efficient tool (Example: Converting a Cisco IOS ACL to a Juniper JunOS ACL with a single command).
Automatically executing commands upon device login, right from the Linux shell.
Getting de
Reviews:
“I want to learn more about every tool mentioned. I am excited to learn more. I hope the instructor continues to provide more great information. The information should be useful for both network and security engineers.” (Randy Tubb)
“Good course , and instructor is very well explain each every point hats off” (Aruran Ganeshan)
“El contenido es muy didactico” (Alberto Arellano)
  About Instructor:
Mr. Mihai Catalin Teodosiu
Hi! My name is Mihai. I have a degree in Telecommunications and Information Technology from University Politehnica of Bucharest, Romania and also the CCNP, CCNA, CCDA, JNCIA and ISTQB CTFL certifications.
I have worked as a Network Quality Assurance Engineer since 2010, testing the OS for Nortel/Avaya L3 switches.
# 6+ years of experience in the Networking and Testing/Quality Assurance industries.
# Certified professional: Cisco, Juniper and International Software Testing Qualifications Board certifications.
# Teaching courses on Udemy, GNS3 Academy and other e-learning platforms.
# Thousands of satisfied students, 4.5 / 5 average course rating.
# Thousands of followers on LinkedIn, Twitter, Facebook and Blogger.
See what my students are saying about the kind of courses I created:
“What an incredible value and learning experience!” by Sean A.
“Excellent material. Kudos to a great instructor with a vast level of creativity.” by Anthony W.
“I can say this man is going on smoothly and perfectly, explaining in the most empirical/foundational way.” by Kolapo A.
In early 2014 I decided to start learning a programming language that could make my daily work easier and also boost my networking career. This is when I discovered that Python, being a great language for beginners, could help me write scripts that connect to network devices and perform various tasks I was doing manually, thus saving a lot of time. Also, working as a tester, I used Python to automate a lot of network tests, about 250, in a short period of time, again reducing my workload. For all the above reasons, and because there is a lot of Python material out there on the Internet that may seem discouraging for a beginner, I decided to build courses that take you from zero programming knowledge to advanced topics, all in a hands-on fashion.
In March 2015 I started working on my first course, “Python Network Programming – Part 1: Build 7 Python Apps”, without really knowing what online education is all about and with no prior teaching experience, either online or offline. I just wanted to try it, despite a lot of mental obstacles, a full-time job and zero experience in teaching people. I worked a lot on this first course, with countless hours of researching, recording and editing, but there was something that kept me going. In May 2015 my course was approved by Udemy and was an instant success, despite my low expectations. Gradually, the course became a bestseller on other e-learning platforms as well. One example I’m very proud of is the GNS3 Academy. In April 2016, I finished working on Part 2 of the Python Network Programming series, called “Multivendor Environment”, after about three months of researching, testing, summarizing, simplifying, recording, editing and structuring the course content. Finally, in September 2016, I released Part 3 of this course series, called “Python Network Programming – Part 3: Scapy & Security Tools”, which in my opinion is very interesting and exciting for anyone wanting to enter the world of network security in a 100% hands-on fashion. I put a lot of energy and effort into each of my courses, always paying special attention to the smallest details and trying to provide a lot of value to my students. I really think my courses will get better and better, both by learning from my mistakes and by carefully listening to the feedback I get from my students.
So, check out my courses and stay tuned for more!
Instructor Other Courses:
Python Network Programming – Part 3: Scapy & Security Tools – Mr. Mihai Catalin Teodosiu, Network Engineer | Python Enthusiast | CCNP | EFF Supporter (17) $10 $50
Python Network Programming – Part 1: Build 7 Python Apps
The post 80% off #Python Network Programming – Part 2: Multivendor Environment – $10 appeared first on Udemy Cupón/ Udemy Coupon/.
from Udemy Cupón/ Udemy Coupon/ http://coursetag.com/udemy/coupon/80-off-python-network-programming-part-2-multivendor-environment-10/ from Course Tag https://coursetagcom.tumblr.com/post/156987792973
0 notes