# Install MongoDB on Ubuntu
Check this tutorial for installing MongoDB on Ubuntu 16.04. It is a complete, step-by-step guide to installing MongoDB Community Edition on Linux (Ubuntu), including how to connect to a remote server and how to install the PHP driver on the Ubuntu platform.
Robo 3T connect to MongoDB

A few of our customers received the error 'MongoDB failed to load the list of databases' while trying to connect to a remote database using Robomongo (Robo 3T). Here at Bobcares, we have seen several such MongoDB errors as part of our Server Management Services for web hosts and online service providers. This error can occur for various reasons, including failed database user/password authentication and an outdated Robo 3T version. Let's look at the different suggestions our Support Engineers provide to resolve it.

How we resolve 'MongoDB failed to load the list of databases'

1. Re-check the database user/password. If database user/password authentication fails, this error can occur: the connection to the server IP may succeed while the database itself does not connect. Go to Connection Settings > Authentication, provide the database name, username, and password, then re-test the connection. If you connect through SSH (say, to an Ubuntu 18 Vagrant box), enter those credentials too, save the changes, and press the connect icon to check that the connection works.

2. Check user privileges. For Robomongo to list the databases/collections, we must connect with a user that has the 'listDatabases' privilege. If the user we are connecting as doesn't have proper privileges to list the databases, grant those privileges.

3. Upgrade Robo 3T. In some of the cases we saw, this error was fixed by just upgrading to the latest Robo 3T version.

(One reader hit this while building an ETL application that transfers data between MongoDB databases: a timer triggers every 5 seconds and migrates a particular date range, the timer is disabled while an ETL run is in progress, the source and target collection can be the same, and the target database can live on a different MongoDB server. Another was writing up how to install and set up MongoDB on Windows and connect it to Robo 3T, for lack of good content on the topic.)
Setting up a new connection on Robo 3T:

1) Find the connection information for your database in your control panel. (For Standard plans, the connection information is quite straightforward.)

2) Launch Robo 3T and open the 'Manage Connections' window, then click 'Create' to set up a new connection.

a) In the Connection tab, enter the connection information (server name and port, e.g. 27017).

b) In the Authentication tab, enter the MongoDB admin database username and password you created earlier.

3) If you want to secure the traffic to the database, configure the SSL settings. The connection string then takes the form mongodb://admin:<password>@<host>:27017/admin?ssl=true.

If the connection is refused, edit the bind_ip setting in /etc/mongod.conf to include the IP of the computer you're connecting from, or remove it altogether. Note that commenting out bind_ip can make your system vulnerable to security flaws; it is a better idea to add specific IP addresses than to open up your system to everything. In my case I commented out bind_ip, uncommented port, and was able to connect with the stock config file, whose comments include reminders such as: if you run MongoDB as a non-root user (recommended) you may need to create and set permissions for the data directory manually (e.g. if the parent directory isn't mutable by the MongoDB user), plus options to enable the HTTP interface (defaults to port 28017), inspect all client data for validity on receipt, and enable periodic logging of CPU utilization and I/O wait.
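As an aside, the shape of such a connection string is easy to build programmatically. Here is a small Python sketch (the function name and defaults are my own, not from the tutorial); credentials are percent-encoded so special characters in a password don't break the URI:

```python
from urllib.parse import quote_plus

def build_mongo_uri(user, password, host, port=27017, db="admin", ssl=False):
    """Assemble a MongoDB connection string like the one used in the
    Robo 3T settings above. Empty user means no credentials section."""
    auth = f"{quote_plus(user)}:{quote_plus(password)}@" if user else ""
    query = "?ssl=true" if ssl else ""
    return f"mongodb://{auth}{host}:{port}/{db}{query}"

print(build_mongo_uri("admin", "p@ss", "localhost"))
# credentials with '@' survive because of percent-encoding
```

If SSL is enabled in the Robo 3T settings, pass `ssl=True` to append the `?ssl=true` query string.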

Robo 3T free version

Tutorial to install Robo 3T, now Studio 3T Free, on Ubuntu 22.04 LTS Jammy Jellyfish using the command line, to get a graphical user interface for managing your MongoDB server instance.

Robo 3T is a MongoDB GUI tool maintained and provided by the developers of the MongoDB client Studio 3T (the paid edition, which comes with a 30-day trial). Formerly, Robo 3T was known as Robomongo, and it is now distributed as Studio 3T Free. It is a cross-platform MongoDB GUI management tool available for Windows, macOS, and Linux. Studio 3T is a paid tool, whereas Studio 3T Free is a free version with which users can build queries using drag and drop, generate driver code in seven languages, break down aggregation queries, and more. It allows CSV, JSON, SQL, and BSON import/export, MongoDB task scheduling, data masking for protection, data schema exploration, and real-time code auto-completion.

⇒ Native and cross-platform MongoDB manager
⇒ Visual tool helping you manage databases
⇒ Support for importing from MongoDB SRV connection strings
⇒ Support for the SCRAM-SHA-256 auth mechanism
⇒ Supported cloud platforms: MongoDB Atlas, Compose, mLab, ObjectRocket, ScaleGrid, Amazon EC2

Steps to install Robo 3T or Studio 3T Free on Ubuntu 22.04

The steps given here can also be used on other versions of Ubuntu such as 20.04 or 18.04, and on other Linux distributions such as Debian and Linux Mint.

1. Update Ubuntu 22.04. Let's install the latest available security updates for the system; this will also rebuild the system's APT package index. Open your terminal and run:

sudo apt update && sudo apt upgrade

2. Download Studio 3T Free. Unfortunately, the Studio 3T free version is not available to install from the standard repository of Ubuntu 22.04, so we have to download it manually from its website.

3. Extract the archive. Once you have downloaded Robo 3T, now known as Studio 3T Free, go to the Downloads directory, because whatever we download using the browser goes there. To confirm the downloaded file is present, use:

ls

As you've confirmed the Studio 3T Free file is there, extract it:

tar -xvf studio-3t-linux-x64.tar.gz

4. Run the installer. After extracting the tar archive, you will have a script to install Studio 3T Free on the Ubuntu 22.04 system. Run it to start the installation process:

sudo bash studio-3t-linux-x64.sh

As the script runs, a GUI setup wizard will open.

An Introduction to MongoDB
MongoDB is a cross-platform, open-source, NoSQL database, used by many modern Node-based web applications to persist data.
In this beginner-friendly tutorial, I’ll demonstrate how to install Mongo, then start using it to store and query data. I’ll also look at how to interact with a Mongo database from within a Node program, and also highlight some of the differences between Mongo and a traditional relational database (such as MySQL) along the way.
Terminology and Basic Concepts
MongoDB is a document-oriented database. This means that it doesn’t use tables and rows to store its data, but instead collections of JSON-like documents. These documents support embedded fields, so related data can be stored within them.
MongoDB is also a schema-less database, so we don’t need to specify the number or type of columns before inserting our data.
Here’s an example of what a MongoDB document might look like:
{
  _id: ObjectId(3da252d3902a),
  type: "Tutorial",
  title: "An Introduction to MongoDB",
  author: "Manjunath M",
  tags: [ "mongodb", "compass", "crud" ],
  categories: [
    { name: "javascript", description: "Tutorials on client-side and server-side JavaScript programming" },
    { name: "databases", description: "Tutorials on different kinds of databases and their management" }
  ],
  content: "MongoDB is a cross-platform, open-source, NoSQL database..."
}
As you can see, the document has a number of fields (type, title etc.), which store values (“Tutorial”, “An Introduction to MongoDB” etc.). These values can contain strings, numbers, arrays, arrays of sub-documents (for example, the categories field), geo-coordinates and more.
The _id field name is reserved for use as a primary key. Its value must be unique in the collection, it’s immutable, and it may be of any type other than an array.
Tip: for those wondering what “JSON-like” means, internally Mongo uses something called BSON (short for Binary JSON). In practice, you don’t really need to know much about BSON when working with MongoDB.
As you might guess, a document in a NoSQL database corresponds to a row in an SQL database. A group of documents together is known as a collection, which is roughly synonymous with a table in a relational database.
Here’s a table summarizing the different terms:
SQL Server    MongoDB
Database      Database
Table         Collection
Row           Document
Column        Field
Index         Index
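To make the row-vs-document contrast concrete, here is a tiny Python illustration (values borrowed from the example document above; nothing here is a MongoDB API). A relational row is positional and schema-bound, while a document is self-describing and can embed related data directly:

```python
# The same record, relational-style vs document-style (illustrative only).
columns = ("_id", "type", "title")                                  # fixed schema
row = ("3da252d3902a", "Tutorial", "An Introduction to MongoDB")    # positional values

document = {                      # self-describing fields, no fixed schema
    "_id": "3da252d3902a",
    "type": "Tutorial",
    "title": "An Introduction to MongoDB",
    "tags": ["mongodb", "compass", "crud"],   # embedded array: no join table needed
}

# The scalar part of the document is just the row zipped with its column names:
assert {c: v for c, v in zip(columns, row)} == {k: document[k] for k in columns}
```

The `tags` field shows what the table can't easily express: a one-to-many relationship stored inline instead of in a separate joined table.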
If you’re starting a new project and are unsure whether to choose Mongo or a relational database such as MySQL, now might be a good time to read our tutorial SQL vs NoSQL: How to Choose.
With that said, let’s go ahead and install MongoDB.
Installing MongoDB
Note: if you’d just like to follow along with this tutorial without installing any software on your PC, there are a couple of online services you can use. Mongo playground, for example, is a simple sandbox to test and share MongoDB queries online.
MongoDB comes in various editions. The one we’re interested in is the MongoDB Community Edition.
The project’s home page has excellent documentation on installing Mongo, and I won’t try to replicate that here. Rather, I’ll offer you links to instructions for each of the main operating systems:
Install MongoDB Community Edition on Windows
Install MongoDB Community Edition on macOS
Install MongoDB Community Edition on Ubuntu
If you use a non-Ubuntu-based version of Linux, you can check out this page for installation instructions for other distros. MongoDB is also normally available through the official Linux software channels, but sometimes this will pull in an outdated version.
Post Installation Configuration
Once you have MongoDB installed for your system, you might encounter this error:
dbpath (/data/db) does not exist. Create this directory or give existing directory in --dbpath. See http://dochub.mongodb.org/core/startingandstoppingmongo
This means that Mongo can’t find (or access) the directory it uses to store its databases. This is pretty easy to remedy:
sudo mkdir -p /data/db
sudo chown -R `id -un` /data/db

The first command creates the /data/db directory. The second sets permissions so that Mongo can write to that directory.
Install the Compass GUI
We’ll be using the command line in this tutorial, but MongoDB also offers a tool called Compass to connect to and manage your databases using a GUI.
If you’re on Windows, Compass can be installed as part of the main Mongo installation (just select the appropriate option from the wizard). Otherwise, you can download Compass for your respective OS here.
This is what it looks like:
The Mongo Shell
We can test our installation by opening the Mongo shell. You can do this by opening a terminal window and typing mongo.
Note: this assumes that <mongodb installation dir>/bin is in your path. If for any reason this isn’t the case, change into the <mongodb installation dir>/bin directory and rerun the command.
If you get an Error: couldn't connect to server error, you’ll need to start the Mongo server (in a second terminal window) with the command mongod.
Once you’re in the Mongo shell, type in db.version() to see the version of MongoDB you’re running. At the time of writing, this should output 4.2.2.
Please note that you can exit the Mongo shell by running quit() and the Mongo daemon by pressing Ctrl + C at any time.
Now let’s get acquainted with some MongoDB basics.
Basic Database Operations
Enter the Mongo shell if you haven’t already (by typing mongo into a terminal):
[mj@localhost ~]$ mongo
MongoDB shell version v4.2.2
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("08a624a0-b330-4233-b56b-1d5b15a48fea") }
MongoDB server version: 4.2.2
Let’s start off by creating a database to work with. To create a database, MongoDB has a use DATABASE_NAME command:
> use exampledb
switched to db exampledb
To display all the existing databases, try show dbs:
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
The exampledb isn’t in the list because we need to insert at least one document into the database. To insert a document, you can use db.COLLECTION_NAME.insertOne({"key":"value"}). Here’s an example:
> db.users.insertOne({name: "Bob"})
{
  "acknowledged" : true,
  "insertedId" : ObjectId("5a52c53b223039ee9c2daaec")
}
MongoDB automatically creates a new users collection and inserts a document with the key–value pair 'name':'Bob'. The ObjectId returned is the ID of the document inserted. MongoDB creates a unique ObjectId for each document on creation, and it becomes the default value of the _id field.
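The ObjectId itself has a documented layout: a 4-byte Unix timestamp, 5 random bytes, and a 3-byte incrementing counter, rendered as 24 hex characters. Here is a rough Python sketch of that layout (illustrative only; real drivers keep the random portion fixed for the life of the process):

```python
import os
import struct
import time

_counter = int.from_bytes(os.urandom(3), "big")  # randomly seeded counter

def new_object_id():
    """Build a 24-hex-char ObjectId-like string: 4-byte timestamp,
    5 random bytes, 3-byte big-endian counter."""
    global _counter
    _counter = (_counter + 1) % (1 << 24)
    ts = struct.pack(">I", int(time.time()))
    rand = os.urandom(5)               # drivers reuse one value per process
    cnt = _counter.to_bytes(3, "big")
    return (ts + rand + cnt).hex()

print(new_object_id())
```

Because the timestamp leads, ObjectIds generated over time sort roughly by creation order, which is why they work well as default primary keys.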
Now we should be able to see our database:
> show dbs
admin      0.000GB
config     0.000GB
exampledb  0.000GB
local      0.000GB
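This create-on-first-insert behavior can be modeled in a few lines of Python (a toy model of the shell semantics, not how the server is implemented):

```python
class LazyMongo:
    """Toy model of the mongo shell: 'use' only switches context;
    a database materializes the first time something is inserted."""

    def __init__(self):
        self.dbs = {}          # name -> {collection -> [documents]}
        self.current = None

    def use(self, name):
        self.current = name    # no database is created here

    def insert(self, collection, doc):
        colls = self.dbs.setdefault(self.current, {})
        colls.setdefault(collection, []).append(doc)

    def show_dbs(self):
        return sorted(self.dbs)
```

After `use("exampledb")` the database list is still empty; only an insert makes it appear, matching the shell transcript above.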
Similarly, you can confirm that the collection was created using the show collections command:
> show collections
users
We’ve created a database, added a collection named users and inserted a document into it. Now let’s try dropping it. To drop an existing database, use the dropDatabase() command, as exemplified below:
> db.dropDatabase()
{ "dropped" : "exampledb", "ok" : 1 }
show dbs confirms that the database was indeed dropped:
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
For more database operations, please consult the MongoDB reference page on database commands.
The post An Introduction to MongoDB appeared first on SitePoint.
by Manjunath M via SitePoint https://ift.tt/2up1ygl
Install MongoDB on Ubuntu Server 18.04

If your company is in the business of using, handling, or depending on data, chances are you're in need of a document-oriented, NoSQL database. If you're unfamiliar with the term, a NoSQL database is a non-relational database that doesn't use tables filled with rows and columns. Instead, it uses a storage model optimized specifically for the data. These types of databases offer scalability, flexibility, data distribution, and processing speed that relational databases can't match.

One NoSQL database is MongoDB. It has been adopted by big data and enterprise companies including Adobe, Craigslist, eBay, FIFA, Foursquare, and LinkedIn. MongoDB comes in both an enterprise and a community edition. I'll be demonstrating with the open-source community edition, installing it on Ubuntu Server 18.04. This edition can be installed from the standard repositories; however, that will likely install an outdated version. Because of that, I'll show how to deploy a version from the official MongoDB repository. This will install:

mongodb-org (the meta-package that installs everything below)
mongodb-org-server (the mongod daemon)
mongodb-org-mongos (the mongos daemon)
mongodb-org-shell (the mongo shell)
mongodb-org-tools (the MongoDB tools package, which includes import, dump, export, files, performance, restore, and stats tools)

Do note that this package only supports 64-bit architecture and LTS (Long Term Support) versions of Ubuntu (so 14.04, 16.04, and 18.04). Once installed, your MEAN stack development team (or whatever sector your business serves) can begin developing for big data.
Update/Upgrade
When installing a major application or service, it's always best to first run an update/upgrade on the server. Not only does this ensure you have the most recent software, it also applies any security patches. Do note, however, that if the kernel is updated in this process, you will need to reboot the machine before the updates take effect. To update and upgrade Ubuntu, log into the server and issue the following two commands:

sudo apt-get update
sudo apt-get upgrade -y

Once the update and upgrade complete, reboot your server (if required). You are now ready to install MongoDB.
Adding the Repository
The first thing to be done is the addition of the necessary MongoDB repository. To do this, log into your Ubuntu server. From the command line, add the required MongoDB key with the command:

wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

If you see an error regarding the wget command, install that tool with:

sudo apt-get install wget

Once you've added the key, create a new apt source list file with the command:

echo "deb https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb.list
Installation
Now it's time to install MongoDB. Update apt with the command:

sudo apt-get update

Once apt is updated, install MongoDB with the command:

sudo apt-get install mongodb-org -y
Starting and Enabling the Community Edition
With the database installed, you'll want to start it and enable it to run upon server reboot. Otherwise, you'll have to start it manually every time the server is restarted. To start the MongoDB database engine, issue the command:

sudo systemctl start mongod

You'll then want to enable MongoDB with the command:

sudo systemctl enable mongod
Using MongoDB
In order to start working with Mongo, issue the command:

mongo

If you get an error status 62, it means that the version of MongoDB is too new for your server. If that's the case, you'll need to uninstall the latest version and install the version from the official Ubuntu repositories. Here are the steps for that process:

Remove the latest version with the command sudo apt-get purge mongodb-org.
Remove any extra dependencies with the command sudo apt-get autoremove.
Install the older version of MongoDB with the command sudo apt-get install mongodb -y.

At this point, you should have access to the MongoDB command prompt (Figure 1) by issuing the mongo command.

Figure 1: The MongoDB command prompt.

Let's say you want to create a new database. Unlike relational databases, you don't use the CREATE command. Instead, you simply issue the use command like so:

use DATABASE

where DATABASE is the name of the database to be created. This doesn't actually create the database; to finalize it, you must insert data into the new database. Say you create a database named albums. You can then insert data into that database with the command:

db.artists.insert({artistname: "Devin Townsend" })

The above command inserts the string "Devin Townsend", associated with the key artistname, into the database albums. You should see WriteResult({ "nInserted" : 1 }) as a result (Figure 2).

Figure 2: Successful insertion of data into the new database.

And that's all there is to installing MongoDB on Ubuntu 18.04 and creating your first database. For more information on using MongoDB, make sure to read the official documentation for the release you've installed.

So, a little sidetrack from WordPress in this post, but I thought it worth a mention, MongoDB being used in many Node and other dev environments, and I use Lubuntu, which of course is not dissimilar. Also, if you enjoyed this post, why not check out this article on 8 Reasons for Slow Speeds in New York City!
Post by Xhostcom Wordpress & Digital Services, subscribe to newsletter for more! Read the full article
Profiling slow-running queries in Amazon DocumentDB (with MongoDB compatibility)
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. You can use the same MongoDB 3.6 application code, drivers, and tools to run, manage, and scale workloads on Amazon DocumentDB without having to worry about managing the underlying infrastructure. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data.

AWS built Amazon DocumentDB to uniquely solve your challenges around availability, performance, reliability, durability, scalability, backup, and more. In doing so, we built several tools, like the profiler, to help you analyze your workload on Amazon DocumentDB. The profiler gives you the ability to log the time and details of slow-running operations on your cluster. In this post, we show you how to use the profiler in Amazon DocumentDB to analyze slow-running queries, identify bottlenecks, and improve individual query performance and overall cluster performance.

Prerequisites

To use the Amazon DocumentDB profiler, create an AWS Cloud9 environment and an Amazon DocumentDB cluster. You use the AWS Cloud9 terminal to run queries against the cluster. For instructions on setting up your environment and creating your cluster, see Getting Started with Amazon DocumentDB (with MongoDB compatibility); Part 2 – using AWS Cloud9.

Solution overview

The solution described in this post includes the following tasks:

1. Turn on the profiler for your cluster to profile slow-running queries.
2. Load a dataset and run sample queries.
3. Use Amazon CloudWatch Logs to analyze logs and identify bottlenecks for slow-running queries.
4. Improve performance of slow-running queries by adding a missing index.

Turning on the profiler

To enable the profiler, you must first create a custom cluster parameter group. A cluster parameter group is a group of settings that determine how your cluster is configured.
When you provision an Amazon DocumentDB cluster, it's provisioned with the default cluster parameter group. The default settings are immutable; to make changes to your cluster, you need a custom parameter group.

1. On the Amazon DocumentDB console, choose your parameter group. If you don't have one, you can create a cluster parameter group.
2. Select the profiler parameter and change its value to enabled. You can also modify the profiler_sampling_rate and profiler_threshold_ms parameters based on your preferences. profiler_sampling_rate is the fraction of slow operations that should be profiled or logged. profiler_threshold_ms is a threshold in milliseconds; all commands that take longer than profiler_threshold_ms are logged. For more information about parameters, see Enabling the Amazon DocumentDB Profiler. For this post, I set profiler_sampling_rate to 1 and profiler_threshold_ms to 50.
3. Select the cluster you want to turn the profiler on for, and choose the Configuration tab.
4. Choose Modify.
5. For Cluster parameter group, choose your custom parameter group. If you're already using a custom parameter group, you can skip this step.
6. For Log exports, select Profiler logs. This enables your cluster to export the logs to CloudWatch. If you have already enabled log exports for profiler logs, you can skip this step.
7. For When to apply modifications, select Apply immediately. Alternatively, you can apply the change during your next scheduled maintenance window.
8. Choose Modify cluster.

You now need to reboot your instances to apply the new parameter group.

9. On the Instances page, select your instance.
10. From the Actions drop-down menu, choose Reboot.

You have successfully turned on the profiler and turned on log exports to CloudWatch.

Loading a dataset

For this post, I load a sample dataset onto my Amazon DocumentDB cluster and run some queries.
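As a toy model of how the two profiler parameters interact (my own sketch, not DocumentDB code): an operation is considered for logging only if it exceeds profiler_threshold_ms, and is then sampled at the profiler_sampling_rate fraction.

```python
import random

def should_log(duration_ms, threshold_ms=50, sampling_rate=1.0, rng=random.random):
    """Return True if an operation of duration_ms would be written to the
    profiler log: it must exceed the threshold, and then pass a coin flip
    weighted by sampling_rate. rng is injectable for deterministic tests."""
    return duration_ms > threshold_ms and rng() < sampling_rate
```

With the settings used in this post (threshold 50 ms, sampling rate 1), every operation slower than 50 ms is logged; lowering the sampling rate trades completeness for less log volume on busy clusters.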
The dataset consists of JSON documents that capture data related to the spread and characteristics, case count, and location info of the novel coronavirus (SARS-CoV-2).

Download the dataset with the following code:

wget -O cases.json https://raw.githubusercontent.com/aws-samples/amazon-documentdb-samples/master/datasets/cases.json

Load the dataset using mongoimport:

mongoimport --ssl --host= --collection=daily_cases --db=testdb --file=cases.json --numInsertionWorkers 4 --username= --password= --sslCAFile rds-combined-ca-bundle.pem

Use mongoimport version 3.6.18 with Amazon DocumentDB. If you don't have mongoimport, you can install it from the MongoDB download center.

To install mongoimport 3.6 on Amazon Linux, run:

echo -e "[mongodb-org-3.6]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.6/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc" | sudo tee /etc/yum.repos.d/mongodb-org-3.6.repo
sudo yum install mongodb-org-tools-3.6.18

To install mongoimport 3.6 on Ubuntu:

echo 'deb https://repo.mongodb.org/apt/ubuntu '$codename'/mongodb-org/3.6 multiverse' | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
sudo apt-get update
sudo apt-get install -y mongodb-org-tools=3.6.18

For more information on using mongoimport, see Dumping, Restoring, Importing, and Exporting Data.

To export a log to CloudWatch, I need to run a query that takes longer than 50 milliseconds. To do that, I run two queries on my dataset, using the mongo shell in my AWS Cloud9 environment. For installation instructions, see Installing the mongo shell.
Run the first query:

db.daily_cases.find({"Cases": 1068})

Run the second query:

db.daily_cases.aggregate([{ $match: { Cases: { $gte: 100, $lt: 1000 } }}, { $group: { _id: null, count: { $sum: 1 } } } ]);

Analyzing slow-running query logs in CloudWatch

To view the logs of your slow-running queries, complete the following steps:

1. On the CloudWatch console, under Logs, choose Log groups.
2. Select the log group associated with your cluster. The log group should have the following format: /aws/docdb//profiler. It can take up to 1 hour for the log group to show up after enabling the profiler and turning on log exports.
3. Select the log stream. Typically, every Amazon DocumentDB instance has its own log stream. For this use case, because you only have one instance, you only see one log stream.
4. Set the time period filter to your desired value. For this use case, I filter it to 1 hour; in the following screenshot, you can see the two queries I ran earlier. It typically takes 1–2 minutes for your queries to show up in the log events.

You can now analyze the logs. The following two screenshots are EXPLAIN plans for the queries we ran. The plan outlines the stages and times for a query, and helps you discover potential bottlenecks. Looking at the logs, you can see that the first query took 485 milliseconds, and the second query took 559 milliseconds. Because both took longer than 50 milliseconds, they showed up in the profiler logs.

Improving performance by adding an index

When you analyze the query logs, you can identify bottlenecks. The profiler logged both queries we ran earlier in CloudWatch because the runtime was over 50 milliseconds. From the logs, you can see that both queries perform a COLLSCAN, which is a full collection sequential scan. COLLSCANs are often slow because you read your entire dataset.
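To see why that matters, here is a self-contained Python sketch (toy structures of my own, not DocumentDB internals) comparing how many entries a full scan and an index scan each examine for the same equality query:

```python
from bisect import bisect_left, bisect_right

def collscan(docs, value):
    """Full collection scan: every document is examined."""
    matches = [d for d in docs if d["Cases"] == value]
    return matches, len(docs)  # (results, documents examined)

def ixscan(index, value):
    """Index scan over a sorted (key, doc) list: binary search narrows
    the work to just the window of matching keys."""
    keys = [k for k, _ in index]
    lo, hi = bisect_left(keys, value), bisect_right(keys, value)
    return [d for _, d in index[lo:hi]], hi - lo

docs = [{"Cases": c} for c in [5, 1068, 300, 1068, 7]]
index = sorted(((d["Cases"], d) for d in docs), key=lambda p: p[0])
```

On this tiny collection the collscan touches all 5 documents while the ixscan touches only the 2 matching index entries; on a real dataset the gap is what turns 485 ms into fractions of a millisecond.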
To reduce COLLSCANs, you can create indexes on fields that have a high degree of uniqueness and are queried frequently for a certain value or range of values. For this use case, we can reduce COLLSCANs by creating an index on Cases:

db.daily_cases.createIndex({Cases:1})

For more information about indexes, see Working with Indexes.

You can now rerun the queries to observe the new times. Ideally, the queries should run faster because the indexes prevent collection scans. To view the query plan and the amount of time a query took at runtime, you can add explain("executionStats") to your query.

To rerun the first query, enter the following code:

db.daily_cases.find({"Cases": 1068}).explain("executionStats")

The following screenshot shows the query plan.

To rerun the second query, enter the following code:

db.daily_cases.aggregate([{ $match: { Cases: { $gte: 100, $lt: 1000 } }}, { $group: { _id: null, count: { $sum: 1 } } } ]);

The following screenshot shows the query plan. Instead of COLLSCANs, the queries now use IXSCANs and IXONLYSCANs. Instead of sequentially scanning the entire collection, the query optimizer uses indexes, which reduces the number of documents that need to be read to answer the query. This improved the runtimes: the first query went from 485 milliseconds to 0.35 milliseconds, and the second query went from 559 milliseconds to 70 milliseconds.

Conclusion

In this post, you learned how to turn on the profiler in Amazon DocumentDB to analyze slow-running queries, identify performance issues and bottlenecks, and improve individual query performance and overall cluster performance. For more information, see Profiling Amazon DocumentDB Operations and Performance and Resource Utilization. If you have any questions or comments about this post, please use the comments section. If you have any feature requests for Amazon DocumentDB, email us at [email protected].

About the Author

Meet Bhagdev is a Sr.
Product Manager with Amazon Web Services. https://aws.amazon.com/blogs/database/profiling-slow-running-queries-in-amazon-documentdb-with-mongodb-compatibility/
Growing a Misskey instance on localhost from WSL (Ubuntu 18.04)

Preface

I've been feeling like spinning up my own Misskey instance...

Right, I have WSL, so let's host it myself!

I stumbled over various things, but with support from syuilo (@syuilo) it ended up working fine.

For the basics, refer to this article.

If that article doesn't get things working for you, try this article or this one instead.

I built this in an "SSL is a pain, and Elasticsearch? Never heard of it" mood, so the optional parts are omitted. Please bear with me.
Prerequisites (environment)

OS: WSL (Ubuntu 16.04, force-upgraded to 18.04). Make sure you have sudo privileges.
Misskey: commit 9fdb125960dc61ae26fa16cb90832ac6b95cbb63 (HEAD -> master, origin/master, origin/HEAD)
npm: 6.1.0
Nodejs: v8.10.0
MongoDB: v3.6.3 ~~on 18.04 the repository is newer, so a 3.x.x version is installed by default~~
Redis server: v=4.0.9
ImageMagick (installed casually via apt, probably imagemagick-6*; note that Misskey requires the ImageMagick 7 series, so with 6 image uploads fail, as they did for me)

Preparation

1. Misskey uses Google's reCAPTCHA, so visit the reCAPTCHA site and create keys in advance (you register an app and use reCAPTCHA from it, much like registering a Twitter app to customize the "via" string).

* For "Choose type of reCAPTCHA", pick reCAPTCHA v2
* Set the domain to "localhost"
* Agree to the terms of service and generate the keys
* Note down the displayed site key and secret key; they are used when configuring the build.

2. (Optional) Create a VAPID key pair:
$ sudo npm install web-push -g
$ sudo web-push generate-vapid-keys
Building ImageMagick 7

1. From the linked page, download the dependency libraries:
zlib
libpng
jpegsrc
libxml2 (download the tar.gz form of each).

2. Extract each dependency with $ tar xf <library>.tar.gz.

3. Move into the extracted directory with $ cd <library>.

4. Run $ ./configure && make -j 4 && sudo make install && cd ../ (if this fails here, a dependency of the dependency may not be installed, so try searching for the error message).

5. Repeat steps 2 through 4 in the order zlib → libpng → jpegsrc → libxml2.

6. Download the ImageMagick 7 series from the linked page.

7. As with the dependency libraries, run
$ tar xf ImageMagick-<version>.tar.gz
$ cd ImageMagick-<version>
$ ./configure && make -j 4 && sudo make install && cd ../
to install ImageMagick (if this fails here, see step 4).

(Addendum) During the ImageMagick 7 build the compiler complained loudly that it had no idea what png_get_eXIf_1 was, so I installed libpng 1.6.34. Lesson learned.

Further addendum (2018/06/02 22:21): the magick command wasn't recognized; a quick search turned up an article that, once followed, got everything working nicely.

Configuring MongoDB
$ sudo mongod
$ mongo
> use admin
> db.createUser({ user: "<admin username>", pwd: "<admin password>", roles: [{ role: "userAdminAnyDatabase", db: "admin" }]})
> exit
$ sudo service mongod restart
$ mongo
> use admin
> db.auth("admin", "adminpwd")
> use misskey
> db.createUser({ user: "<DB username>", pwd: "<DB password>", roles: [{ role: "readWrite", db: "<Misskey DB name>" }]})
> exit
~~this is getting sloppy~~

Installing Misskey

Just git clone master as usual:
$ git clone -b master git://github.com/syuilo/misskey.git
$ cd misskey
$ sudo npm install
Configuring the build
$ sudo npm run config
By this point, have redis-server and mongod running in the background. Enter the username and password of the misskey database user you created in mongo; if you set a password for Redis, enter that as well. Since we are not using SSL, the port is set to 80.
Building Misskey
$ sudo npm install -g node-gyp
$ sudo node-gyp configure
$ sudo node-gyp build
$ sudo npm run build
Starting Misskey
sudo npm run start
will get you started. Open a browser and visit http://localhost. If the page renders, great. If you can register and upload an image, perfect. Good work!
Link
This cheat sheet is based on the latest version of MongoDB, which I had initially set up without encountering any problems. However, that all changed when I was working remotely and forgot that I had disabled the port on my network, rather than on the server. So I went poking around where I didn't need to and ended up messing things up. The best tutorial out there is of course the Digital Ocean one for my setup on Ubuntu 16.04; this article isn't so much about getting set up. I just wanted to include it because I found myself looking back at it to see if I had missed a step. Bugs/Problems: now come all the messed-up things I found and had to deal with.
firewall - Port seems to be open, but connection refused
try using 0.0.0.0 for bind_ip in /etc/mongod.conf
try sudo ufw disable and check whether it works without the firewall (it probably still won't, but it's worth a shot), then just sudo ufw enable again. Make sure port 27017 is open.
Then check this linux - Connection refused to MongoDB errno 111
If your issue starts with this config (Can't connect to MongoDB with authentication enabled):
security: authorization: enabled
Start by looking at the user you are using; try creating it with the root role rather than the documentation's userAdminAnyDatabase.
use admin db.createUser( { user: "admin", pwd: "password", roles: [ { role: "root", db: "admin" } ] } ); exit;
- mongodb-admin-user-not-authorized Warning/Error (Assertion: 28595:13: Permission denied)
# storage.dbPath sudo chown -R mongodb:mongodb /var/lib/mongodb # systemLog.path sudo chown -R mongodb:mongodb /var/log/mongodb
Similar issue with:
"MongoDB: exception in initAndListen: 20 Attempted to create a lock file on a read-only directory: /data/db, terminating". For my machine, I applied the same chown fix to the folder where my data lived, /data/db. Note: I'm not sure you actually need to worry about these warnings; I was stuck and grasping at straws as to what the issue was.
The Finale
After all this I was back up; it turned out to be a mix of multiple things, and I'll be sure to look out for more when I split my database up. Side note: the GUIs I am using (because I haven't figured out which one I like more) are Robo 3T and Mongobooster.
Commands of Importance
mongo --port 27017 -u "myUserAdmin" -p "abc123" --authenticationDatabase "admin"
db.changeUserPassword("accountUser", "password")
db.getUsers()
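The shell flags above map directly onto a connection URI when connecting from application code instead of the mongo shell. A minimal sketch (the password here is hypothetical; authSource plays the role of --authenticationDatabase):

```javascript
// Build a MongoDB connection URI from the shell credentials above.
// The password is made up; characters like "@" must be percent-encoded
// or they break URI parsing.
const user = "myUserAdmin";
const pwd = "abc@123";
const uri = `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pwd)}` +
            "@127.0.0.1:27017/?authSource=admin";
console.log(uri);
// mongodb://myUserAdmin:abc%40123@127.0.0.1:27017/?authSource=admin
```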
For a clean install,
sudo apt-get purge mongodb-org*
sudo apt-get autoremove
sudo rm -r /var/log/mongodb
sudo rm -r /var/lib/mongodb
Text
flask docker vagrant mac inceptions
First, some Vagrant preparation stuff, because Docker doesn't play well on a Mac.
$ vagrant up
$ vagrant plugin install vagrant-vbguest
$ vagrant ssh
$ sudo apt-get install -y virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11
$ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
The Vagrantfile I used looks like this.
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.provider :virtualbox do |provider|
    provider.check_guest_additions = false
    provider.functional_vboxsf = false
    provider.memory = 1024
    provider.cpus = 1
  end

  config.vm.define "pewpew" do |pewpew|
    pewpew.vm.box = "ubuntu/trusty64"
    pewpew.vm.box_check_update = false
    pewpew.vm.box_download_insecure = true
    pewpew.vm.network "private_network", ip: "192.168.50.14", netmask: "255.255.255.0"
    pewpew.vm.hostname = "pewpew.mydomain.com"
    pewpew.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true     # <-- nginx
    pewpew.vm.network "forwarded_port", guest: 443, host: 8082, auto_correct: true    # <-- nginx
    pewpew.vm.network "forwarded_port", guest: 5000, host: 5000, auto_correct: true   # <-- flask / gunicorn
    pewpew.vm.network "forwarded_port", guest: 27017, host: 27017, auto_correct: true # <-- mongodb
    pewpew.vm.network "forwarded_port", guest: 2376, host: 2376, auto_correct: true   # <-- docker-machine
    pewpew.vm.network "forwarded_port", guest: 8081, host: 8081, auto_correct: true   # <-- image-generator
    pewpew.vm.synced_folder "~/Vagrant/docker/", "/srv/", owner: "root", group: "root"
    # add swap space
    pewpew.vm.provision :shell, inline: "fallocate -l 2G /swapfile && chmod 0600 /swapfile && mkswap /swapfile && swapon /swapfile && echo '/swapfile none swap sw 0 0' >> /etc/fstab"
    pewpew.vm.provision :shell, inline: "echo vm.swappiness = 10 >> /etc/sysctl.conf && echo vm.vfs_cache_pressure = 50 >> /etc/sysctl.conf && sysctl -p"
  end

  config.ssh.username = "vagrant"
  config.ssh.pty = true

  config.vm.provision "shell" do |shell|
    shell.privileged = true
    shell.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
  end
end
My /etc/hosts file contains this line:
192.168.50.14 pewpew.mydomain.com
My working directory on my mac looks like this:
$ tree
.
├── Vagrantfile
├── app
│   ├── Dockerfile
│   ├── index.py
│   └── pewpew.wsgi
├── db
│   └── Dockerfile
└── rp
    ├── Dockerfile
    └── site.conf
This directory is mounted into the Vagrant vm at /srv.
Let's go through each file:
--- /srv/app/Dockerfile ---
FROM python:2.7
RUN pip install --no-cache-dir Flask==0.10.1
RUN pip install --no-cache-dir gunicorn==19.3.0
RUN pip install --no-cache-dir eventlet==0.17.4
RUN pip install --no-cache-dir pymongo==3.4.0
COPY index.py /app/
COPY pewpew.wsgi /app/
EXPOSE 5000
WORKDIR /app
CMD ["gunicorn", "-k", "eventlet", "-b", "0.0.0.0:5000", "-w", "1", "index:app"]

--- /srv/app/index.py ---
import os
from flask import Flask
from pymongo import MongoClient

app = Flask(__name__)
db = "mongodb"
client = MongoClient(db, 27017)

@app.route("/")
def hello():
    try:
        server_info = client.server_info()
        db_names = client.database_names()
        client.close()
        return "Pew Pew!\n%s\n%s\n" % (server_info, db_names)
    except:
        return "Pew Pew! DB failing...\n"

if __name__ == '__main__':
    app.run()

--- /srv/app/pewpew.wsgi ---
import sys
PROJECT_DIR = '/app/'
sys.path.append(PROJECT_DIR)
from index import app as application

--- /srv/db/Dockerfile ---
FROM mongo
EXPOSE 27017

--- /srv/rp/Dockerfile ---
FROM nginx
COPY site.conf /etc/nginx/conf.d/site.conf
EXPOSE 80 443

--- /srv/rp/site.conf ---
server {
    listen 80;
    server_name pewpew.mydomain.com;
    access_log /var/log/nginx/nginx_access_myapp.log;
    error_log /var/log/nginx/nginx_error_myapp.log;
    location / {
        proxy_pass http://flaskapp:5000/;
    }
}
OK, let's start. SSH into the Vagrant box and check the kernel; to use Docker you need kernel 3.10 or newer.
$ vagrant ssh
$ uname -r
3.13.0-98-generic
Ok cool, install the docker daemon.
$ sudo curl -sSL https://get.docker.com/ | sh
Now let's build these images from the three Dockerfiles we have.
$ sudo docker build -t reverseproxy /srv/rp/
$ sudo docker build -t flaskapp /srv/app/
$ sudo docker build -t mongodb /srv/db/
$ docker images
REPOSITORY     TAG      IMAGE ID       CREATED             SIZE
reverseproxy   latest   fa2ead9fdb67   11 minutes ago      107MB
flaskapp       latest   48ce64a24bea   About an hour ago   681MB
nginx          latest   b8efb18f159b   12 days ago         107MB
python         2.7      fa8e55b2235d   13 days ago         673MB
mongo          latest   b39de1d79a53   13 days ago         359MB
Start the database container first.
$ docker run -d -e DB_PORT_27017_TCP_ADDR='0.0.0.0' -v /srv/db:/data -p 27017:27017 --name mongodb mongo
$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS        PORTS                      NAMES
c610b1a11752   mongo   "docker-entrypoint..."   3 seconds ago   Up 1 second   0.0.0.0:27017->27017/tcp   mongodb
Then start the flask application container.
$ docker run -d -p 5000:5000 --name flaskapp --link mongodb:mongodb flaskapp
$ docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS          PORTS                      NAMES
ebf6ba70b2f8   flaskapp   "gunicorn -k event..."   2 seconds ago    Up 1 second     0.0.0.0:5000->5000/tcp     flaskapp
c610b1a11752   mongo      "docker-entrypoint..."   24 seconds ago   Up 23 seconds   0.0.0.0:27017->27017/tcp   mongodb
Send a request to the app
$ curl http://127.0.0.1:5000
Pew Pew!
{u'storageEngines': [u'devnull', u'ephemeralForTest', u'mmapv1', u'wiredTiger'], u'maxBsonObjectSize': 16777216, u'ok': 1.0, u'bits': 64, u'modules': [], u'openssl': {u'compiled': u'OpenSSL 1.0.1t 3 May 2016', u'running': u'OpenSSL 1.0.1t 3 May 2016'}, u'javascriptEngine': u'mozjs', u'version': u'3.4.6', u'gitVersion': u'c55eb86ef46ee7aede3b1e2a5d184a7df4bfb5b5', u'versionArray': [3, 4, 6, 0], u'debug': False, u'buildEnvironment': {u'cxxflags': u'-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++11', u'cc': u'/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0', u'linkflags': u'-pthread -Wl,-z,now -rdynamic -Wl,--fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,--build-id -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro', u'distarch': u'x86_64', u'cxx': u'/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0', u'ccflags': u'-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp', u'target_arch': u'x86_64', u'distmod': u'debian81', u'target_os': u'linux'}, u'sysInfo': u'deprecated', u'allocator': u'tcmalloc'}
[u'admin', u'local']
Nice! Now let's try with the nginx container.
$ docker run -d -p 80:80 --name reverseproxy --link flaskapp:flaskapp reverseproxy
$ sudo docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                  PORTS                        NAMES
716f3c7c321c   reverseproxy   "nginx -g 'daemon ..."   1 second ago     Up Less than a second   0.0.0.0:80->80/tcp, 443/tcp  reverseproxy
ebf6ba70b2f8   flaskapp       "gunicorn -k event..."   19 seconds ago   Up 18 seconds           0.0.0.0:5000->5000/tcp       flaskapp
c610b1a11752   mongo          "docker-entrypoint..."   41 seconds ago   Up 39 seconds           0.0.0.0:27017->27017/tcp     mongodb
Send a request to the nginx vhost.
$ curl http://127.0.0.1/
Pew Pew!
{u'storageEngines': [u'devnull', u'ephemeralForTest', u'mmapv1', u'wiredTiger'], u'maxBsonObjectSize': 16777216, u'ok': 1.0, u'bits': 64, u'modules': [], u'openssl': {u'compiled': u'OpenSSL 1.0.1t 3 May 2016', u'running': u'OpenSSL 1.0.1t 3 May 2016'}, u'javascriptEngine': u'mozjs', u'version': u'3.4.6', u'gitVersion': u'c55eb86ef46ee7aede3b1e2a5d184a7df4bfb5b5', u'versionArray': [3, 4, 6, 0], u'debug': False, u'buildEnvironment': {u'cxxflags': u'-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++11', u'cc': u'/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0', u'linkflags': u'-pthread -Wl,-z,now -rdynamic -Wl,--fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,--build-id -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro', u'distarch': u'x86_64', u'cxx': u'/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0', u'ccflags': u'-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp', u'target_arch': u'x86_64', u'distmod': u'debian81', u'target_os': u'linux'}, u'sysInfo': u'deprecated', u'allocator': u'tcmalloc'}
[u'admin', u'local']
Awesome! We can also go to our http://pewpew.mydomain.com URL in a browser on our Mac, as we have forwarded the port on our Vagrant box and added a local DNS entry in /etc/hosts, remember?
Text
How to Install MongoDB on Debian/Ubuntu

MongoDB is becoming more and more popular nowadays.
NoSQL (originally referring to "non-SQL", "non-relational", or "not only SQL", per Wikipedia.)
Due to its non-relational, schema-free architecture, MongoDB is preferred by many corporate giants and big companies. They are migrating their storage to MongoDB because of its high performance, high availability, and easy scalability options.
MongoDB stores data not in a relational format but as simple JSON-style documents. Internally, MongoDB stores these in a format it calls BSON, which is nothing but a binary encoding of JSON-like objects and arrays.
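To illustrate, a document is just a JSON-like object (the field names and values below are made up); BSON is simply a binary encoding of the same structure:

```javascript
// A MongoDB document is a JSON-like object: nesting and arrays are allowed.
// On disk and on the wire, MongoDB encodes it as BSON rather than JSON text.
const doc = {
  name: "Ravi",
  roles: ["admin", "editor"],
  address: { city: "Chennai", pin: "600001" },
};
const asJson = JSON.stringify(doc);
console.log(asJson);
// {"name":"Ravi","roles":["admin","editor"],"address":{"city":"Chennai","pin":"600001"}}
```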
Terms compared between MongoDB and MySQL DB
Following is the simple comparison of terms compared between MySQL DB and MongoDB.
MySQL DB    MongoDB
Database    Database
Table       Collection
Rows        Document
Column      –
We do not have table columns in MongoDB, as it does not have a fixed table schema structure.
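A quick sketch of what that means in practice (the records are made up): two documents in the same collection can carry entirely different fields, since there is no column list to conform to.

```javascript
// Documents in one collection need not share a field list.
const users = [
  { name: "Asha", email: "asha@example.com" },
  { name: "Vik", phones: ["+91-00000-00000"], verified: false },
];
// Each document simply has whatever fields were inserted with it.
const fieldSets = users.map((u) => Object.keys(u).join(","));
console.log(fieldSets); // [ 'name,email', 'name,phones,verified' ]
```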
How to install MongoDB on Debian/Ubuntu
Following are the steps involved in installing MongoDB on Debian/Ubuntu servers. This tutorial describes the installation of MongoDB version 3.2.
Importing the Public Key: We need to import the GPG public key for Debian package management in order to install using apt-get. This is done with the following command:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
Creating the Source List File: We need to create the source list file so that the sources for installing MongoDB can be fetched when running apt-get install. This is done with the following command:
echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.2 multiverse"
| sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
This will fetch the right source list file based on your operating system distribution.
Update the Source Repository: Update the apt-get repository to reload the source repository changes using the following command:
sudo apt-get update
Install MongoDB: Install the latest stable version of MongoDB using the following command:
sudo apt-get install -y mongodb-org
Running MongoDB: You can use the following commands to start, stop, and restart the MongoDB service, and to get its status:
service mongod start // To start MongoDB Service
service mongod stop // To stop running MongoDB Service
service mongod restart // To restart running MongoDB Service
service mongod status // To get status of running MongoDB Service
Verify that MongoDB is Running: You can use the following command to see whether MongoDB has started and is running fine:
netstat -nalp | grep mongod
// This command will show you the output of mongod service running using port 27017
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 35482/mongod
The MongoDB by default uses port 27017 The default configuration path is /etc/mongod.conf The logs by default get stored in the path /var/log/mongodb/mongod.log
By default, MongoDB listens only on 127.0.0.1. If you want MongoDB to listen on your other network interfaces, you need to configure them manually in the configuration file.
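For example, to also listen on a LAN interface, the net section of /etc/mongod.conf (YAML format in MongoDB 3.x) would look like the following sketch; the second address is a placeholder for your server's own IP. Restart the service afterwards with service mongod restart.

```yaml
# /etc/mongod.conf (excerpt) -- 192.168.1.10 is a placeholder address
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.1.10
```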
MongoDB Shell: The MongoDB shell can be opened using the following command:
mongo
In case you get errors opening the MongoDB shell CLI, you need to export the LC environment variable using the following command:
export LC_ALL=C
mongo // Start Mongo shell after running export command
Note: The installation steps covered above are for the open-source community edition of MongoDB.
MongoDB UI: MongoDB does not include any user interface or GUI for management, but there are many third-party tools available, such as:
MongoBooster RoboMongo
#mongodb#installingmongodbubuntu#cloudcontactcentersolutions#cloudcallcenter#cloudcallcentersoftware
Text
Install MongoDB on Ubuntu Server 18.04

If your company is in the business of using, handling, or depending on data, chances are you're in need of a document-oriented NoSQL database. If you're unfamiliar with the term, a NoSQL database is a non-relational database that doesn't use tables filled with rows and columns. Instead, it uses a storage model optimized specifically for the data. These types of databases offer scalability, flexibility, data distribution, and processing speed that relational databases can't match. One NoSQL database is MongoDB. This database has been adopted by big-data and enterprise companies including Adobe, Craigslist, eBay, FIFA, Foursquare, and LinkedIn. MongoDB comes in both an enterprise and a community edition. I'll be demonstrating with the open-source community edition, installing it on Ubuntu Server 18.04. This edition can be installed from the standard repositories; however, that will likely install an outdated version. Because of that, I'll show how to deploy a version from the official MongoDB repository. This will install:
mongodb-org (the meta-package that installs everything below)
mongodb-org-server (the mongod daemon)
mongodb-org-mongos (the mongos daemon)
mongodb-org-shell (the mongo shell)
mongodb-org-tools (the MongoDB tools package, which includes import, dump, export, files, performance, restore, and stats tools)
Do note that this package only supports 64-bit architecture and LTS (Long Term Support) versions of Ubuntu (so 14.04, 16.04, and 18.04). Once installed, your MEAN-stack development team (or whatever sector your business serves) can begin developing for big data.
Update/Upgrade
When installing a major application/service, it's always best to first run an update/upgrade on the server. Not only will this ensure you have the most recent software, it'll also apply any security patches. Do note, however, that should the kernel be updated in this process, you will need to reboot the machine before the updates take effect. To update and upgrade Ubuntu, log into the server and issue the following two commands:
sudo apt-get update
sudo apt-get upgrade -y
Once the update and upgrade complete, reboot your server (if required). You are now ready to install MongoDB, and you won't even need to bring in your Java developers to take care of this task.
Adding the Repository
The first thing to be done is the addition of the necessary MongoDB repository. To do this, log into your Ubuntu server. From the command line, add the required MongoDB key with the command:
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
If you see an error regarding the wget command, install that tool with:
sudo apt-get install wget
Once you've added the key, create a new apt source list file with the command:
echo "deb https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb.list
Installation
Now it’s time to install MongoDB. Update apt with the command: sudo apt-get update Once apt is updated, install MongoDB with the command: sudo apt-get install mongodb-org -y
Starting and Enabling the Community Edition
With the database installed, you'll want to start it and enable it to run upon server reboot. Otherwise, you'll have to manually start it every time the server is restarted. To start the MongoDB database engine, issue the command:
sudo systemctl start mongod
You'll then want to enable MongoDB with the command:
sudo systemctl enable mongod
Using MongoDB
In order to start working with Mongo, issue the command:
mongo
If you get an error status 62, it means that the version of MongoDB is too new for your server. If that's the case, you'll need to uninstall the latest version and install the version from the official Ubuntu repositories. Here are the steps for that process:
Remove the latest version with the command sudo apt-get purge mongodb-org.
Remove any extra dependencies with the command sudo apt-get autoremove.
Install the older version of MongoDB with the command sudo apt-get install mongodb -y.
At this point, you should have access to the MongoDB command prompt (Figure 1) by issuing the mongo command.
Figure 1: The MongoDB command prompt.
Let's say you want to create a new database. Unlike relational databases, you don't use the CREATE command. Instead, you simply issue the use command like so:
use DATABASE
Where DATABASE is the name of the database to be created. This doesn't actually create the database; in order to finalize that, you must insert data into the new database. Say you create a database named albums. You can then insert data into that database with the command:
db.artists.insert({artistname: "Devin Townsend" })
The above command inserts the string "Devin Townsend", associated with the key artistname, into the database albums. You should see WriteResult({ "nInserted" : 1 }) as a result (Figure 2).
Figure 2: Successful insertion of data into the new db.
And that's all there is to installing MongoDB on Ubuntu 18.04 and creating your first database. For more information on using MongoDB, make sure to read the official documentation for the release you've installed. So, a little sidetrack from Wordpress in this post, but I thought it worth a mention, MongoDB being used in many Node and other dev environments, and I use Lubuntu, which of course is not dissimilar.
Photo
Build a To-Do API With Node and Restify
Introduction
Restify is a Node.js web service framework optimized for building semantically correct RESTful web services ready for production use at scale. In this tutorial, you will learn how to build an API using Restify, and for learning purposes you will build a simple To-Do API.
Set Up the Application
You need to have Node and NPM installed on your machine to follow along with this tutorial.
Mac users can make use of the command below to install Node.
brew install node
Windows users can hop over to the Node.js download page to download the Node installer.
Ubuntu users can use the commands below.
curl -sL http://ift.tt/2rqglBp | sudo -E bash -
sudo apt-get install -y nodejs
To show that you have Node installed, open your terminal and run node -v. You should get a prompt telling you the version of Node you have installed.
You do not have to install NPM separately because it comes with Node. To prove that, run npm -v from your terminal and you will see the version you have installed.
Create a new directory where you will be working from.
mkdir restify-api
cd restify-api
Now initialize your package.json by running the command:
npm init
You will be making use of a handful of dependencies:
Mongoose
Mongoose API Query (lightweight Mongoose plugin to help query your REST API)
Mongoose TimeStamp (adds createdAt and updatedAt date attributes that get auto-assigned to the most recent create/update timestamps)
Lodash
Winston (a multi-transport async logging library)
Bunyan Winston Adapter (allows use of the winston logger in a Restify server without really using bunyan, the default logging library)
Restify Errors
Restify Plugins
Now go ahead and install the modules.
npm install restify mongoose mongoose-api-query mongoose-timestamp lodash winston bunyan-winston-adapter restify-errors restify-plugins --save
The packages will be installed in your node_modules folder. Your package.json file should look similar to what I have below.
#package.json
{
  "name": "restify-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "bunyan-winston-adapter": "^0.2.0",
    "lodash": "^4.17.4",
    "mongoose": "^4.11.2",
    "mongoose-api-query": "^0.1.1-pre",
    "mongoose-timestamp": "^0.6.0",
    "restify": "^5.0.0",
    "restify-errors": "^4.3.0",
    "restify-plugins": "^1.6.0",
    "winston": "^2.3.1"
  }
}
Before you go ahead, you have to install MongoDB on your machine if you have not done that already. Here is a standard guide to help you in that area. Do not forget to return here when you are done.
When that is done, you need to tell mongo the database you want to use for your application. From your terminal, run:
mongo
use restify-api
Now you can go ahead and set up your configuration.
touch config.js
The file should look like this:
#config.js
'use strict'

module.exports = {
  name: 'RESTIFY-API',
  version: '0.0.1',
  env: process.env.NODE_ENV || 'development',
  port: process.env.port || 3000,
  base_url: process.env.BASE_URL || 'http://localhost:3000',
  db: {
    uri: 'mongodb://127.0.0.1:27017/restify-api',
  }
}
Set Up the To-Do Model
Create your to-do model. First, you create a directory called models.
mkdir models
touch todo.js
You will need to define your to-do model. Models are defined using the Schema interface. The Schema allows you to define the fields stored in each document along with their validation requirements and default values. First, you require mongoose, and then you use the Schema constructor to create a new schema interface as I did below. I also made use of two modules called mongooseApiQuery and timestamps.
MongooseApiQuery will be used to query your collection (you will see how that works later on), and timestamps will add created_at and modified_at timestamps for your collection.
The file you just created should look like what I have below.
#models/todo.js
'use strict'

// Requires module dependencies installed using NPM.
const mongoose = require('mongoose'),
  mongooseApiQuery = require('mongoose-api-query'),
  // adds created_at and modified_at timestamps for us (ISO-8601)
  timestamps = require('mongoose-timestamp')

// Creates TodoSchema
const TodoSchema = new mongoose.Schema({
  // Title field in Todo collection.
  title: {
    type: String,
    required: true,
    trim: true,
  },
}, { minimize: false })

// Applies mongooseApiQuery plugin to TodoSchema
TodoSchema.plugin(mongooseApiQuery)

// Applies timestamp plugin to TodoSchema
TodoSchema.plugin(timestamps)

// Exports Todo model as a module.
const Todo = mongoose.model('Todo', TodoSchema)
module.exports = Todo
Set Up the To-Do Routes
Create another directory called routes, and a file called index.js. This is where your routes will be set.
mkdir routes
touch index.js
Set it up like so:
#routes/index.js
'use-strict'

// Requires module dependencies installed using NPM.
const _ = require('lodash'),
  errors = require('restify-errors')

// Requires Todo model
const Todo = require('../models/todo')

// HTTP POST request
server.post('/todos', (req, res, next) => {
  // Sets data to the body of request
  let data = req.body || {}
  // Creates new Todo object using the data received
  let todo = new Todo(data)
  // Saves todo
  todo.save((err) => {
    // If error occurs, error is logged and returned
    if (err) {
      log.error(err)
      return next(new errors.InternalError(err.message))
    }
    // If no error, responds with 201 status code
    res.send(201)
    next()
  })
})

// HTTP GET request
server.get('/todos', (req, res, next) => {
  // Queries DB to obtain todos
  Todo.apiQuery(req.params, (err, docs) => {
    // Errors are logged and returned if there are any
    if (err) {
      log.error(err)
      return next(new errors.InvalidContentError(err.errors.name.message))
    }
    // If no errors, todos found are returned.
    res.send(docs)
    next()
  })
})

// HTTP GET request for individual todos
server.get('/todos/:todo_id', (req, res, next) => {
  // Queries DB to obtain individual todo based on ID
  Todo.findOne({ _id: req.params.todo_id }, (err, doc) => {
    // Logs and returns error if errors are encountered
    if (err) {
      log.error(err)
      return next(new errors.InvalidContentError(err.errors.name.message))
    }
    // Responds with todo if no errors are found
    res.send(doc)
    next()
  })
})

// HTTP UPDATE request
server.put('/todos/:todo_id', (req, res, next) => {
  // Sets data to the body of request
  let data = req.body || {}
  if (!data._id) {
    _.extend(data, { _id: req.params.todo_id })
  }
  // Finds specific todo based on the ID obtained
  Todo.findOne({ _id: req.params.todo_id }, (err, doc) => {
    // Logs and returns error found
    if (err) {
      log.error(err)
      return next(new errors.InvalidContentError(err.errors.name.message))
    } else if (!doc) {
      return next(new errors.ResourceNotFoundError('The resource you request could not be found.'))
    }
    // Updates todo when the todo with the specific ID has been found
    Todo.update({ _id: data._id }, data, (err) => {
      // Logs and returns error
      if (err) {
        log.error(err)
        return next(new errors.InvalidContentError(err.errors.message))
      }
      // Responds with 200 status code and todo
      res.send(200, data)
      next()
    })
  })
})

// HTTP DELETE request
server.del('/todos/:todo_id', (req, res, next) => {
  // Removes todo that corresponds with the ID received in the request
  Todo.remove({ _id: req.params.todo_id }, (err) => {
    // Logs and returns error
    if (err) {
      log.error(err)
      return next(new errors.InvalidContentError(err.errors.message))
    }
    // Responds with 204 status code if no errors are encountered
    res.send(204)
    next()
  })
})
The file above does the following:
Requires module dependencies installed with NPM.
Performs actions based on the request received.
Errors are thrown whenever one (or more) is encountered, and the errors are logged to the console.
Queries the database for to-dos expected for listing all to-dos, and posting to-dos.
Now you can create the entry for your application. Create a file in your working directory called index.js.
#index.js
'use strict'

// Requires module dependencies downloaded with NPM.
const config = require('./config'),
  restify = require('restify'),
  bunyan = require('bunyan'),
  winston = require('winston'),
  bunyanWinston = require('bunyan-winston-adapter'),
  mongoose = require('mongoose')

// Sets up logging using winston logger.
global.log = new winston.Logger({
  transports: [
    // Creates new transport to log info level logs to the console.
    new winston.transports.Console({
      level: 'info',
      timestamp: () => {
        return new Date().toString()
      },
      json: true
    }),
  ]
})

/**
 * Initialize server
 */
global.server = restify.createServer({
  name: config.name,
  version: config.version,
  log: bunyanWinston.createAdapter(log)
})

/**
 * Middleware
 */
server.use(restify.plugins.bodyParser({ mapParams: true }))
server.use(restify.plugins.acceptParser(server.acceptable))
server.use(restify.plugins.queryParser({ mapParams: true }))
server.use(restify.plugins.fullResponse())

// Error handler to catch all errors and forward to the logger set above.
server.on('uncaughtException', (req, res, route, err) => {
  log.error(err.stack)
  res.send(err)
})

// Starts server
server.listen(config.port, function() {
  // Connection Events
  // When connection throws an error, error is logged
  mongoose.connection.on('error', function(err) {
    log.error('Mongoose default connection error: ' + err)
    process.exit(1)
  })

  // When connection is open
  mongoose.connection.on('open', function(err) {
    // Error is logged if there are any.
    if (err) {
      log.error('Mongoose default connection error: ' + err)
      process.exit(1)
    }

    // Else information regarding connection is logged
    log.info(
      '%s v%s ready to accept connection on port %s in %s environment.',
      server.name, config.version, config.port, config.env
    )

    // Requires routes file
    require('./routes')
  })

  global.db = mongoose.connect(config.db.uri)
})
You have set up your entry file to do the following:
Require modules installed using NPM.
Output info level logs to the console using Winston Logger. With this, you get to see all the important interactions happening on your application right on your console.
Initialize the server and set up middleware using Restify plugins.
bodyParser parses POST bodies into req.body. It automatically chooses a parser based on the request's content type.
acceptParser parses the Accept header and checks it against the server's acceptable content types.
queryParser parses URL query parameters into req.query.
fullResponse fills in default response headers, including CORS headers that would otherwise go missing.
Next, you start your server and create a mongoose connection. Logs are outputted to the console dependent on the result of creating the mongoose connection.
Start up your node server by running:
node index.js
Open up Postman and send an HTTP POST request. The specified URL should be http://localhost:3000/todos.
For the request body, you can use this:
{ "title" : "Restify rocks!" }
And you should get a response.
Conclusion
You have been able to build a standard To-Do API using Restify and Node.js. You can enhance the API by adding new features such as descriptions of the to-dos, time of completion, etc.
By building this API, you learned how to create logs using Winston Logger—for more information on Winston, check the official GitHub page. You also made use of Restify plugins, and more are available in the documentation for plugins.
You can dig further into the awesomeness of Restify, starting with the documentation.
by Chinedu Izuchukwu via Envato Tuts+ Code http://ift.tt/2eQ2mze
0 notes