#postgres config file location
codeonedigest · 2 years ago
Spring Boot Microservice Project with Postgres DB Tutorial with Java Example for Beginners  
Full Video Link: https://youtu.be/iw4wO9gEb50. Hi, a new #video on #springboot #microservices with the #postgres #database has been published on the #codeonedigest #youtube channel: a complete guide to #spring boot microservices with #postgresql. Learn #programming #
In this video, we learn how to download and install the Postgres database, how to integrate Postgres with a Spring Boot microservice application, and how to perform the CRUD operations, i.e. Create, Read, Update, and Delete, on a Customer entity. Spring Boot is built on top of Spring and contains all of Spring's features, and it is becoming a favorite of developers these…
technovert · 4 years ago
A DATA INTEGRATION APPROACH TO MAXIMIZE YOUR ROI
The approach adopted by many data integration projects relies on a set of premium tools, leading to cash burn and an ROI below the standard.
To overcome this and maximize ROI, we lay out a data integration approach that uses open-source tools instead of premium ones to deliver better results and an even more confident return on the investment.
Adopt a two-stage data integration approach:
Part 1 explains the technical setup, and Part 2 covers the execution approach, including the challenges faced and the solutions to them.
Part 1: Setting Up
The project relied on the following widely used data sources:
REST API Source with standard NoSQL JSON (with nested datasets)
REST API Source with full data schema in JSON
CSV Files in AWS S3
Relational Tables from Postgres DB
There are two different JSON types above: the former is the conventional kind, while the latter carries the full data schema.
Along with the data movement, it is necessary to facilitate a plug-n-play architecture, notifications, audit data for reporting, unburdened intelligent scheduling, and the setup of all the necessary instances.
The landing data warehouse chosen was AWS Redshift, which is a one-stop shop for the operational data stores (ODS) as well as the facts and dimensions. As said, we relied entirely on open-source tools over the tools from tech giants like Oracle, Microsoft, Informatica, Talend, etc.
The data integration succeeded by leveraging Python, SQL, and Apache Airflow to do all the work: Python for extraction, SQL to load and transform the data, and Airflow to orchestrate the loads via Python-based scheduler code. Below is the data flow architecture, followed by a small sketch of the extraction step.
Data Flow Architecture
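As a rough illustration of the extraction step described above (not the actual production code), a single Python extract might look like this; the endpoint, bucket, and parameter names are hypothetical:
# Minimal sketch of the Python extraction step: pull recently changed records from a
# REST source and land the raw JSON in S3 for Redshift to pick up. All names are hypothetical.
import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical REST source
BUCKET = "my-landing-bucket"                   # hypothetical S3 landing bucket

def extract(last_extract_date: str) -> None:
    resp = requests.get(API_URL, params={"updated_since": last_extract_date}, timeout=30)
    resp.raise_for_status()

    key = f"raw/orders/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(resp.json()).encode("utf-8"),
    )

if __name__ == "__main__":
    extract("2021-01-01")  # in the real pipeline this date comes from the config table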
Part 2: Execution
The above data flow architecture gives a fair idea of how the data was integrated. The execution is explained in parallel with the challenges faced and how they were solved.
Challenges:
Plug-n-Play Model.  
Dealing with the nested data in JSON.
Intelligent Scheduling.
Code Maintainability for future enhancements.  
1. Plug-n-Play Model
To meet changing business needs, the addition of columns or a data store is inevitable, and if the business is doing well, expansion to different regions is likely. The following aspects were ensured to keep the integration process continuous:
A new column will not break the process.
A new data store should be added with minimal work by a non-technical user.
Bringing down the time consumed to integrate any new store addition (expansion) from days to hours.
The same is achieved by using:
A config table, which is the heart of the process, holding all the jobs to be executed, their last extract dates, and the parameters for making the REST API calls or extracting data from the RDBMS.
Reusable Python templates that are read, modified, and executed based on the parameters from the config table.
An audit table for logging all the crucial events that happen during integration.
A control table for mailing and Tableau report refreshes after the ELT process.
State-of-the-art generator DAGs that create DAGs (jobs) from the configuration defined in the config table for each particular job.
Any new table planned for extraction, or any new store added as part of a business expansion, only needs its entries added to the config table.
A run of the DAG-generator DAG then builds the jobs for you in a snap: they become available in the Airflow UI on the subsequent refresh within seconds, and the new jobs are executed on the next schedule along with the existing jobs.
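A stripped-down sketch of that generator pattern is shown below, assuming Airflow 2.x; the config rows, job names, and the run_extract body are hypothetical stand-ins for the real config table and reusable templates:
# Sketch of a DAG generator: one parse of this file registers a DAG per config row.
# The hard-coded job_configs list stands in for rows read from the real config table.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

job_configs = [
    {"job_name": "extract_orders", "schedule": "0 2 * * *", "source": "rest_api"},
    {"job_name": "extract_stores", "schedule": "0 3 * * *", "source": "s3_csv"},
]

def run_extract(source):
    # Placeholder for the reusable Python template that performs the actual extract.
    print(f"extracting from {source}")

for cfg in job_configs:
    dag = DAG(
        dag_id=cfg["job_name"],
        schedule_interval=cfg["schedule"],
        start_date=datetime(2021, 1, 1),
        catchup=False,
    )
    PythonOperator(
        task_id="extract",
        python_callable=run_extract,
        op_kwargs={"source": cfg["source"]},
        dag=dag,
    )
    # Registering the DAG object in globals() is what makes it visible to the scheduler.
    globals()[cfg["job_name"]] = dag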
2. Dealing with the nested data in JSON.
NoSQL JSON holds a great advantage from a storage and data-redundancy perspective, but it adds a lot of pain when reading the nested data out of the inner arrays.
The following approach is adopted to conquer the above problem:
Configured AWS Redshift Spectrum with the IAM role and IAM policy needed to access the AWS Glue Catalog, and associated them with the AWS Redshift database cluster.
Created the external database, external schema, and external tables in the AWS Redshift database.
Created AWS Redshift procedures with the specific syntax needed to read the data in the inner arrays.
AWS Redshift was leveraged to parse the data directly from the JSON residing in AWS S3 through an external table (no loading is involved), and the result was further transformed into the rows and columns needed by the relational tables.
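For illustration only, this is roughly how such an unnested read can be issued from Python once Spectrum and the external schema are in place; the cluster endpoint, schema, and column names below are hypothetical:
# Rough sketch: querying a nested-JSON external table through Redshift Spectrum.
# Connection details, the external schema, and column names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="my-redshift-cluster.example.amazonaws.com",  # hypothetical Redshift endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

# Spectrum lets you unnest an array column by joining the outer table to its own nested field.
sql = """
    SELECT c.customer_id, o.order_id, o.amount
    FROM spectrum_ext.customers c, c.orders o
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for row in cur.fetchall():
        print(row)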
3. Intelligent Scheduling
There were multiple scenarios in orchestration needs:
Time-based – batch scheduling; micro-ELTs run time and again within the day at short intervals.
Event-based – File drop in S3
For the batch scheduling, the jobs were run neither all in series (which would underutilize resources and be time-consuming) nor all in parallel (which would overwhelm the Airflow workers).
Instead, a fixed number of jobs was kept running asynchronously until all the processes were completed, using a Python routine for intelligent scheduling. The code reads the set of jobs being executed as part of the current batch from a job execution/job config table and keeps that fixed number of jobs running until all the jobs are in a completed or failed state, as per the logical flow diagram below; a simplified sketch of the routine follows the diagram.
Logical Flow Diagram
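A simplified sketch of that routine, assuming a job execution table with a status column (the table, column names, and connection details are hypothetical):
# Keep a fixed number of batch jobs running until everything is completed or failed.
# Table, column names, and the DSN are hypothetical; trigger() stands in for the real job start.
import time

import psycopg2

MAX_CONCURRENT = 4

def fetch_jobs(cur, status):
    cur.execute("SELECT job_name FROM job_execution WHERE batch_status = %s", (status,))
    return [row[0] for row in cur.fetchall()]

def trigger(job_name):
    print(f"starting {job_name}")  # placeholder for kicking off the actual job

conn = psycopg2.connect("dbname=etl user=airflow")  # hypothetical DSN
conn.autocommit = True
cur = conn.cursor()

while True:
    running = fetch_jobs(cur, "running")
    pending = fetch_jobs(cur, "pending")
    if not running and not pending:
        break  # every job is now completed or failed
    free_slots = max(0, MAX_CONCURRENT - len(running))
    for job in pending[:free_slots]:
        trigger(job)
        cur.execute(
            "UPDATE job_execution SET batch_status = 'running' WHERE job_name = %s",
            (job,),
        )
    time.sleep(60)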
For event-based triggering, an application drops a file into S3, and the integration process is triggered by reading this event and starting the load into the data warehouse.
The configuration is as follows:
A CloudWatch event triggers a Lambda function, which in turn makes an API call to trigger the Airflow DAG.
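A hedged sketch of that Lambda function is below, assuming Airflow 2.x with its stable REST API and basic auth enabled; the DAG id, host, credentials, and the S3-style event shape are assumptions, not details from the original setup:
# Hypothetical Lambda handler: reads the S3 event and triggers an Airflow DAG run
# through the Airflow 2.x stable REST API. All names and credentials are placeholders.
import json
import os

import requests

AIRFLOW_URL = os.environ.get("AIRFLOW_URL", "http://airflow.internal:8080")
DAG_ID = "load_s3_file"  # hypothetical DAG id

def handler(event, context):
    # Assumes an S3-style notification payload carrying the bucket and object key.
    record = event["Records"][0]["s3"]
    payload = {"conf": {"bucket": record["bucket"]["name"], "key": record["object"]["key"]}}

    resp = requests.post(
        f"{AIRFLOW_URL}/api/v1/dags/{DAG_ID}/dagRuns",
        json=payload,
        auth=(os.environ["AIRFLOW_USER"], os.environ["AIRFLOW_PASSWORD"]),
        timeout=10,
    )
    resp.raise_for_status()
    return {"statusCode": 200, "body": json.dumps(resp.json())}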
4. Code Maintainability for future enhancements
A data integration project is always collaborative work, and maintaining the correct source code is of vital importance. Also, if a deployment goes wrong, the capability to roll back to the original version is necessary.
Any project that involves programming needs a version control mechanism. To get one here, configure a Git repository to hold the DAG files in Airflow with the Kubernetes executor.
Take away:
This data integration approach succeeds in completely removing the premium licensing costs while shortening the course of the project, all because of the reliance on open-source technology and utilizing it to the fullest.
With any ETL tool on the market, the project duration would have gone beyond six months, as it requires building a job for each operational data store. The best-recommended option otherwise is to use scripting in conjunction with an ETL tool to build the repetitive jobs, which would more or less overlap with the way it is executed now.
Talk to our Data Integration experts:
Looking for a one-stop location for all your integration needs? Our data integration experts can help you align your strategy or offer you a consultation to draw a roadmap that quickly turns your business data into actionable insights with a robust Data Integration approach and a framework tailored for your specs.
amartyabanerjee · 6 years ago
Fathom installation on an Ubuntu server running Nginx
As we got closer to opening up TunePad to more classrooms, we wanted to have some web analytics set up. While Google Analytics is almost the default choice in such a scenario, we were looking for alternatives. Fathom seemed promising. 
This guide documents the installation and deployment process. It draws heavily from Fathom’s own excellent documentation, but hopefully the screenshots and slightly more detailed notes will come in handy for someone.
Download and install Fathom
Before using the wget command below to download the file, go to this link to find the most recent release of Fathom for Ubuntu. In our case, we used version 1.2.1. Thereafter, extract it into /usr/local/bin and then add execute permission to it.
wget https://github.com/usefathom/fathom/releases/download/v1.2.1/fathom_1.2.1_linux_386.tar.gz
sudo tar -C /usr/local/bin -xzf fathom_1.2.1_linux_386.tar.gz
sudo chmod +x /usr/local/bin/fathom
Run “fathom --version” to check if the installation was successful. 
Basic configuration 
Somewhere on the server create a directory.
sudo mkdir fathom-analytics
sudo chown sadmin:www-data fathom-analytics/
Create fathom environment config file
vi .env
In that file enter the following config options:
FATHOM_SERVER_ADDR=9000
FATHOM_GZIP=true
FATHOM_DEBUG=true
FATHOM_DATABASE_DRIVER="postgres"
FATHOM_DATABASE_USER="tunepad"
FATHOM_DATABASE_PASSWORD="XXXXXXX"
FATHOM_DATABASE_NAME="fathom"
FATHOM_SECRET="XXXXXXXXXXX"
Now, cd into the “fathom-analytics” directory (or whichever directory you created above), and run
fathom server
you should see something similar to this. 
Go to http://<your_domain>:9000 (for now, make sure you are not trying to use HTTPS). If this is not working, try changing the firewall settings:
sudo ufw allow 9000
sudo ufw status
Add a fathom user
fathom user add --email="[email protected]" --password="XXXXXXXXX"
(This is the username and password we shall use to log in to the analytics dashboard.)
To start Fathom on boot, add it as a service:
sudo vim /etc/systemd/system/fathom-analytics.service
In the fathom-analytics.service file, add the following:
[Unit]
Description=Starts the fathom server
Requires=network.target
After=network.target

[Service]
Type=simple
User=sadmin
Restart=always
RestartSec=3
WorkingDirectory=/srv/projects/fathom-analytics
ExecStart=/usr/local/bin/fathom server

[Install]
WantedBy=multi-user.target

# then reload and start the service
sudo systemctl daemon-reload
sudo systemctl enable fathom-analytics
sudo systemctl start fathom-analytics
Since we have HTTPS on our domains, we needed to do a couple more steps.
Using NGINX with Fathom
Create the following file:
sudo vim /etc/nginx/sites-available/fathom
add the following lines to it:
server {
    server_name tunepad.codes www.tunepad.codes;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:9000;
    }
}
This is what the directory structure looks like on our server.
This is the fathom service file (Certbot added the https stuff, we cover that later): 
Test NGINX configuration
sudo nginx -t  # test the NGINX configuration
Reload NGINX configuration after adding symlink to sites-enabled
sudo ln -s /etc/nginx/sites-available/fathom /etc/nginx/sites-enabled
sudo service nginx reload
Make sure that the domain used for Certbot has its name and A records set correctly (tunepad.codes looks like this on Namecheap).
Then, on the server, run Certbot
certbot --nginx -d tunepad.codes
Once this is done, try going to the domain (tunepad.codes in this case) and logging in with the credentials set using Fathom’s add-user command. 
Copy the Tracking snippet from the Fathom dashboard and add it to the pages that need to be tracked. If something went wrong, try going through the steps listed here: Fathom Github.
mayppong · 8 years ago
Kubernetes, Part 1: The Moving Pieces
Getting started with Kubernetes was not easy. First, because there are a lot of moving pieces. Secondly, there are some basic concepts in running a containerized application in a cluster that differ from running a single local instance. So while picking up Kubernetes for the first time, I was also learning some of the cluster deployment concepts for the first time. I decided to keep track of what I learn while trying to implement a set of config files for one of my pet projects, gyro, in the hope that I can share it with other people who are also interested.
So! Here's what I've got so far. Let's start with the basic moving pieces that make a typical application go.
Resources
In the world of Kubernetes, well, cluster computing in general, your components are often scattered around many hosting services. So instead of using local resources like storage, queue, or cache, you need to declare where they are located. For my project, gyro, I declared a storage volume for the database.
Deployments
When you’re ready to create an instance of an application, you create a deployment. The deployment defines a Docker image of your application and makes the necessary requests for resources. The Docker image needs to be in a network-accessible registry to ensure that it can be downloaded to the machine it’s deployed to. You need to build your application into a Docker image first and push the image up to a registry service that is network accessible. From there, you can reference it in your Kubernetes YAML file. For local testing, you can build the Docker image directly into the local cache by loading minikube’s local registry into Docker’s environment. There is a command provided by minikube to do just that.
eval $(minikube docker-env)
Stateless (Application)
A stateless deployment is a type of deployment where there is no need for data to persist outside of the currently running process. There's no sharing of data between multiple instances or across an instance's up and down time. This makes the deployment of the application really simple.
Ideally, most of the application would be deployed this way. To persist data, it would instead make a request to another deployed service that specifically handles data persistence, such as a database, a queue manager, or a smaller application acting as a microservice. This creates a nice separation between the application layer and the data persistence layer.
Stateful (Database)
A stateful deployment is a deployment where the deployed application requires the ability to persist data: a database, for example. To achieve this, the deployment needs to be able to find an appropriate storage volume. Since the deployment runs off a Docker container, which doesn't persist data by default when it goes down and isn't able to share that data with the rest of the cluster, we need to find a network-accessible and shareable storage volume within the cluster instead.
A resource claim is a process where you make a request to the Kubernetes agent for an available resource the deployment needs to run. Once a volume is claimed, it can be used by the deployment, which in this case is the database application. Should the database instance go down, the new node will be able to reclaim and reconnect to the same storage volume, and thus gain access to the same data again.
For example, to run a Postgres database, you need to define the Postgres Docker image and make a request for storage of a specific size. Kubernetes will then look at its available resource pool and try to find storage of at least that size for use. You can see an example here.
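The config files for gyro are YAML, but to make the idea concrete, here is roughly what an equivalent storage claim looks like through the official Kubernetes Python client; the claim name, size, and namespace are made-up values, not the ones from gyro:
# Illustrative sketch only: creating a PersistentVolumeClaim with the official
# Kubernetes Python client instead of a YAML manifest. Names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. the minikube context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="gyro-postgres-data"),  # hypothetical claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)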
Services
The purpose of a service in Kubernetes is to provide an abstraction for connecting to your pods or deployments within a cluster. As pods come and go, such as when they restart after a failure, their IP addresses may change. Services provide a fixed way to connect to dynamically allocated pods. During testing, I had a bit of a problem assigning an external IP to the service. I haven’t dug far enough to figure out what’s going on. So when it works, it displays <nodes> like below.
15:08:36 [mayppong]@Zynthia: ~/Projects/gyro (@dev 0 | 1 | 4)
>> kubectl get svc gyro
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
gyro      10.0.0.222   <nodes>       4000:30065/TCP   7s
So to access your deployment, you want to run the minikube command to get the url for a named service instead.
minikube service gyro --url
On the other hand, the only way to connect one deployment to another through a service that I’ve found so far is to use the DNS name. DNS is a default add-on that should be started when you start your minikube local cluster. Each service is given a unique DNS name generated from its name and namespace using the format <my_service_name>.<my_namespace>.svc.cluster.local. There are some other details outlined in the documentation as well.
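As a concrete (hypothetical) illustration of that naming scheme, application code running in another pod could reach a Postgres service named gyro-db in the default namespace like this; the service name, database, and credentials are placeholders, not real values from gyro:
# Addressing another deployment through its service DNS name;
# service name, namespace, database, and credentials are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="gyro-db.default.svc.cluster.local",  # <my_service_name>.<my_namespace>.svc.cluster.local
    port=5432,
    dbname="gyro",
    user="gyro",
    password="...",
)
print(conn.get_dsn_parameters())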
globalmediacampaign · 5 years ago
Common administrator responsibilities on Amazon RDS and Amazon Aurora for PostgreSQL databases
Amazon Web Services (AWS) offers Amazon Relational Database Service (RDS) and Amazon Aurora as fully managed relational database services. With a few commands, you can have your production database instance up and running on AWS. An online database frees the database administrator (DBA) from many maintenance and management tasks. However, there are a few significant responsibilities to be aware of. This post discusses the DBA tasks to perform on Amazon RDS for PostgreSQL and Aurora with PostgreSQL-compatible databases.
As a DBA, you face daily pressure to deliver value to your business across many fronts. Maintaining the right platform for running mission-critical databases is becoming increasingly difficult, and maintenance itself is a challenging job. The launch of Amazon RDS and Aurora has vastly reduced the time you spend on tasks like installation, configuration, monitoring, and security. Nevertheless, you must still carry out several critical tasks: several of them daily, a few weekly, and some only at the time of Amazon RDS or Aurora installation (at the time of instance creation). Some of the administrative tasks that you must carry out include:
Configuring the parameter group
Managing IP traffic using a security group
Auditing the database log files
Maintenance and management activities
Planning backup and recovery strategies
User management
Monitoring the database
Configuring the parameter group
The data directory of an on-premises PostgreSQL cluster contains the configuration file postgresql.conf. You can manage the parameters through this configuration file. Similarly, for Amazon RDS and Aurora PostgreSQL instances, you manage the parameters through a parameter group. Before you create a new Amazon RDS or Aurora instance, customize your DB parameter group. For more information about creating a new parameter group, modifying the parameters, and attaching it to the instance, see Working with DB Parameter Groups. If you do not have a customized parameter group at the time of creation, you can perform an instance restart: replace the default DB parameter group with the custom parameter group, which allows the customized parameters to take effect. The following overview describes which parameters you should turn on for optimal performance.
Enter the following logging parameters:
log_autovacuum_min_duration 0
log_checkpoints '1'
log_connections '1'
log_disconnections '1'
log_min_duration_statement ''
log_temp_files '1'
log_statement='ddl'
rds.force_autovacuum_logging_level='log'
Enter the following autovacuum parameters:
autovacuum_max_workers
autovacuum_vacuum_cost_limit
autovacuum_vacuum_cost_delay
Enter the following as other parameters:
random_page_cost
default_statistics_target
shared_preload_libraries='pg_hint_plan, pg_stat_statements'
Managing IP traffic using a security group
In Amazon RDS and Aurora, the security group controls the traffic in and out of the instance. It controls both incoming and outgoing traffic by applying appropriate rules to the security group. For example, the following screenshot shows how you can allow PG traffic from your applications to the database via port 5432. Do not open your database to the world by using 0.0.0.0/0.
Auditing the database log files
The busier your database is, the higher the number of transactions. The more transactions, the more logs it generates. The more log files, the more complicated it becomes to extract specific information from those log files.
Most DBAs review their log files as a last resort, but you should turn to them frequently for the ERROR, FATAL, WARNING, and HINT messages they contain. It is vital to check and audit the log files regularly. When it becomes difficult to analyze the log files every day due to their size, you can use pgBadger, which is available on GitHub. pgBadger is an open-source PostgreSQL log analyzer that generates HTML reports from your PostgreSQL log files. By default, RDS and Aurora instances retain logs for 3–7 days. Run custom bash scripts to download the log files locally, to an Amazon EC2 instance, or to an Amazon S3 bucket to maintain log files for a longer period.
To install and generate pgBadger reports, complete the following steps:
Sign in to the AWS Management Console and create one EC2 RHEL or CentOS instance.
Download the pgdg repo on Amazon EC2. To install, enter the following code:
sudo yum install ftp://ftp.pbone.net/mirror/apt.sw.be/redhat/7.3/en/i386/rpmforge/RPMS/perl-Text-CSV_XS-0.65-1.rh7.rf.i386.rpm perl perl-devel
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install pgbadger -y
This post tested the preceding steps on RHEL 7 with pgdg-10 (PostgreSQL repo).
To generate the report, complete the following steps:
Download the PostgreSQL log files from Amazon RDS or Aurora to Amazon EC2 and run pgBadger.
Enable the logging parameters in your DB parameter group.
Schedule a cron job to download the log files to an appropriate location on Amazon EC2 and generate the pgBadger report.
Download and convert your log files with the following code:
#This script helps to download the Postgres log files from the cloud and store them on EC2.
## 1. Delete the logs and pgBadger reports older than 3 days.
## 2. Download the latest Postgres log from the Amazon RDS instance.
## 3. Generate the pgBadger report for the newly downloaded log file.
#Create the pgBadger dirs under /home/ec2-user:
# mkdir -p /home/ec2-user/pgBadger
# mkdir -p /home/ec2-user/pgBadger/logs
# mkdir -p /home/ec2-user/pgBadger/reports
#You must install pgbadger and it should be in the path.
#Here is the link for the pgbadger installation: https://github.com/darold/pgbadger
#Install awscli on the EC2 instance and set the env (https://docs.aws.amazon.com/cli/latest/topic/config-vars.html) to download the log files.
home_dir="/home/postgres/pgBadger"
logDir="/home/postgres/pgBadger/logs"
rptDir="/var/www/pgbadger"
identifier=''
date=`date -d "-1 days" +%Y-%m-%d`
sudo find $logDir -name '*.log.*' -type f -mtime 0 -exec rm {} \;
sudo find $rptDir -name 'postgresql*.html' -type f -mtime +10 -exec rm {} \;
sudo mkdir -p $logDir/$date
sudo chown -R postgres:postgres $logDir/$date
#How to generate the pgbadger report:
#Install pgbadger on EC2. To install, follow the link: https://github.com/darold/pgbadger
for i in `seq -w 00 23`
do
  sudo aws rds download-db-log-file-portion --db-instance-identifier $identifier --log-file-name error/postgresql.log.$date-$i --starting-token 0 --output text > $logDir/$date/postgresql.log.$date-$i
done
if [ $? -eq 0 ] ; then
  sudo pgbadger --prefix '%t:%r:%u@%d:[%p]:' $logDir/$date/*.log.* -o $rptDir/postgresql.$date.html -f stderr #-f $logDir/*.log.*
  sudo chmod -R 777 $rptDir/postgresql.$date.html
  if [ $? -eq 0 ]; then
    #mailx -s "Successfully Generated the pgbadger report for Date: $date"
    echo "Successfully generated the pgbadger report for Date: $date"
  else
    #mailx -s "UNSUCCESSFUL GENERATION of pgbadger report for Date: $date"
    echo "UNSUCCESSFUL generation of the pgbadger report for Date: $date"
  fi
  gzip -r9 $logDir/$date
fi
This script generates the pgBadger report that you can use to analyze the activities performed on the database. For a sample pgBadger report, see postgres_sample.
Maintenance and management activities
A remote database still requires maintenance. The following section discusses autovacuuming, the VACUUM ANALYZE command, and long-running queries and sessions.
Autovacuuming
Query slowness due to table or index bloat is one of the most common scenarios in PostgreSQL. Amazon RDS and Aurora enable autovacuuming by default to reduce this bloat. As you manage slowdown, keep the following in mind:
Autovacuum holds a lower-priority lock on the table. It might cancel its own job when another high-priority operation wants to acquire a lock on the table.
The same table can become a candidate for repeated autovacuums, which causes other tables to remain bloated.
Because these are common scenarios in PostgreSQL, you should tune your autovacuum parameters properly. If tuning does not work, you must schedule a manual vacuum/analyze script (a small sketch of such a script follows the overview below). Based on the frequency of the bloat, you can decide whether to perform VACUUM ANALYZE, VACUUM FULL, or PG_REPACK.
Scheduling VACUUM ANALYZE
To keep the stats updated, remove bloat in reused space, and avoid transaction wraparound, schedule VACUUM ANALYZE on your database. VACUUM removes the bloat and avoids transaction wraparound. ANALYZE helps to update the database stats, which helps the planner generate good plans for queries. Before you proceed, you should understand the differences between VACUUM ANALYZE, VACUUM FULL, and PG_REPACK.
VACUUM ANALYZE – Removes the bloat from the tables and indexes and updates the tables’ statistics. This is a non-locking operation; you can run it at a table level or database level. It cleans the bloated pages but does not reclaim the space.
VACUUM FULL – Writes the entire content of the table into a new disk file and releases the wasted space back to the OS. This causes a table-level lock on the table and slow speeds. Avoid using VACUUM FULL on a high-load system.
PG_REPACK – Writes the entire content of the table into a new disk file and releases the wasted space back to the OS, and does it online without holding a lock on the table. It is faster than VACUUM FULL, and Amazon Aurora and Amazon RDS support it as an extension. Instead of re-indexing or performing a VACUUM FULL, you should use PG_REPACK. PG_REPACK is available as an extension in Amazon Aurora for PostgreSQL and Amazon RDS PostgreSQL.
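The manual vacuum/analyze script mentioned above can be as small as the following sketch, run from cron on the same EC2 host. This is only an illustration: the endpoint, credentials, and table list are hypothetical, and the table list could just as well be driven by the bloat query shown next.
# Sketch of a scheduled manual VACUUM ANALYZE job; connection details and table list are hypothetical.
import psycopg2

TABLES = ["public.sample_table", "public.vacuum_test"]  # tables flagged as bloated

conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    dbname="postgres",
    user="admin_user",
    password="...",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    for table in TABLES:
        print(f"vacuuming {table}")
        cur.execute(f"VACUUM (ANALYZE, VERBOSE) {table}")

conn.close()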
The following code calculates the bloat and the extra space that bloated pages occupy:
SELECT current_database(), schemaname, tblname, bs*tblpages AS real_size,
  (tblpages-est_tblpages)*bs AS extra_size,
  CASE WHEN tblpages - est_tblpages > 0 THEN 100 * (tblpages - est_tblpages)/tblpages::float ELSE 0 END AS extra_ratio,
  fillfactor,
  CASE WHEN tblpages - est_tblpages_ff > 0 THEN (tblpages-est_tblpages_ff)*bs ELSE 0 END AS bloat_size,
  CASE WHEN tblpages - est_tblpages_ff > 0 THEN 100 * (tblpages - est_tblpages_ff)/tblpages::float ELSE 0 END AS bloat_ratio,
  is_na
  -- , (pst).free_percent + (pst).dead_tuple_percent AS real_frag
FROM (
  SELECT ceil( reltuples / ( (bs-page_hdr)/tpl_size ) ) + ceil( toasttuples / 4 ) AS est_tblpages,
    ceil( reltuples / ( (bs-page_hdr)*fillfactor/(tpl_size*100) ) ) + ceil( toasttuples / 4 ) AS est_tblpages_ff,
    tblpages, fillfactor, bs, tblid, schemaname, tblname, heappages, toastpages, is_na
    -- , stattuple.pgstattuple(tblid) AS pst
  FROM (
    SELECT ( 4 + tpl_hdr_size + tpl_data_size + (2*ma)
        - CASE WHEN tpl_hdr_size%ma = 0 THEN ma ELSE tpl_hdr_size%ma END
        - CASE WHEN ceil(tpl_data_size)::int%ma = 0 THEN ma ELSE ceil(tpl_data_size)::int%ma END
      ) AS tpl_size,
      bs - page_hdr AS size_per_block,
      (heappages + toastpages) AS tblpages,
      heappages, toastpages, reltuples, toasttuples, bs, page_hdr, tblid, schemaname, tblname, fillfactor, is_na
    FROM (
      SELECT tbl.oid AS tblid, ns.nspname AS schemaname, tbl.relname AS tblname, tbl.reltuples,
        tbl.relpages AS heappages, coalesce(toast.relpages, 0) AS toastpages,
        coalesce(toast.reltuples, 0) AS toasttuples,
        coalesce(substring( array_to_string(tbl.reloptions, ' ') FROM 'fillfactor=([0-9]+)')::smallint, 100) AS fillfactor,
        current_setting('block_size')::numeric AS bs,
        CASE WHEN version()~'mingw32' OR version()~'64-bit|x86_64|ppc64|ia64|amd64' THEN 8 ELSE 4 END AS ma,
        24 AS page_hdr,
        23 + CASE WHEN MAX(coalesce(null_frac,0)) > 0 THEN ( 7 + count(*) ) / 8 ELSE 0::int END
           + CASE WHEN tbl.relhasoids THEN 4 ELSE 0 END AS tpl_hdr_size,
        sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 1024) ) AS tpl_data_size,
        bool_or(att.atttypid = 'pg_catalog.name'::regtype) OR count(att.attname) <> count(s.attname) AS is_na
      FROM pg_attribute AS att
        JOIN pg_class AS tbl ON att.attrelid = tbl.oid
        JOIN pg_namespace AS ns ON ns.oid = tbl.relnamespace
        LEFT JOIN pg_stats AS s ON s.schemaname=ns.nspname AND s.tablename = tbl.relname AND s.inherited=false AND s.attname=att.attname
        LEFT JOIN pg_class AS toast ON tbl.reltoastrelid = toast.oid
      WHERE att.attnum > 0 AND NOT att.attisdropped AND tbl.relkind = 'r'
      GROUP BY 1,2,3,4,5,6,7,8,9,10, tbl.relhasoids
      ORDER BY 2,3
    ) AS s
  ) AS s2
) AS s3;
You receive the following as output:
 current_database | schemaname | tblname      | real_size  | extra_size | extra_ratio      | fillfactor | bloat_size | bloat_ratio      | is_na
------------------+------------+--------------+------------+------------+------------------+------------+------------+------------------+-------
 postgres         | public     | sample_table | 1565351936 | 239951872  | 15.3289408267611 | 100        | 239951872  | 15.3289408267611 | f
To reclaim the space, run VACUUM FULL or PG_REPACK:
Postgres# vacuum full analyze sample_table;
After you run VACUUM FULL, the query returns something similar to the following output:
 current_database | schemaname | tblname      | real_size | extra_size | extra_ratio        | fillfactor | bloat_size | bloat_ratio        | is_na
------------------+------------+--------------+-----------+------------+--------------------+------------+------------+--------------------+-------
 postgres         | public     | sample_table | 41746432  | 24576      | 0.0588697017268446 | 100        | 24576      | 0.0588697017268446 | f
VACUUM FULL and re-indexing are locking operations that block other sessions, but PG_REPACK is an online method to reorganize the tables and indexes. You can query pg_stat_all_tables and pg_stat_user_tables to check the last autovacuum or manual vacuum execution. For example, see the following code:
SELECT schemaname, relname as table_name, last_vacuum, last_analyze, last_autovacuum, last_autoanalyze, n_live_tup, n_dead_tup from pg_stat_user_tables;
You receive the following as output:
 schemaname | table_name  | last_vacuum | last_analyze | last_autovacuum               | last_autoanalyze              | n_live_tup | n_dead_tup
------------+-------------+-------------+--------------+-------------------------------+-------------------------------+------------+------------
 public     | vacuum_test |             |              |                               | 2019-01-23 06:44:56.257586+00 | 13671089   | 0
You can also use this code:
SELECT schemaname, relname as table_name, last_vacuum, last_analyze, last_autovacuum, last_autoanalyze, n_live_tup, n_dead_tup from pg_stat_all_tables;
You receive the following as output:
 schemaname         | table_name | last_vacuum                   | last_analyze                  | last_autovacuum | last_autoanalyze | n_live_tup | n_dead_tup
--------------------+------------+-------------------------------+-------------------------------+-----------------+------------------+------------+------------
 information_schema | sql_sizing | 2019-01-23 07:05:06.524004+00 | 2019-01-23 07:05:06.52429+00  |                 |                  | 23         | 0
To run VACUUM ANALYZE on a table, enter the following code:
Vacuum analyze <table_name>;
To run VACUUM ANALYZE on the database, enter the following code:
Vacuum analyze verbose;
Only the superuser or database owner can run a vacuum on system tables. If substantial bloat in system tables causes performance degradation, or when you must free up bloated space to the disk, you must run VACUUM FULL. Only run this command outside of business hours, because it locks the tables on which it runs.
To check the transactional age of the database, enter the following code:
SELECT datname, age(datfrozenxid) from pg_database order by age(datfrozenxid) desc limit 20;
To prevent transaction wraparound issues in the database, enter the following code:
Vacuum freeze;
The autovacuum process can also perform these activities, and it is highly recommended that you keep it enabled. Amazon RDS for PostgreSQL has autovacuuming enabled by default. Make sure that you tune the autovacuum parameters to best suit your requirements. In Amazon RDS, the parameter rds.adaptive_autovacuum helps automatically tune the autovacuum parameters whenever the database exceeds the transaction ID thresholds.
Enter the following code to check if autovacuum is running in PostgreSQL version 9.6 and above:
SELECT datname, usename, pid, wait_event_type, current_timestamp - xact_start AS xact_runtime, query FROM pg_stat_activity WHERE upper(query) like '%VACUUM%' ORDER BY xact_start;
Long-running queries and sessions
To terminate queries that have run for a long time or are blocking another session, check the PID of the query from the pg_stat_activity table. To kill the query, run the following commands.
To cancel the query without disconnecting the connection, enter the following code:
SELECT pg_cancel_backend(pid);
To terminate the connection and cancel all other queries in that connection, enter the following code:
SELECT pg_terminate_backend(pid);
To cancel running queries, always use PG_CANCEL_BACKEND. If the query is stuck and locking other processes, you can use PG_TERMINATE_BACKEND. After termination, you might need to re-run the session to establish the connection again.
Planning backup and recovery strategies
Unlike on-premises databases, which require manual backup and recovery, Aurora for PostgreSQL and RDS PostgreSQL instances have built-in features to automate backups using snapshots. You must enable these during the creation of the Amazon RDS or Aurora instance. Amazon RDS creates a storage volume snapshot to back up the entire database instance. When you create a DB snapshot, you must identify which DB instance you want to back up, and then give your DB snapshot a name so you can restore from it later. The amount of time it takes to create a snapshot varies depending on the size of your databases. For more information, see Restoring from a DB Snapshot.
User management
User management is one of the most critical admin tasks, and you must perform it with the utmost care. When you create a new Amazon RDS PostgreSQL or Aurora for PostgreSQL instance, it creates an RDS_SUPERUSER role. This is similar to the PostgreSQL superuser of a typical PostgreSQL instance, but with a few limitations. You can manage the users that connect to the database by setting appropriate permission levels. In a default PostgreSQL environment, you manage user connections through the pg_hba.conf file, but in Amazon RDS for PostgreSQL, you must use GRANT/REVOKE. You can also assign access and privileges to users at a schema level or table level, and decide what kind of privileges you want to provide to the users. For more information, see Managing PostgreSQL users and roles.
Monitoring the database
Monitoring is an integral part of maintaining the reliability, availability, and performance of Amazon RDS and your AWS solutions. Collect monitoring data from all parts of your AWS solution so that you can debug a multi-point failure if one occurs. One of the major tasks is to set up a detailed level of monitoring for your Amazon RDS and Aurora instances. Amazon Aurora and Amazon RDS offer two types of monitoring by default: Amazon CloudWatch and Amazon RDS Performance Insights.
Monitoring with CloudWatch
CloudWatch offers the following metrics for Amazon RDS and Aurora PostgreSQL:
High CPU or RAM consumption
Disk space consumption
Network traffic
Database connections
IOPS metrics
Maximum Used Transaction IDs
Queue Depth
For more information, see Monitoring Amazon Aurora DB Cluster Metrics. CloudWatch has many metrics available to monitor the health of the Amazon RDS and Aurora instances at the hardware level. However, you must configure an Amazon SNS alarm on each metric.
Monitoring with Performance Insights
Amazon RDS Performance Insights employs lightweight data collection methods, without impacting the performance of your applications, to help you tune the database for performance.
Performance Insights offers the following metrics:
OS metrics:
CPU Utilization – Wait, Idle, Steal, Nice
Disk I/O – Read KbPS, Write IOsPS
Load Average
Swap – Cached, Free, Total
Database metrics:
Cache – blocks hit, buffers allocated
Checkpoint – Checkpoint timed, buffers checkpoint, checkpoint write latency
For more information, see Performance Insights for Amazon RDS for PostgreSQL.
Summary
This post shared a few common administrator responsibilities on Amazon RDS and Aurora for PostgreSQL databases. This provides a basic framework that you can implement on your test and production workloads. The post also highlights logging and log auditing for better management of the instances. If you have questions or comments about this post, post your thoughts in the comments.
About the Author
John Solomon is a Consultant with AWS Global Competency Center India, working closely with customers who are migrating from on-premises to the AWS Cloud. He is an AWS certified speaker and speaks at various meetups, breakout sessions, webinars, etc. He is an ardent member of the PostgreSQL community and works as a database administrator for PostgreSQL databases.
https://probdm.com/site/MjI4MDE
luxus4me · 6 years ago
Envato Tuts+ Code http://j.mp/2CZd6IK
In this article, we’re going to review PDO CRUD—a form builder and database management tool. PDO CRUD helps you build forms for your database tables with just a few lines of code, making it quick and easy to bootstrap a database application.
There are plenty of extensions available for database abstraction and specifically CRUD (create, read, update, and delete) generation for PHP and MySQL. And of course, you’ll also find commercial options that provide ready-to-use features and extended support. In the case of commercial options, you can also expect quality code, bug fixes, and new enhancements.
Today, we’re going to discuss the PDO CRUD tool, available at CodeCanyon for purchase at a very reasonable price. It’s a complete CRUD builder tool which allows you to build applications just by providing database tables and writing a few lines of code.
It works with multiple database back-ends, including MySQL, Postgres, and SQLite. In this article, we’ll see how to use PDO CRUD to build a CRUD system with the MySQL database back-end.
Installation and Configuration
In this section, we’ll see how to install and configure the PDO CRUD tool once you’ve purchased and downloaded it from CodeCanyon.
As soon as you purchase it, you’ll be able to download the zip file. Extract it, and you will find the directory with the main plugin code: PDOCrud/script. Copy this directory to your PHP application.
For example, if your project is configured at /web/demo-app/public_html, you should copy the script directory to /web/demo-app/public_html/script.
Next, you need to enter your database back-end details in the configuration file. The configuration file is located at /web/demo-app/public_html/script/config/config.php. Open that file in your favorite text editor and change the following details in that file.
$config["script_url"] = "http://my-demo-app"; /************************ database ************************/ //Set the host name to connect for database $config["hostname"] = "localhost"; //Set the database name $config["database"] = "demo_app_db"; //Set the username for database access $config["username"] = "demo_app"; //Set the pwd for the database user $config["password"] = "demo_app"; //Set the database type to be used $config["dbtype"] = "mysql"
As you can see, the details are self-explanatory. The $config["script_url"] is set to the URL which you use to access your site.
Once you’ve saved the database details, you’re ready to use the PDO CRUD tool. In our example, we’ll create two MySQL tables that hold employee and department data.
employees: holds employee information
department: holds department information
Open your database management tool and run the following commands to create tables as we’ve just discussed above. I use PhpMyAdmin to work with the MySQL database back-end.
Firstly, let’s create the department table.
CREATE TABLE `department` (
  `id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `department_name` varchar(255) NOT NULL DEFAULT ''
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Next, we’ll create the employee table.
CREATE TABLE `employee` (
  `id` int(12) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `dept_id` int(11) UNSIGNED NOT NULL,
  `first_name` varchar(255) NOT NULL DEFAULT '',
  `last_name` varchar(255) NOT NULL DEFAULT '',
  `email` varchar(255) NOT NULL DEFAULT '',
  `phone` varchar(255) NOT NULL DEFAULT ''
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
As you can see, we’ve used the dept_id column in the employee table, which holds the id of the corresponding department stored in the department table.
Once you’ve created the tables in your database, we’re ready to build a CRUD application interface using the PDO CRUD tool!
How to Set Up Basic CRUD
In this section, we’ll see how you can set up a basic CRUD interface using the PDO CRUD tool by writing just a few lines of code.
The Department Table
We’ll start with the department table.
Let’s create department.php with the following contents. If your document root is /web/demo-app/public_html/, create the department.php file at /web/demo-app/public_html/department.php. Recall that we’ve already copied the script directory to /web/demo-app/public_html/script.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();
echo $pdocrud->dbTable("department")->render();
And now, if you point your browser to the department.php file, you should see something like this:
Phew! With just two lines of code, you have a ready-to-use CRUD UI which allows you to perform all the necessary create, read, update, and delete actions on your model. Not to mention that the default listing view itself contains a lot of features, including:
search
built-in pagination
print
export records to CSV, PDF or Excel format
bulk delete operation
sorting by columns
Click on the Add button on the right-hand side, and it’ll open the form to add a department record.
Let’s add a few records using the Add button and see how it looks.
As you can see, this is a pretty light-weight and neat interface. With almost no effort, we’ve built a CRUD for the department model! Next, we’ll see how to do the same for the employee table.
The Employee Table
In this section, we’ll see how to build a CRUD for the employee table. Let’s create employee.php with the following contents.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();
echo $pdocrud->dbTable("employee")->render();
It's pretty much the same code as last time; we just need to change the name of the table. If you click on the Add button, it also brings you a nice form which allows you to add the employee record.
You might have spotted one problem: the Dept id field is a text field, but it would be better as a drop-down containing the name of the departments. Let’s see how to achieve this.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();

// get departments
$data = $pdocrud->getPDOModelObj()->select("department");
$options = array();
foreach($data as $record) {
    $options[$record['id']] = $record['department_name'];
}

// change the type of the dept_id field from textfield to select dropdown
$pdocrud->fieldTypes("dept_id", "select");
$pdocrud->fieldDataBinding("dept_id", $options, "", "", "array");

echo $pdocrud->dbTable("employee")->render();
In this code, we've accessed the department table through PDO CRUD so that we can associate the department name with the department ids. Then, we've updated the binding options for the department id field so that it will render as a dropdown (select) list.
Now, click on the Add button to see how it looks! You should see the Dept Id field is now converted to a dropdown!
Let’s add a few employee records and see how the employee listing looks:
That looks nice! But we have another small issue here: you can see that the Dept id column shows the ID of the department, and it would be nice to display the actual department name instead. Let’s find out how to achieve this!
Let’s revise the code of employee.php with the following contents.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();

// change the type of the dept_id field from textfield to select dropdown
$data = $pdocrud->getPDOModelObj()->select("department");
$options = array();
foreach($data as $record) {
    $options[$record['id']] = $record['department_name'];
}
$pdocrud->fieldTypes("dept_id", "select");
$pdocrud->fieldDataBinding("dept_id", $options, "", "", "array");

$pdocrud->crudTableCol(array("first_name", "last_name", "department_name", "email", "phone"));
$pdocrud->joinTable("department", "employee.dept_id = department.id", "INNER JOIN");

echo $pdocrud->dbTable("employee")->render();
Here, we've created a join between the employee and department tables with $pdocrud->joinTable, and then told PDO CRUD to render only the employee name, department name, and contact info with $pdocrud->crudTableCol.
And with that change, the employee listing should look like this:
As you can see, the PDO CRUD script is pretty flexible and allows you every possible option to customize your UI.
So far, we’ve discussed how to set up a basic CRUD interface. We’ll see a few more options that you could use to enhance and customize your CRUD UI in the next section.
Customization Options
In this section, we’ll see a few customization options provided by the PDO CRUD tool. Of course, it’s not possible to go through all the options since the PDO CRUD tool provides much more than we could cover in a single article, but I’ll try to highlight a couple of important ones.
Inline Edit
Inline editing is one of the most important features, allowing you to edit a record quickly on the listing page itself. Let’s see how to enable it for the department listing page.
Let’s revise the department.php script as shown in the following snippet.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();
$pdocrud->setSettings("inlineEditbtn", true);
echo $pdocrud->dbTable("department")->render();
As you can see, we’ve just enabled the inlineEditbtn setting, and the inline editing feature is there right away!
This is a really handy feature which allows you to edit records on the fly!
Filters
As you might have noticed, the department listing page already provides a free text search to filter records. However, you may want to add your own custom filters to improve the search feature. That's exactly what the Filters option provides: it allows you to build custom filters!
We’ll use the employee.php for this feature as it’s the perfect demonstration use-case. On the employee listing page, we’re displaying the department name for each employee record, so let’s build a department filter which allows you to filter records by the department name.
Go ahead and revise your employee.php as shown in the following snippet.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();

$data = $pdocrud->getPDOModelObj()->select("department");
$options = array();
foreach($data as $record) {
    $options[$record['id']] = $record['department_name'];
}
$pdocrud->fieldTypes("dept_id", "select"); //change state to select dropdown
$pdocrud->fieldDataBinding("dept_id", $options, "", "", "array"); //add data using array in select dropdown

$pdocrud->crudTableCol(array("first_name", "last_name", "department_name", "email", "phone"));
$pdocrud->joinTable("department", "employee.dept_id = department.id", "INNER JOIN");

$pdocrud->addFilter("department_filter", "Department", "dept_id", "dropdown");
$pdocrud->setFilterSource("department_filter", $options, "", "", "array");

echo $pdocrud->dbTable("employee")->render();
We’ve just added two lines, with calls to addFilter and setFilterSource, and with that, the employee list looks like the following:
Isn’t that cool? With just two lines of code, you’ve added your custom filter!
Image Uploads
This is a must-have feature should you wish to set up file uploads in your forms. With just a single line of code, you can convert a regular field to a file-upload field, as shown in the following snippet.
I'll assume that you have a profile_image field in your employee table, and that you’re ready to convert it to a file-upload field!
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud();
$pdocrud->fieldTypes("profile_image", "image");
echo $pdocrud->dbTable("employee")->render();
That's it! Users will now be able to upload an image to the profile_image field.
CAPTCHA
Nowadays, if you want to save your site from spamming, CAPTCHA verification is an essential feature. The PDO CRUD tool already provides a couple of options to choose from.
It provides two options: CAPTCHA and ReCAPTCHA. If you select the CAPTCHA option, it presents a mathematical puzzle for the user to solve. On the other hand, if you select the ReCAPTCHA option, it presents a famous I’m not a robot puzzle!
If you want to add a simple CAPTCHA puzzle, you need to add the following line before you render your CRUD.
$pdocrud->formAddCaptcha("captcha");
On the other hand, if you prefer ReCAPTCHA, you can achieve the same by using the following snippet.
$pdocrud->recaptcha("your-site-key","site-secret");
You just need to replace the your-site-key and site-secret arguments with valid credentials from Google.
So far, we’ve discussed options that enhance the functionality of your application. Next, we’ll see how you could alter the skin and thus the look and feel of your application.
Skins
If you don’t like the default skin, you have a couple of options to choose from. The PDO CRUD tool provides dark, fair, green and advanced skins as other options to choose from.
For example, the following listing is based on the green theme.
It looks nice, doesn't it?
Pure Bootstrap
Although the default skin already supports responsive layouts, the PDO CRUD tool also supports Bootstrap library integration!
You need to use the following snippet should you wish to build your layout using the Bootstrap library.
<?php
require_once "script/pdocrud.php";
$pdocrud = new PDOCrud(false, "pure", "pure");
echo $pdocrud->dbTable("department")->render();
And here’s what it looks like:
Conclusion
Today, we reviewed the PDO CRUD advanced database form builder and data management tool available at CodeCanyon. This is a CRUD application interface builder tool at its core. It provides a variety of customization options that cover almost everything a CRUD system requires.
As I said earlier, it’s really difficult to cover everything the PDO CRUD tool provides in a single article, but hopefully the official documentation should give you some insight into its comprehensive features.
I hope you’re convinced that the PDO CRUD tool is powerful enough to fulfill your requirements and allows you to get rid of the repetitive work you have to do every time you want to set up a CRUD in your application. Although it’s a commercial plugin, I believe it’s reasonably priced considering the plethora of features it provides.
If you have any suggestions or comments, feel free to use the feed below and I’ll be happy to engage in a conversation!
http://j.mp/2TFLhdX via Envato Tuts+ Code URL : http://j.mp/2etecmc
t-baba · 8 years ago
Search and Autocomplete in Rails Apps
Searching is one of the most common features found on virtually any website. There are numerous solutions out there for easily incorporating search into your application, but in this article I'll discuss Postgres' native search in Rails applications powered by the pg_search gem. On top of that, I’ll show you how to add an autocomplete feature with the help of the select2 plugin.
I'll explore three examples of employing search and autocomplete features in Rails applications. Specifically, this article covers:
building a basic search feature
discussing additional options supported by pg_search
building autocomplete functionality to display matched user names
using a third-party service to query for geographical locations based on the user's input and powering this feature with autocomplete.
The source code can be found at GitHub.
The working demo is available at sitepoint-autocomplete.herokuapp.com.
Getting Started
Go ahead and create a new Rails application. I’ll be using Rails 5.0.1, but most of the concepts explained in this article apply to older versions as well. Since we’re going to use Postgres' search, the app should be initialized with the PostgreSQL database adapter:
rails new Autocomplete --database=postgresql
Create a new PG database and setup config/database.yml properly. To exclude my Postgres username and password from the version control system, I'm using the dotenv-rails gem:
Gemfile
# ...
group :development do
  gem 'dotenv-rails'
end
To install it, run the following:
$ bundle install
and create a file in the project's root:
.env
PG_USER: 'user'
PG_PASS: 'password'
Then exclude this file from version control:
.gitignore
.env
Your database configuration may look like this:
config/database.yml
development:
  adapter: postgresql
  database: autocomplete
  host: localhost
  user: <%= ENV['PG_USER'] %>
  password: <%= ENV['PG_PASS'] %>
Now let's create a table and populate it with sample data. Rather than invent anything complex here, I'll simply generate a users table with name and surname columns:
$ rails g model User name:string surname:string
$ rails db:migrate
Our sample users should have distinct names so that we can test the search feature. So I'll use the Faker gem:
Gemfile
# ...
group :development do
  gem 'faker'
end
Install it by running this:
$ bundle install
Then tweak the seeds.rb file to create 50 users with random names and surnames:
db/seeds.rb
50.times do
  User.create({name: Faker::Name.first_name, surname: Faker::Name.last_name})
end
Run the script:
$ rails db:seed
Lastly, introduce a root route, controller with the corresponding action and a view. For now, it will only display all the users:
config/routes.rb
# ...
resources :users, only: [:index]
root to: 'users#index'
users_controller.rb
class UsersController < ApplicationController
  def index
    @users = User.all
  end
end
views/users/index.html.erb
<ul>
  <%= render @users %>
</ul>
views/users/_user.html.erb
<li>
  <%= user.name %> <%= user.surname %>
</li>
That's it; all preparations are done! Now we can proceed to the next section and add a search feature to the app.
Continue reading %Search and Autocomplete in Rails Apps%
by Ilya Bodrov-Krukowski via SitePoint http://ift.tt/2pXQe6N
kmiddleton14-blog · 8 years ago
How to build a basic end to end web application using React
In this blog post, I will be going over how to achieve a basic skeleton setup for a web application that uses React, Redux, Sequelize, and Express. This post is intended for anyone who wants to quickly set up a web app for a hackathon or a personal project, or who just wants to understand how all these tools fit together.
Directory structure
In our directory, we will have three main folders:
browser
public
server
Browser will consist of all the client side code where we will add our react components and containers, starting html file, and react-redux reducers and action-creators
Public will be all the static files we want to be made (you guessed it) public.  Here is where we will have our css files, images, favicon, and our bundle that webpack builds for us
Server is where we will be adding our express app and routes, and sequelize database structures
project
|
+-- browser
|   |
|   +-- react
|   |   |
|   |   +-- action-creators
|   |   +-- components
|   |   +-- containers
|   |   +-- reducers
|   |   +-- index.js
|   |
|   +-- index.html
|
+-- public
|   |
|   +-- bundle.js (public js file webpack automatically builds)
|   +-- css
|
+-- server
|   |
|   +-- app
|   |   |
|   |   +-- index.js (setting up express and routes)
|   |
|   +-- db
|   |   |
|   |   +-- models
|   |   +-- db.js
|   |
|   +-- node_modules (created automatically from npm)
Install
Now that we have all our folders setup correctly in our project, it’s time to initiate our npm package in our terminal. This will create our package.json file that holds our project's dependencies. After initializing the package.json it should appear in your project's directory
>npm init
Time to download all our necessary libraries. On the browser side, we will need React and Redux. For React, we will need both the react and react-dom libraries; react-dom is needed to render React into the DOM. And we need the redux library along with axios, since axios allows us to make our AJAX calls when using Redux.
>npm install --save react react-dom redux axios
We also need babel and webpack in our dev dependencies. This bundles our react jsx files into the bundle.js public file.
>npm install --save-dev webpack babel-core babel-loader babel-preset-es2015 babel-preset-react
On our server side, we need to set up Express and the database. I will use Postgres for my database in this example.
>npm install --save express
When installing the sequelize library, you also need to install the library for the database you will be using:
>npm install --save sequelize pg pg-hstore
After all of these libraries are installed, all the dependencies should be populated in your package.json file
Setting up browser files
Let’s add our webpack.config.js file.  But first, what exactly is webpack doing and why do we need it again?  Webpack takes some javascript code, reads in that source, and then produces bundled javascript code in one file to run in the browser (bundle.js). 
So how does webpack know where to get the javascript files and know where to put the bundled output?  It looks in our webpack.config.js file for the entry and output file paths. Our entry file should be the main react root component.
'use strict';
var webpack = require('webpack');

module.exports = {
  entry: './browser/app.js', // entry point
  output: {
    path: __dirname,
    filename: './public/bundle.js' // output path of bundle
  },
  context: __dirname,
  devtool: 'source-map',
  resolve: {
    extensions: ['', '.js', '.jsx']
  },
  module: {
    loaders: [
      {
        test: /jsx?$/,
        exclude: /(node_modules|bower_components)/,
        loader: 'babel',
        query: {
          presets: ['react', 'es2015']
        }
      }
    ]
  }
};
In the module section, we tell it to use babel which transforms our react jsx files into code that can be run in the browser. To build our webpack, we need to use this command in the terminal. This step is necessary since webpack takes the code we've written, and makes it into an executable output
>webpack -w
After getting our webpack config file set up, we need to set up our index.html file, located in our browser folder
<!DOCTYPE html>
<html>
  <head>
    <title>My Page</title>
    <script src="/bundle.js"></script>
  </head>
  <body>
    <div id='app'></div>
  </body>
</html>
To me, this is one of the coolest things about React! This is our entire HTML file that will initially be served by the server. From there, the rest of our code will be added to the DOM inside that div tag that we called "app". Our next step will be adding our first React file and specifying which HTML tag to add it to.
Within our react folder, add an index.js file.
import React from 'react';
import ReactDOM from 'react-dom';

ReactDOM.render(
  <h1>Welcome to your homepage!</h1>,
  document.getElementById('app')
);
the document.getElementById here is very important since it specifies the name of the div to add all our jsx code to. This is a very basic h1 tag being added for demonstration, but we could add other components here as well or some javascript.
Time to add our server and express route
In our app folder, add an index.js with the following code
const express = require('express');
const app = express();
const path = require('path');

app.use(express.static(path.join(__dirname, '../public')));

app.get('/', (req, res) => {
  const filePath = path.resolve('browser/index.html');
  res.sendFile(filePath);
});

const port = 8080;
const server = app.listen(port, function(err) {
  if (err) throw err;
  console.log('server is listening on port ', port);
});

module.exports = app;
This is a very basic server and express route which only sends across our index.html file when landing on localhost:8080