#postgres configuration parameters
newtglobal · 11 months ago
The Ultimate Guide to Migrating from Oracle to PostgreSQL: Challenges and Solutions
Challenges in Migrating from Oracle to PostgreSQL
Migrating from Oracle to PostgreSQL is a significant endeavor that can yield substantial benefits in terms of cost savings, flexibility, and advanced features. Understanding the challenges involved is crucial for ensuring a smooth and successful transition. Here are some of the key obstacles organizations may face during the migration:
1. Schema Differences
Challenge: Oracle and PostgreSQL have different schema structures, which can complicate the migration process. Oracle's extensive use of features such as PL/SQL, packages, and sequences needs careful mapping to PostgreSQL equivalents.
Solution:
Schema Conversion Tools: Utilize tools like Ora2Pg, AWS Schema Conversion Tool (SCT), and EDB Postgres Migration Toolkit to automate and simplify the conversion of schemas.
Manual Adjustments: In some cases, manual adjustments may be necessary to address specific incompatibilities or custom Oracle features not directly supported by PostgreSQL.
2. Data Type Incompatibilities
Challenge: Oracle and PostgreSQL support different data types, and direct mapping between them can be challenging. For example, Oracle's NUMBER data type has no single direct equivalent in PostgreSQL.
Solution:
Data Type Mapping: Use migration tools that can automatically map Oracle data types to PostgreSQL data types, such as PgLoader and Ora2Pg.
Custom Scripts: Write custom scripts to handle complex data type conversions that are not supported by automated tools.
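As an illustration of such a script, here is a minimal Python sketch of an Oracle-to-PostgreSQL type-mapping helper; the mapping table, column names, and precision thresholds are simplified assumptions rather than a complete rule set.

# Illustrative sketch of a custom Oracle-to-PostgreSQL type mapping helper.
# The mapping below is a simplified example, not a complete rule set.
ORACLE_TO_POSTGRES = {
    "VARCHAR2": "varchar",
    "NVARCHAR2": "varchar",
    "CLOB": "text",
    "BLOB": "bytea",
    "DATE": "timestamp",   # Oracle DATE carries a time component
    "RAW": "bytea",
}

def map_column(name, ora_type, precision=None, scale=None):
    """Return a PostgreSQL column definition for an Oracle column."""
    if ora_type == "NUMBER":
        # NUMBER has no single direct equivalent; choose by precision/scale.
        if precision is None:
            return f"{name} numeric"
        if scale in (None, 0) and precision <= 9:
            return f"{name} integer"
        if scale in (None, 0) and precision <= 18:
            return f"{name} bigint"
        return f"{name} numeric({precision},{scale or 0})"
    return f"{name} {ORACLE_TO_POSTGRES.get(ora_type, 'text')}"

if __name__ == "__main__":
    print(map_column("order_id", "NUMBER", precision=10))
    print(map_column("comments", "CLOB"))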
3. Stored Procedures and Triggers
Challenge: Oracle's PL/SQL and PostgreSQL's PL/pgSQL are similar but have distinct differences that can complicate the migration of stored procedures, functions, and triggers.
Solution:
Code Conversion Tools: Use tools like Ora2Pg to convert PL/SQL code to PL/pgSQL. However, be prepared to review and test the converted code thoroughly.
Manual Rewriting: For complex procedures and triggers, manual rewriting and optimization may be necessary to ensure they work correctly in PostgreSQL.
4. Performance Optimization
Challenge: Performance tuning is essential to ensure that the PostgreSQL database performs as well or better than the original Oracle database. Differences in indexing, query optimization, and execution plans can affect performance.
Solution:
Indexing Strategies: Analyze and implement appropriate indexing strategies tailored to PostgreSQL.
Query Optimization: Optimize queries and consider using PostgreSQL-specific features, such as table partitioning and advanced indexing techniques.
Configuration Tuning: Adjust PostgreSQL configuration parameters to suit the workload and hardware environment.
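As a sketch of what configuration tuning can look like, the snippet below applies a few commonly tuned PostgreSQL configuration parameters via ALTER SYSTEM using psycopg2; the connection string and every value shown are placeholders to adapt to your own workload and hardware, not recommendations.

# Sketch: apply a few PostgreSQL configuration parameters with ALTER SYSTEM.
# Connection details and values are placeholders; tune for your own workload.
import psycopg2

settings = {
    "shared_buffers": "2GB",          # requires a server restart to take effect
    "work_mem": "64MB",
    "maintenance_work_mem": "512MB",
    "effective_cache_size": "6GB",
}

conn = psycopg2.connect("dbname=appdb user=postgres host=localhost")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
with conn.cursor() as cur:
    for name, value in settings.items():
        cur.execute(f"ALTER SYSTEM SET {name} = %s", (value,))
    cur.execute("SELECT pg_reload_conf()")  # reloads parameters that do not need a restart
conn.close()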
5. Data Migration and Integrity
Challenge: Ensuring data integrity during the migration process is critical. Huge volumes of data and complex data relationships can make data migration challenging.
Solution:
Data Migration Tools: Use tools like PgLoader and the data migration features of Ora2Pg to facilitate efficient and accurate data transfer.
Validation: Perform thorough data validation and integrity checks post-migration to ensure that all data has been accurately transferred and is consistent.
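As a first-pass check, a row-count comparison between the two databases can be scripted; the sketch below assumes the cx_Oracle and psycopg2 drivers, and the connection details and table list are placeholders. Matching counts alone do not prove integrity, so checksums and spot queries should follow.

# Minimal sketch: compare row counts per table after migration.
# Connection strings and the table list are placeholders.
import cx_Oracle
import psycopg2

TABLES = ["customers", "orders", "order_items"]

ora = cx_Oracle.connect("scott", "tiger", "ora-host/ORCLPDB1")
pg = psycopg2.connect("dbname=appdb user=app host=pg-host")

for table in TABLES:
    with ora.cursor() as oc, pg.cursor() as pc:
        oc.execute(f"SELECT COUNT(*) FROM {table}")
        pc.execute(f"SELECT COUNT(*) FROM {table}")
        ora_count, pg_count = oc.fetchone()[0], pc.fetchone()[0]
        status = "OK" if ora_count == pg_count else "MISMATCH"
        print(f"{table}: oracle={ora_count} postgres={pg_count} [{status}]")

ora.close()
pg.close()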
6. Application Compatibility
Challenge: Applications built to interact with Oracle may require modifications to work seamlessly with PostgreSQL. This includes changes to database connection settings, SQL queries, and error handling.
Solution:
Code Review: Conduct a comprehensive review of application code to identify and modify Oracle-specific SQL queries and database interactions.
Testing: Implement extensive testing to ensure that applications function correctly with the new PostgreSQL database.
7. Training and Expertise
Challenge: The migration process requires a deep understanding of both Oracle and PostgreSQL. Lack of expertise in PostgreSQL can be a significant barrier.
Solution:
Training Programs: Invest in training programs for database administrators and developers to build expertise in PostgreSQL.
Consultants: Consider hiring experienced consultants or engaging with vendors who specialize in database migrations.
8. Downtime and Business Continuity
Challenge: Minimizing downtime during the migration is crucial for maintaining business continuity. Unexpected issues during migration can lead to extended downtime and disruptions.
Solution:
Detailed Planning: Create a comprehensive migration plan with detailed timelines and contingency plans for potential issues.
Incremental Migration: Consider incremental or phased migration approaches to reduce downtime and ensure a smoother transition.
Elevating Data Operations: The Impact of PostgreSQL Migration on Innovation
PostgreSQL migration not only enhances data management capabilities but also positions organizations to better adapt to future technological advancements. With careful management of the migration process, businesses can unlock the full potential of PostgreSQL, driving innovation and efficiency in their data operations.
From Oracle to PostgreSQL: Effective Strategies for a Smooth Migration
Navigating the migration from Oracle to PostgreSQL involves overcoming several challenges, from schema conversion to data integrity and performance optimization. Addressing these issues requires a combination of effective tools, such as Ora2Pg and AWS SCT, and strategic planning. By leveraging these tools and investing in comprehensive training, organizations can ensure a smoother transition and maintain business continuity. The key to success lies in meticulous planning and execution, including phased migrations and thorough testing. Despite the complexities, the rewards of adopting PostgreSQL (cost efficiency, scalability, and advanced features) far outweigh the initial hurdles.
Thanks For Reading
For More Information, Visit Our Website: https://newtglobal.com/
codeonedigest · 2 years ago
Spring Boot Microservice Project with Postgres DB Tutorial with Java Example for Beginners  
Full Video Link: https://youtu.be/iw4wO9gEb50 Hi, a new #video on #springboot #microservices with #postgres #database is published on #codeonedigest #youtube channel. Complete guide for #spring boot microservices with #postgressql. Learn #programming #
In this video, we will learn how to download and install the Postgres database, how to integrate Postgres with a Spring Boot microservice application, and how to perform CRUD operations, i.e. Create, Read, Update, and Delete, on the Customer entity. Spring Boot is built on top of Spring and contains all the features of Spring. It is becoming a favorite of developers these…
absalomcarlisle1 · 4 years ago
Absalom Carlisle - DATA ANALYST
Absalom Carlisle is a customer-focused leader in operations, data analytics, project management and business development. Drives process improvements to contain costs, increase productivity and grow revenue through data analysis using Python, SQL and Excel. Creates strategies and allocates resources through competitive analysis and business intelligence insights with visualizations using Tableau and Power BI. Excellent presentation, analytical, communication and problem-solving skills. Develops strong relationships with stakeholders to mitigate issues and to foster change. Nashville Software School will enhance and help me acquire new skills through a competitive program with unparalleled instruction. Working on individual and group projects using real data sets from local companies is invaluable. The agile remote-working environment has solidified, and will continue to solidify, my expertise as I prepare my journey to join the data analytics career path.
Technical Skills
· DATA ANALYSIS · SQL SERVER · POSTGRES SQL · EXCEL/PIVOT TABLES
· PYTHON/JUPYTER NOTEBOOKS · TABLEAU/TABLEAU-PREP · POWER BI
· SSRS/SSIS · GITBASH/GITHUB · KANBAN
DATA ANALYST EXPERIENCE
Querying Databases with SQL
Indexing and Query Tuning                                                                                    
Report Design W/Data Sets and Aggregates                                  
Sub-Reports-Parameters and Filters
Data Visualization W/Tableau and Power-BI
 Report Deployment                                                              
Metadata Repository                                                                
Data Warehousing-Delivery Process                                      
Data Warehouse Schemas
Star Schemas-Snowflakes Schemas                                  
PROFESSIONAL EXPERIENCE
Quantrell Auto Group
Director of Operations | 2016- 2020
·         Fostered strong partnerships with business leaders, senior business managers, and business vendors.
·         Analyzed business vendor performances using Excel data with Tableau to create reports and dashboards for insights that helped implement vendor specific plans, garnering monthly savings of $25K.
·         Managed and worked with high profile Contractors and architecture firms that delivered 3 new $7M construction building projects for Subaru, Volvo and Cadillac on time and under budget.
·         Led energy savings initiative that updated HVAC systems, installed LED lighting though-out campus, introduced and managed remote controlled meters - reducing monthly costs from $38K to $18K and gaining $34K in energy rebate from the utility company- as a result, the company received Green Dealer Award recognition nationally.
·         Collected, tracked and organized data to evaluate current business and market trends using Tableau.
·         Conducted in-depth research of vehicle segments and presented to Sr. management recommendations to improve accuracy of residual values forecasts by 25%.
·         Identified inefficiencies in equipment values forecasts and recommended improved policies.
·         Manipulated residual values segment data and rankings using pivot tables, pivot charts.
·         Created routine and ad-hoc reports for internal and for external customer’s requests.
·         Provided project budgeting and cost estimation for proposal submission.
·         Established weekly short-term vehicle forecast based on historical data sets, enabling better anticipation capacity.
·         Selected by management to head the operational integration of Avaya Telecommunication system, Cisco Meraki Cloud network system and the Printer install project.
·         Scheduled and completed 14 Cisco Meraki inspections to 16 buildings, contributing 99% network up-time.
·         Following design plans, installed and configured 112 workstations and Cisco Meraki Switches, fulfilling 100% user needs.
Clayton Healthcare Services Founder | 2009 - 2015
·         Successfully managed home healthcare business from zero to six-figure annual revenues. Drove growth through strategic planning, budgeting, and business development.
·         Built a competent team from scratch as a startup company.
·         Built strategic marketing and business development plans.
·         Built and managed basic finance, bookkeeping, and accounting functions using excel.
·         Processed, audited and maintained daily, monthly payable-related activities, including data entry of payables and related processing, self-auditing of work product, reviews and processing of employee’s reimbursements, and policy/procedure compliance.
·         Increased market share through innovative marketing strategies and excellent customer service.
JP Morgan Chase
Portfolio Analyst 2006-2009
·         Researched potential equity, fixed income, and alternative investments for high net-worth individuals and institutional clients.
·         Analyzed quarterly performance data to identify trends in operations using Alteryx and Excel.
·         SME in providing recommendations for Equity Solutions programs to enable portfolio managers to buy securities at their own discretion.
·         Created ad-hoc reports to facilitate executive-level decision making
·         Maintained and monitored offered operational support for key performance indicators and trends dashboards
EDUCATION & TRAINING
Bachelor of Science in Managerial Economics, 2011 - Washington University, St. Louis, MO
Project Management Certification, 2014 - St. Louis University
Microsoft BI Full Stack Certification - St. Louis, MO
Data Science/Analytics, Jan 2021 - Nashville Software School, Nashville, TN
readevalprint · 4 years ago
Ichiran@home 2021: the ultimate guide
Recently I’ve been contacted by several people who wanted to use my Japanese text segmenter Ichiran in their own projects. This is not surprising since it’s vastly superior to Mecab and similar software, and is occasionally updated with new vocabulary unlike many other segmenters. Ichiran powers ichi.moe which is a very cool webapp that helped literally dozens of people learn Japanese.
A big obstacle towards the adoption of Ichiran is the fact that it’s written in Common Lisp and people who want to use it are often unfamiliar with this language. To fix this issue, I’m now providing a way to build Ichiran as a command line utility, which could then be called as a subprocess by scripts in other languages.
This is a master post on how to get Ichiran installed and how to use it for people who don’t know any Common Lisp at all. I’m providing instructions for Linux (Ubuntu) and Windows; I haven’t tested whether it works on other operating systems, but it probably should.
PostgreSQL
Ichiran uses a PostgreSQL database as a source for its vocabulary and other things. On Linux install postgresql using your preferred package manager. On Windows use the official installer. You should remember the password for the postgres user, or create a new user if you know how to do it.
Download the latest release of Ichiran database. On the release page there are commands needed to restore the dump. On Windows they don't really work, instead try to create database and restore the dump using pgAdmin (which is usually installed together with Postgres). Right-click on PostgreSQL/Databases/postgres and select "Query tool...". Paste the following into Query editor and hit the Execute button.
CREATE DATABASE [database_name] WITH TEMPLATE = template0 OWNER = postgres ENCODING = 'UTF8' LC_COLLATE = 'Japanese_Japan.932' LC_CTYPE = 'Japanese_Japan.932' TABLESPACE = pg_default CONNECTION LIMIT = -1;
Then refresh the Databases folder and you should see your new database. Right-click on it then select "Restore", then choose the file that you downloaded (it wants ".backup" extension by default so choose "Format: All files" if you can't find the file).
You might get a bunch of errors when restoring the dump saying that "user ichiran doesn't exist". Just ignore them.
SBCL
Ichiran uses SBCL to run its Common Lisp code. You can download Windows binaries for SBCL 2.0.0 from the official site, and on Linux you can use the package manager, or also use binaries from the official site although they might be incompatible with your operating system.
However you really want the latest version 2.1.0, especially on Windows for uh... reasons. There's a workaround for Windows 10 though, so if you don't mind turning on that option, you can stick with SBCL 2.0.0 really.
After installing some version of SBCL (SBCL requires SBCL to compile itself), download the source code of the latest version and let's get to business.
On Linux it should be easy, just run
sh make.sh --fancy
sudo sh install.sh
in the source directory.
On Windows it's somewhat harder. Install MSYS2, then run "MSYS2 MinGW 64-bit".
pacman -S mingw-w64-x86_64-toolchain make
# for paths in MSYS2 replace drive prefix C:/ by /c/ and so on
cd [path_to_sbcl_source]
export PATH="$PATH:[directory_where_sbcl.exe_is_currently]"
# check that you can run sbcl from command line now
# type (sb-ext:quit) to quit sbcl
sh make.sh --fancy
unset SBCL_HOME
INSTALL_ROOT=/c/sbcl sh install.sh
Then edit Windows environment variables so that PATH contains c:\sbcl\bin and SBCL_HOME is c:\sbcl\lib\sbcl (replace c:\sbcl here and in INSTALL_ROOT with another directory if applicable). Check that you can run a normal Windows shell (cmd) and run sbcl from it.
Quicklisp
Quicklisp is a library manager for Common Lisp. You'll need it to install the dependencies of Ichiran. Download quicklisp.lisp from the official site and run the following command:
sbcl --load /path/to/quicklisp.lisp
In SBCL shell execute the following commands:
(quicklisp-quickstart:install)
(ql:add-to-init-file)
(sb-ext:quit)
This will ensure quicklisp is loaded every time SBCL starts.
Ichiran
Find the directory ~/quicklisp/local-projects (%USERPROFILE%\quicklisp\local-projects on Windows) and git clone Ichiran source code into it. It is possible to place it into an arbitrary directory, but that requires configuring ASDF, while ~/quicklisp/local-projects/ should work out of the box, as should ~/common-lisp/ but I'm not sure about Windows equivalent for this one.
Ichiran wouldn't load without settings.lisp file which you might notice is absent from the repository. Instead, there's a settings.lisp.template file. Copy settings.lisp.template to settings.lisp and edit the following values in settings.lisp:
*connection* this is the main database connection. It is a list of at least 4 elements: database name, database user (usually "postgres"), database password and database host ("localhost"). It can be followed by options like :port 5434 if the database is running on a non-standard port.
*connections* is an optional parameter, if you want to switch between several databases. You can probably ignore it.
*jmdict-data* this should be a path to these files from JMdict project. They contain descriptions of parts of speech etc.
ignore all the other parameters, they're only needed for creating the database from scratch
Run sbcl. You should now be able to load Ichiran with
(ql:quickload :ichiran)
On the first run, run the following command. It should also be run after downloading a new database dump and updating Ichiran code, as it fixes various issues with the original JMdict data.
(ichiran/mnt:add-errata)
Run the test suite with
(ichiran/test:run-all-tests)
If not all tests pass, you did something wrong! If none of the tests pass, check that you configured the database connection correctly. If all tests pass, you have a working installation of Ichiran. Congratulations!
Some commands that can be used in Ichiran:
(ichiran:romanize "一覧は最高だぞ" :with-info t) this is basically a text-only equivalent of ichi.moe, everyone's favorite webapp based on Ichiran.
(ichiran/dict:simple-segment "一覧は最高だぞ") returns a list of WORD-INFO objects which contain a lot of interesting data which is available through "accessor functions". For example (mapcar 'ichiran/dict:word-info-text (ichiran/dict:simple-segment "一覧は最高だぞ")) will return a list of separate words in a sentence.
(ichiran/dict:dict-segment "一覧は最高だぞ" :limit 5) like simple-segment but returns top 5 segmentations.
(ichiran/dict:word-info-from-text "一覧") gets a WORD-INFO object for a specific word.
ichiran/dict:word-info-str converts a WORD-INFO object to a human-readable string.
ichiran/dict:word-info-gloss-json converts a WORD-INFO object into a "json" "object" containing dictionary information about a word, which is not really JSON but an equivalent Lisp representation of it. But, it can be converted into a real JSON string with jsown:to-json function. Putting it all together, the following code will convert the word 一覧 into a JSON string:
(jsown:to-json (ichiran/dict:word-info-json (ichiran/dict:word-info-from-text "一覧")))
Now, if you're not familiar with Common Lisp all this stuff might seem confusing. Which is where ichiran-cli comes in, a brand new Command Line Interface to Ichiran.
ichiran-cli
ichiran-cli is just a simple command-line application that can be called by scripts just like mecab and its ilk. The main difference is that it must be built by the user, who has already completed the previous steps of the Ichiran installation process. It needs to access the postgres database, and the connection settings from settings.lisp are currently "baked in" during the build. It also contains a cache of some database references, so modifying the database (i.e. updating to a newer database dump) without also rebuilding ichiran-cli is highly inadvisable.
The build process is very easy. Just run sbcl and execute the following commands:
(ql:quickload :ichiran/cli)
(ichiran/cli:build)
sbcl should exit at this point, and you'll have a new ichiran-cli (ichiran-cli.exe on Windows) executable in ichiran source directory. If sbcl didn't exit, try deleting the old ichiran-cli and do it again, it seems that on Linux sbcl sometimes can't overwrite this file for some reason.
Use -h option to show how to use this tool. There will be more options in the future but at the time of this post, it prints out the following:
>ichiran-cli -h
Command line interface for Ichiran
Usage: ichiran-cli [-h|--help] [-e|--eval] [-i|--with-info] [-f|--full] [input]
Available options:
  -h, --help       print this help text
  -e, --eval       evaluate arbitrary expression and print the result
  -i, --with-info  print dictionary info
  -f, --full       full split info (as JSON)
By default calls ichiran:romanize, other options change this behavior
Here's the example usage of these switches
ichiran-cli "一覧は最高だぞ" just prints out the romanization
ichiran-cli -i "一覧は最高だぞ" - equivalent of ichiran:romanize :with-info t above
ichiran-cli -f "一覧は最高だぞ" - outputs the full result of segmentation as JSON. This is the one you'll probably want to use in scripts etc. (see the Python sketch after this list).
ichiran-cli -e "(+ 1 2 3)" - execute arbitrary Common Lisp code... yup that's right. Since this is a new feature, I don't know yet which commands people really want, so this option can be used to execute any command such as those listed in the previous section.
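For example, a script in another language can simply shell out to the executable; the minimal Python sketch below assumes ichiran-cli is on the PATH and only pretty-prints whatever JSON the -f switch returns.

# Minimal sketch: call ichiran-cli as a subprocess from Python and parse its JSON output.
# Assumes ichiran-cli is on PATH; the structure of the parsed JSON is not interpreted here.
import json
import subprocess

def segment(text):
    result = subprocess.run(
        ["ichiran-cli", "-f", text],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    data = segment("一覧は最高だぞ")
    print(json.dumps(data, ensure_ascii=False, indent=2))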
By the way, as I mentioned before, on Windows SBCL prior to 2.1.0 doesn't parse non-ascii command line arguments correctly. Which is why I had to include a section about building a newer version of SBCL. However if you use Windows 10, there's a workaround that avoids having to build SBCL 2.1.0. Open "Language Settings", find a link to "Administrative language settings", click on "Change system locale...", and turn on "Beta: Use Unicode UTF-8 for worldwide language support". Then reboot your computer. Voila, everything will work now. At least in regards to SBCL. I can't guarantee that other command line apps which use locales will work after that.
That's it for now, hope you enjoy playing around with Ichiran in this new year. よろしくおねがいします!
technovert · 4 years ago
A DATA INTEGRATION APPROACH TO MAXIMIZE YOUR ROI
The data integration approach adopted by many data integration projects relies on a set of premium tools, leading to cash burn and an ROI below standard.
To overcome this and maximize ROI, we lay out a data integration approach that uses open-source tools instead of premium ones to deliver better results and a more confident return on the investment.
Adopt a two-stage data integration approach:
Part 1 explains the process of setting up technicals and part 2 covers the execution approach involving challenges faced and solutions to the same.
Part 1: Setting Up
The following are the widely relied data sources:
REST API Source with standard NoSQL JSON (with nested datasets)
REST API Source with full data schema in JSON
CSV Files in AWS S3
Relational Tables from Postgres DB
There are 2 different JSON types above: the former is conventional NoSQL JSON, while the latter carries the full data schema.
Along with the data movement, it is necessary to facilitate Plug-n-Play architecture, Notifications, Audit data for Reporting, Un-burdened Intelligent scheduling, and setting up all the necessary instances.
The landing Data warehouse chosen was AWS Redshift which is a one-stop for the operational data stores (ODS) as well as facts & dimensions. As said, we completely relied on open-source tools over the tools from tech giants like Oracle, Microsoft, Informatica, Talend, etc.,  
The data integration was successful by leveraging Python, SQL, and Apache Airflow to do all the work. Use Python for Extraction; SQL to Load & Transform the data and Airflow to orchestrate the loads via python-based scheduler code. Below is the data flow architecture.
Data Flow Architecture
Part 2: Execution
The above data flow architecture gives a fair idea of how the data was integrated. The execution is explained in parallel with the challenges faced and how they were solved.
Challenges:
Plug-n-Play Model.  
Dealing with the nested data in JSON.
Intelligent Scheduling.
Code Maintainability for future enhancements.  
1. Plug-n-Play Model
To meet the changing business needs, the addition of columns or a datastore is obvious and if the business is doing great, expansion to different regions is apparent. The following aspects were made sure to ensure a continuous integration process.
A new column will not break the process.
A new data store should be added with minimal work by a non-technical user.
To bring down the time consumed for any new store addition(expansion) integration from days to hours.  
The same is achieved by using:
config table which is the heart of the process holding all the jobs needed to be executed, their last extract dates, and parameters for making the REST API call/extract data from RDBMS.
Re-usable python templates which are read-modified-executed based on the parameters from the config table.
Audit table for logging all the crucial events happening whilst integration.
Control table for mailing and Tableau report refreshes after the ELT process
By creating state-of-art DAGs which can generate DAGs(jobs) with configuration decided in the config table for that particular job.
Any new table which is being planned for the extraction or any new store being added as part of business expansion needs its entries into the config table.
The DAG Generator DAG run will build jobs for you in a snap which will be available in Airflow UI on the subsequent refresh within seconds, and the new jobs are executed on the next schedule along with existing jobs.
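The sketch below illustrates this idea with a config-driven DAG generator; fetch_job_configs() and load_job() are hypothetical helpers, and the job names and schedules are made-up examples rather than the actual configuration.

# Illustrative sketch of config-driven DAG generation in Airflow.
# fetch_job_configs() and load_job() are hypothetical helpers; the config
# table layout shown here is an assumption, not the actual implementation.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def fetch_job_configs():
    # In practice this would query the config table in the warehouse.
    return [
        {"job_name": "store_101_orders", "schedule": "0 2 * * *"},
        {"job_name": "store_102_orders", "schedule": "0 2 * * *"},
    ]

def load_job(job_name, **context):
    print(f"extracting and loading {job_name}")

for cfg in fetch_job_configs():
    dag = DAG(
        dag_id=cfg["job_name"],
        schedule_interval=cfg["schedule"],
        start_date=datetime(2021, 1, 1),
        catchup=False,
    )
    PythonOperator(
        task_id="extract_and_load",
        python_callable=load_job,
        op_kwargs={"job_name": cfg["job_name"]},
        dag=dag,
    )
    # Register the DAG in the module namespace so the scheduler picks it up.
    globals()[cfg["job_name"]] = dag

Registering each generated DAG in globals() is what makes the dynamically built DAGs visible to the Airflow scheduler on the next refresh.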
2. Dealing with the nested data in JSON.
It is a fact that No-SQL JSONS hold a great advantage from a storage and data redundancy perspective but add a lot of pain while reading the nested data out of the inner arrays.
The following approach is adopted to conquer the above problem:
Configured AWS Redshift Spectrum, with IAM Role and IAM Policy as needed to access AWS Glue Catalog and associating the same with AWS Redshift database cluster
Created external database, external schema & external tables in AWS Redshift database
Created AWS Redshift procedures with specific syntax to read data in the inner array
AWS Redshift was leveraged to parse the data directly from JSON residing in AWS S3 onto an external table (no loading is involved) in AWS Redshift which was further transformed to rows and columns as needed by relational tables.
3. Intelligent Scheduling
There were multiple scenarios in orchestration needs:
Time-based – Batch scheduling; MicroELTs ran to time & again within a day for short intervals.
Event-based – File drop in S3
For batch scheduling, the jobs were run neither entirely in series (which would underutilize resources and be time-consuming) nor entirely in parallel (which would overwhelm the Airflow workers).
Instead, a fixed number of jobs was kept running asynchronously until all the processes were completed, using a Python routine for intelligent scheduling (a simplified sketch follows the diagram below). The code reads the set of jobs being executed as part of the current batch into a job execution/job config table and keeps running those four jobs until all the jobs are in a completed/failed state, as per the below logical flow diagram.
Logical Flow Diagram
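A simplified sketch of such a polling routine is shown below; get_jobs(), job_status(), and trigger_job() are hypothetical stand-ins for the actual config-table and Airflow plumbing.

# Simplified sketch of the polling routine: keep at most MAX_RUNNING jobs
# active until every job in the batch is completed or failed.
# get_jobs(), job_status() and trigger_job() are hypothetical helpers.
import time

MAX_RUNNING = 4  # number of jobs allowed to run at the same time

def run_batch(get_jobs, job_status, trigger_job, poll_seconds=60):
    """Keep at most MAX_RUNNING jobs active until the whole batch is done."""
    pending = list(get_jobs())   # jobs registered for the current batch
    running = []

    while pending or running:
        # Keep only jobs still in flight; completed/failed ones drop out.
        running = [j for j in running if job_status(j) not in ("completed", "failed")]

        # Top up the running pool from the pending queue.
        while pending and len(running) < MAX_RUNNING:
            job = pending.pop(0)
            trigger_job(job)
            running.append(job)

        time.sleep(poll_seconds)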
For Event-based triggering, a file would be dropped in S3 by an application, and the integration process will be triggered by reading this event and starts the loading process to a data warehouse.
The configuration is as follows:
A CloudWatch event triggers a Lambda function, which in turn makes an API call to trigger the Airflow DAG.
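A minimal sketch of such a Lambda handler is shown below; it assumes the Airflow 2 stable REST API with basic authentication, and the endpoint URL, credentials, and DAG id are placeholders.

# Minimal sketch of a Lambda handler that triggers an Airflow DAG run when a
# file lands in S3. Endpoint, credentials and DAG id are placeholders, and it
# assumes the Airflow 2 stable REST API with basic auth enabled.
import base64
import json
import urllib.request

AIRFLOW_URL = "https://airflow.example.com/api/v1/dags/s3_file_load/dagRuns"  # placeholder
AUTH = base64.b64encode(b"airflow_user:airflow_password").decode()            # placeholder

def handler(event, context):
    # Forward the raw S3/CloudWatch event to the DAG run as its conf payload.
    payload = {"conf": {"trigger_event": event}}
    req = urllib.request.Request(
        AIRFLOW_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {AUTH}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}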
4. Code Maintainability for future enhancements
A Data Integration project is always collaborative work and maintaining the correct source code is of dire importance. Also, if a deployment goes wrong, the capability to roll back to the original version is necessary.
For projects which involve programming, it is necessary to have a version control mechanism. To have that version control mechanism, configure the GIT repository to hold the DAG files in Airflow with Kubernetes executor.
Take away:
This data integration approach is successful in completely removing the premium costs while decreasing the course of the project. All because of the reliance on open-source tech and utilizing them to the fullest.
By leveraging any ETL tool in the market, the project duration would have gone beyond 6 months, as it requires building a job for each operational data store. The best recommended option is to use scripting in conjunction with an ETL tool to repetitively build jobs, which would more or less overlap with the way it is now executed.
Talk to our Data Integration experts:
Looking for a one-stop location for all your integration needs? Our data integration experts can help you align your strategy or offer you a consultation to draw a roadmap that quickly turns your business data into actionable insights with a robust Data Integration approach and a framework tailored for your specs.
cricketpiner · 3 years ago
Xcode for mac os x 10.7.5
Homebrew will install a couple of packages required by Python and then Python itself. Now that you’ve got Homebrew installing Python is simple: brew install python See or type brew help or man brew for more info on Homebrew. Normal executables go in /usr/local/bin/ and Python scripts installed by Homebrew go in /usr/local/share/python/. bash_profile) file: export PATH=/usr/local/bin:/usr/local/share/python:$PATH To add Homebrew installed executables and Python scripts to your path you’ll want to add the following line to your. Homebrew installs things to /usr/local/ so you don’t need sudo permissions. To install it simply launch a terminal and enter ruby -e "$(curl -fsSkL /mxcl/homebrew/go)"
Homebrew is an excellent package manager for Mac OS X that can install a large number of packages. (You’ll need a free Apple ID.) (See also. If you prefer another editor it’s possible to get only the libraries and compilers that you need with the Command Line Tools for Xcode. However, I use hardly any of its features and unless you’re an iOS or Mac developer you probably won’t either.
I use the Xcode editor because I like its syntax highlighting, code completion, and organizer. On Lion you can install Xcode from the App Store, on Snow Leopard you’ll have to get an older Xcode from. You will need Apple’s developer tools in order to compile Python and the other installs. Update: If doing all the stuff below doesn’t seem like your cup of tea, it’s also possible to install Python, NumPy, SciPy, and matplotlib using double-click binary installers (resulting in a much less flexible installation), see this post to learn how. On Snow Leopard you won’t install Xcode via the App Store, you’ll have to download it from Apple.Īfter I’d helped my friend I found this blog post describing a procedure pretty much the same as below. These instructions are for Lion but should work on Snow Leopard or Mountain Lion without much trouble. See the “Install Python” page for the most recent instructions.Ī bit ago a friend and I both had fresh Mac OS X Lion installs so I helped him set up his computers with a scientific Python setup and did mine at the same time. Update: These instructions are over a year old, though they may still work for you.
You can work around the bug in Xcode 3.2.x by using the -k-no_order_inits command line parameter when compiling a dynamic library. There is however an issue when compiling dynamic libraries with FPC under Mac OS X 10.6 due to a bug in the Xcode 3.2.x linker. Xcode 3.2.x - 4.2 compatibility (Mac OS X 10.6)įPC 3.0.4 is qualified for use with Mac OS X 10.4 till macOS 10.14. Afterwards, FPC will install and function correctly. To install them manually, open Xcode, go to Preferences, select "Downloads" and install the "Command Line Tools". Xcode 4.3 and later however no longer install the command line tools by default, which are required by FPC. Xcode 4.3 - 5.x compatibility (Mac OS X 10.7/OS X 10.8)įPC 3.0.4 is qualified for use with Mac OS X 10.4 till macOS 10.14. To install them manually, open "/Applications/Utilities/Terminal", execute "xcode-select -install" and choose "Install". Xcode 5 and later however no longer install the command line tools by default, which are required by FPC. Xcode 5+ compatibility (OS X 10.9 for OS X 10.8, see below)įPC 3.0.4 is qualified for use with Mac OS X 10.4 till macOS 10.14.
See also the section below on how to install the command line tools. If you already installed FPC under a previous Mac OS X/OS X/macOS version, you will have to reinstall FPC 3.0.4a under macOS 10.14 to get a configuration file that enables the compiler to find the necessary files). Xcode 10 installs some command line file in different locations compared to previous releases. These workarounds are required because we do not pay Apple 79 euro per year, which would prove you can trust us.įPC 3.0.4 is qualified for use with Mac OS X 10.4 till macOS 10.14. If this does not work, you may first have to go to System Preferences -> Security & Privacy -> General, and "Allow apps downloaded from: Mac App Store and Identified developers". If you get the message that the FPC installer was created by an unknown developer and cannot be opened, right-click on the installation package and choose "Open" in the contextual menu. "Unknown developer" error when installing (Mac OS X 10.7 and later)
fpc-3.0.4a.intel-macosx.dmg : FPC for 32 and 64 bit Intel
fpc-3.0.4.powerpc-macosx.dmg : FPC for 32 and 64 bit PowerPC
fpc-3.0.5. : FPC cross-compilers from Intel to 32 and 64 bit iOS
fpc-3.0.4. : FPC cross-compilers from Intel to 32 and 64 bit PowerPC
fpc-3.0.4. : FPC cross-compiler from Intel to JVM (very limited Pascal RTL support, but full JDK 1.5 support)
mainsindie · 3 years ago
Postgresql create database
An Azure resource group is a logical container into which Azure resources are deployed and managed. # to limit / allow access to the PostgreSQL serverĮcho "Using resource group $resourceGroup with login: $login, password: $password."Ĭreate a resource group with the az group create command. # Specify appropriate IP address values for your environment Server="msdocs-postgresql-server-$randomIdentifier" Tag="create-postgresql-server-and-firewall-rule" ResourceGroup="msdocs-postgresql-rg-$randomIdentifier" Use the public IP address of the computer you're using to restrict access to the server to only your IP address. Replace 0.0.0.0 with the IP address range to match your specific environment. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name.Ĭhange the location as appropriate for your environment. The following values are used in subsequent commands to create the database and required resources. or use 'az login'įor more information, see set active subscription or log in interactively Set parameter values subscription="" # add subscription hereĪz account set -s $subscription #. If you don't have an Azure subscription, create an Azure free account before you begin. Use the following script to sign in using a different subscription, replacing with your Azure Subscription ID. Sign in to AzureĬloud Shell is automatically authenticated under the initial account signed-in with. Subsequent sessions will use Azure CLI in a Bash environment, Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter to run it. When Cloud Shell opens, verify that Bash is selected for your environment. You can also launch Cloud Shell in a separate browser tab by going to. To open the Cloud Shell, just select Try it from the upper right corner of a code block. It has common Azure tools preinstalled and configured to use with your account. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article.
Postgresql create database upgrade#
To upgrade to the latest version, run az upgrade. Run az version to find the version and dependent libraries that are installed. For more information about extensions, see Use extensions with the Azure CLI.
Postgresql create database install#
When you're prompted, install the Azure CLI extension on first use. For other sign-in options, see Sign in with the Azure CLI. To finish the authentication process, follow the steps displayed in your terminal. If you're using a local installation, sign in to the Azure CLI by using the az login command.
Postgresql create database how to#
For more information, see How to run the Azure CLI in a Docker container. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. If you prefer to run CLI reference commands locally, install the Azure CLI. For more information, see Azure Cloud Shell Quickstart - Bash. Use the Bash environment in Azure Cloud Shell. Consider using the simpler az postgres up Azure CLI command.
justgroups · 3 years ago
Installing postgres app
Now, we can give our new user access to administer our new database:
ALTER ROLE myprojectuser SET timezone TO 'UTC'.
ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed'.
ALTER ROLE myprojectuser SET client_encoding TO 'utf8'.
These are all recommendations from the Django project itself: By default, our Django projects will be set to use UTC. We are also setting the default transaction isolation scheme to “read committed”, which blocks reads from uncommitted transactions. We are setting the default encoding to UTF-8, which Django expects. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.
CREATE USER myprojectuser WITH PASSWORD ' password' Īfterwards, we’ll modify a few of the connection parameters for the user we just created.
Next, create a database user for our project. Note: Every Postgres statement must end with a semi-colon, so make sure that your command ends with one if you are experiencing issues. You will be given a PostgreSQL prompt where we can set up our requirements.įirst, create a database for your project: Log into an interactive Postgres session by typing: We can use sudo and pass in the username with the -u option. We need to use this user to perform administrative tasks. Basically, this means that if the user’s operating system username matches a valid Postgres username, that user can login with no further authentication.ĭuring the Postgres installation, an operating system user named postgres was created to correspond to the postgres PostgreSQL administrative user. We’re going to jump right in and create a database and database user for our Django application.īy default, Postgres uses an authentication scheme called “peer authentication” for local connections. Creating the PostgreSQL Database and User
#INSTALLING POSTGRES APP INSTALL#
This will install pip, the Python development files needed to build Gunicorn later, the Postgres database system and the libraries needed to interact with it, and the Nginx web server.
sudo apt install python-pip python-dev libpq-dev postgresql postgresql-contrib nginx curl.
If you are starting new projects, it is strongly recommended that you choose Python 3.
sudo apt install python3-pip python3-dev libpq-dev postgresql postgresql-contrib nginx curlĭjango 1.11 is the last release of Django that will support Python 2.
If you are using Django with Python 3, type: The packages we install depend on which version of Python your project will use.
#INSTALLING POSTGRES APP UPDATE#
We need to update the local apt package index and then download and install the packages. We will use the Python package manager pip to install additional components a bit later. To begin the process, we’ll download and install all of the items we need from the Ubuntu repositories. Installing the Packages from the Ubuntu Repositories We will then set up Nginx in front of Gunicorn to take advantage of its high performance connection handling mechanisms and its easy-to-implement security features. This will serve as an interface to our application, translating client requests from HTTP to Python calls that our application can process. Once we have our database and application up and running, we will install and configure the Gunicorn application server. Installing Django into an environment specific to your project will allow your projects and their requirements to be handled separately. We will be installing Django within a virtual environment.
#INSTALLING POSTGRES APP HOW TO#
You can learn how to set this up by running through our initial server setup guide. In order to complete this guide, you should have a fresh Ubuntu 20.04 server instance with a basic firewall and a non-root user with sudo privileges configured. We will then set up Nginx to reverse proxy to Gunicorn, giving us access to its security and performance features to serve our apps. We will configure the Gunicorn application server to interface with our applications. We will be setting up a PostgreSQL database instead of using the default SQLite database. In this guide, we will demonstrate how to install and configure some components on Ubuntu 20.04 to support and serve Django applications.
#INSTALLING POSTGRES APP CODE#
Django includes a simplified development server for testing your code locally, but for anything even slightly production related, a more secure and powerful web server is required. Introductionĭjango is a powerful web framework that can help you get your Python application or website off the ground. A previous version of this article was written by Justin Ellingwood.
mainsdesign · 3 years ago
Tableplus view sql print
You can vastly extend TablePlus to fit your needs by installing plugins written by others or writing your own in javascript. We help you troubleshoot your problems with TablePlus at a lightning speed. We shipped more than 1000 improvements over the past year. There's always something cool to be discovered in the new updates released weekly. With native build, we eliminate needless complexity & extraneous details that you can get it up and run in less than a second.Įvery function has a shortcut key to keep your hands always on the keyboard. We only focus on the most important features. We don't want to be an app that does many things, but masters none. We've built the best practices for SQL Editor into our default to help you boost your productivity
#TABLEPLUS VIEW SQL PRINT INSTALL#
You don't need to install any SSH client in order to connect to the server.
#TABLEPLUS VIEW SQL PRINT CODE#
Quickly get a snapshot of your database with multi-tab and multi-window view, as well as stay in control of what you have changed on your database with Code Review. It's also equipped with many security features to protect your database, including native libssh and TLS to encrypt your connection. Query, edit and save your database easily with a native app that can run fast like a Lambo. With native build, simple design and powerful features, it makes database management easier, faster & more efficient for you. Modern, native, and friendly GUI tool for relational databases: MySQL, PostgreSQL & more. Most clients connect over SSL by default, but sometimes it’s necessary to add the sslmode=require query parameter to your database URL before connecting.Database Management made easy. Enable SSLĪpplications must support and enable SSL to connect to a Heroku Postgres database. This ensures that any changes to the database’s URL will automatically propagate to your other apps. If you are connecting to a database from other Heroku apps, you can now attach a database add-on directly to multiple applications. Attach the database as an add-on to other Heroku apps This way, you ensure your process or application always has correct database credentials. For example, you may follow 12Factor application configuration principles by using the Heroku CLI and invoke your process like so: DATABASE_URL=$(heroku config:get DATABASE_URL -a your-app) your_process Automated failover events on HA-enabled plans.Īlways fetch the database URL config var from the corresponding Heroku app when your application starts.Security issues or threats that require Heroku Postgres staff to rotate database credentials.Catastrophic hardware failures that require Heroku Postgres staff to recover your database on new hardware.User-initiated database credential rotations using heroku pg:credentials:rotate.The database URL is managed by Heroku and will change under some circumstances such as: To make effective use of Heroku Postgres databases outside of a Heroku application, keep in mind the following: Don’t copy and paste credentials to a separate environment or app code For private databases, outside access can be enabled using Mutual TLS. However, except for private and shield tier databases, Heroku Postgres databases are accessible from anywhere and can be used from any application using standard Postgres clients. This variable is managed by Heroku, and is the primary way we tell you about your database’s network location and credentials. Your database is attached to the Heroku app and is accessible via an app config var containing the database URL, even if you host no code in the application itself. You can find the application name on the database page at. All Heroku Postgres databases have a corresponding Heroku application. Heroku Postgres databases are designed to be used with a Heroku app. You can browse, query, edit your data and database structure in a simple and clean spreadsheet-like editor. With the native build, simple design, and powerful features, TablePlus makes database management easier, faster & more efficient for you. Connecting to Heroku Postgres Databases from Outside of Heroku English - 日本語に切り替える TablePlus is a modern, native, and friendly GUI tool for relational databases.
coolwizardprince · 3 years ago
Using pg chameleon to Migrate Data from MySQL to openGauss
Introduction to pg_chameleon
pg_chameleon is a real-time replication tool compiled in Python 3 for migrating data from MySQL to PostgreSQL. The tool uses the mysql-replication library to extract row images from MySQL. The row images are stored in PostgreSQL in JSONB format.
A pl/pgsql function in PostgreSQL is executed to decode row images in JSONB format and replay the changes to PostgreSQL. In addition, the tool uses the read-only mode to pull full data from MySQL to PostgreSQL through initial configuration. In this way, the tool provides the function of copying the initial full data and subsequent incremental data online in real time.
pg_chameleon has the following features:
Provides online real-time replication by reading the MySQL BinLog.
Supports reading data from multiple MySQL schemas and restoring the data to the target PostgreSQL database. The source schemas and target schemas can use different names.
Implements real-time replication through a daemon. The daemon consists of two subprocesses. One is responsible for reading logs from MySQL, and the other is responsible for replaying changes to PostgreSQL.
openGauss is compatible with PostgreSQL communication protocols and most syntaxes. For this reason, you can use pg_chameleon to migrate data from MySQL to openGauss. In addition, the real-time replication capabilities of pg_chameleon greatly reduce the service interruption duration during database switchover.
pg_chameleon Issues in openGauss
pg_chameleon depends on the psycopg2 driver, and the psycopg2 driver uses the pg_config tool to check the PostgreSQL version and restricts PostgreSQL of earlier versions from using this driver. The pg_config tool of openGauss returns the version of openGauss (the current version is openGauss 2.0.0). As a result, the driver reports a version error “ Psycopg requires PostgreSQL client library (libpq) >= 9.1”. You need to use psycopg2 through source code compilation and remove related restrictions in the source header file psycopg/psycopg.h.
pg_chameleon sets the GUC parameter LOCK_TIMEOUT to limit the timeout for waiting for locks in PostgreSQL. openGauss does not support this parameter. (openGauss supports the GUC parameter lockwait_timeout, which needs to be set by the administrator.) You need to delete related settings from the source code of pg_chameleon.
pg_chameleon uses the syntax of the UPSERT statement to specify the replacement operation when a constraint is violated. The function and syntax of the UPSERT statement supported by openGauss is different from those supported by PostgreSQL. openGauss uses the ON DUPLICATE KEY UPDATE { column_name = { expression | DEFAULT } } [, …] syntax, while PostgreSQL uses the ON CONFLICT [ conflict_target ] DO UPDATE SET { column_name = { expression | DEFAULT } } syntax. Therefore, these two databases differ slightly in functions and syntaxes. You need to modify the related UPSERT statement in the source code of pg_chameleon.
pg_chameleon uses the CREATE SCHEMA IF NOT EXISTS and CREATE INDEX IF NOT EXISTS syntaxes. openGauss does not support the IF NOT EXISTS option of schemas and indexes. You need to modify the logic so that the system checks whether the schemas and indexes exist before creating them.
To select the array range, openGauss runs column_name[start, end], while PostgreSQL runs column_name[start:end]. You need to modify the array range selection mode in the source code of pg_chameleon.
pg_chameleon uses the INHERITS function, but openGauss does not support inherited tables. You need to modify the SQL statements and tables that use inherited tables.
Next, use pg_chameleon to migrate data from MySQL to openGauss.
Configuring pg_chameleon
pg_chameleon uses the config-example.yaml configuration file in ~/.pg_chameleon/configuration to define configurations during migration. The configuration file consists of four parts: global settings, type_override, postgres destination connection, and sources. global settings is used to set the log file path, log level, and others. type_override allows users to customize type conversion rules and overwrite existing default conversion rules. postgres destination connection is used to configure the parameters for connecting to openGauss. sources is used to define the parameters for connecting to MySQL and other configurable items during replication.
For more details about the configuration items, see the official website:
https://pgchameleon.org/documents_v2/configuration_file.html
The following is an example of the configuration file:
# global settings
pid_dir: '~/.pg_chameleon/pid/'
log_dir: '~/.pg_chameleon/logs/'
log_dest: file
log_level: info
log_days_keep: 10
rollbar_key: ''
rollbar_env: ''
# type_override allows the user to override the default type conversion
# into a different one.
type_override:
  "tinyint(1)":
    override_to: boolean
    override_tables:
      - "*"
# postgres destination connection
pg_conn:
  host: "1.1.1.1"
  port: "5432"
  user: "opengauss_test"
  password: "password_123"
  database: "opengauss_database"
  charset: "utf8"
sources:
  mysql:
    db_conn:
      host: "1.1.1.1"
      port: "3306"
      user: "mysql_test"
      password: "password123"
      charset: 'utf8'
      connect_timeout: 10
    schema_mappings:
      mysql_database: sch_mysql_database
    limit_tables:
    skip_tables:
    grant_select_to:
      - usr_migration
    lock_timeout: "120s"
    my_server_id: 1
    replica_batch_size: 10000
    replay_max_rows: 10000
    batch_retention: '1 day'
    copy_max_memory: "300M"
    copy_mode: 'file'
    out_dir: /tmp
    sleep_loop: 1
    on_error_replay: continue
    on_error_read: continue
    auto_maintenance: "disabled"
    gtid_enable: false
    type: mysql
    keep_existing_schema: No
The preceding configuration file indicates that the username and password for connecting to MySQL are mysql_test and password123 respectively during data migration. The IP address and port number of the MySQL server are 1.1.1.1 and 3306, respectively. The source database is mysql_database.
The username and password for connecting to openGauss are opengauss_test and password_123, respectively. The IP address and port number of the openGauss server are 1.1.1.1 and 5432, respectively. The target database is opengauss_database. The sch_mysql_database schema is created in opengauss_database, and all tables to be migrated are in this schema.
Note that the user must have the permission to remotely connect to MySQL and openGauss as well as the read and write permissions on the corresponding databases. For openGauss, the host where pg_chameleon runs must be in the remote access whitelist of openGauss. For MySQL, the user must have the RELOAD, REPLICATION CLIENT, and REPLICATION SLAVE permissions.
The following describes the migration procedure.
Creating Users and Databases
The following shows how to create the users and databases in openGauss required for migration.
The following shows how to create the users in MySQL required for migration and grant related permissions to the users.
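Since the original screenshots of these statements are not reproduced here, the sketch below outlines one possible setup using psycopg2 for openGauss (which speaks the PostgreSQL protocol) and PyMySQL for MySQL; the administrator accounts are placeholders, while the user, password, and database names mirror the example configuration above.

# Sketch of the user/database setup on both sides, mirroring the names used
# in default.yml above. Administrator accounts, hosts and ports are placeholders.
import psycopg2
import pymysql

# openGauss side: migration user and target database.
og = psycopg2.connect(host="1.1.1.1", port=5432, user="admin_user",
                      password="admin_password", dbname="postgres")
og.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with og.cursor() as cur:
    cur.execute("CREATE USER opengauss_test WITH PASSWORD 'password_123'")
    cur.execute("CREATE DATABASE opengauss_database OWNER opengauss_test")
og.close()

# MySQL side: replication user with the privileges pg_chameleon needs.
my = pymysql.connect(host="1.1.1.1", port=3306, user="root", password="root_password")
with my.cursor() as cur:
    cur.execute("CREATE USER 'mysql_test'@'%' IDENTIFIED BY 'password123'")
    cur.execute("GRANT ALL ON mysql_database.* TO 'mysql_test'@'%'")
    cur.execute("GRANT RELOAD, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'mysql_test'@'%'")
    cur.execute("FLUSH PRIVILEGES")
my.close()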
Enabling the Replication Function of MySQL
Modify the MySQL configuration file. Generally, the configuration file is /etc/my.cnf or the cnf configuration file in the /etc/my.cnf.d/ folder. Modify the following configurations in the [mysqld] configuration block (if the [mysqld] configuration block does not exist, add it):
[mysqld]
binlog_format= ROW
log_bin = mysql-bin
server_id = 1
binlog_row_image=FULL
expire_logs_days = 10
After the modification, restart MySQL for the configurations to take effect.
Running pg_chameleon to Migrate Data
Create and activate a virtual Python environment.
python3 -m venv venv
source venv/bin/activate
Download and install psycopg2 and pg_chameleon.
Run the pip install pip --upgrade command to upgrade pip.
Add the folder where the pg_config tool of openGauss is located to the $PATH environment variable. Example:
export PATH={openGauss-server}/dest/bin:$PATH
Download the source code of psycopg2 at https://github.com/psycopg/psycopg2, remove the restriction of checking the PostgreSQL version, and run the python setup.py install command to compile the source code and install the tool.
Download the source code of pg_chameleon at https://github.com/the4thdoctor/pg_chameleon, solve the preceding issues in openGauss, and run the python setup.py install command to compile the source code and install the tool.
Create the configuration file directory of pg_chameleon.
chameleon set_configuration_files
Modify the configuration file of pg_chameleon.
cd ~/.pg_chameleon/configuration
cp config-example.yml default.yml
Modify the default.yml file as required. Modify the connection configuration information, user information, database information, and schema mapping specified by pg_conn and mysql. An example of the configuration file is provided for reference.
Initialize the replication stream.
chameleon create_replica_schema --config default
chameleon add_source --config default --source mysql
In this step, an auxiliary schema and table are created for the replication process in openGauss.
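You can verify this step with a query along the following lines. pg_chameleon conventionally names its auxiliary schema sch_chameleon; treat that name as an assumption for your version.

-- List the auxiliary replication objects created by create_replica_schema/add_source.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'sch_chameleon';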
Copy basic data.
chameleon init_replica --config default --source mysql
After this step is complete, the current full data in MySQL is copied to openGauss.
You can view the replication result in openGauss.
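For example, a quick check along these lines (the schema name follows the sample schema mapping; the table names depend on your source database):

-- Run in opengauss_database to list the migrated tables and spot-check a row count.
SELECT table_name FROM information_schema.tables WHERE table_schema = 'sch_mysql_database';
SELECT COUNT(*) FROM sch_mysql_database.test_decimal;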
Enable online real-time replication.
chameleon start_replica --config default --source mysql
After real-time replication is enabled, insert a data record into MySQL.
View the data in the test_decimal table in openGauss.
The newly inserted data record is successfully copied to openGauss.
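As a sketch of this round trip, assuming the test_decimal table has columns (id, amount); the actual column list depends on your source schema:

-- On the MySQL side: insert a test row.
INSERT INTO mysql_database.test_decimal (id, amount) VALUES (100, 3.14);

-- On the openGauss side: confirm the row appears after the next replay cycle.
SELECT * FROM sch_mysql_database.test_decimal WHERE id = 100;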
Disable online replication.
chameleon stop_replica --config default --source mysql
chameleon detach_replica --config default --source mysql
chameleon drop_replica_schema --config default
orionesolutions1 · 3 years ago
Text
Top 5 Server Management Software Tools 2022
A server is a computer that aids in storing, transmitting, and receiving data. In a nutshell, it fulfills the function of providing services. Servers can be anything from computers to software programs to storage devices.
Servers are incredibly sophisticated machines. It necessitates cold rooms, as well as regular updates and maintenance, to function correctly. In the absence of updates and maintenance, a company may encounter several issues that negatively impact its performance.
Several top server management software tools of 2022 can help you avoid such server-related problems. Let’s take a look.
LogicMonitor
LogicMonitor is a network monitoring and management platform delivered as a software-as-a-service (SaaS). It provides a customized and hybrid cloud-based infrastructure to the enterprise. The interface is hosted and accessed via the cloud, but the data collecting takes place on the network. LogicMonitor also works with both Windows and Linux operating systems.
With its installation, LogicMonitor promises an agentless and straightforward system. It automatically scans the network after installation to identify all connected devices, although you can also do it manually.
LogicMonitor comes with a user-friendly dashboard with several pre-built layouts. You can customize these templates as per your unique needs.
LogicMonitor may also create custom notifications and send them out via email and SMS. If virtualization is something you’re interested in, this fantastic tool supports VMware ESXi and Microsoft Hyper-V.
DataDog
Datadog’s main product is a SaaS-based server management service. Although it is a cloud-based solution, it can also monitor apps hosted on-premises. It’s a pleasant surprise to see that it provides APIs, services, over 350 integrations, and compatibility with various network protocols. TCP (Transmission Control Protocol), SNMP (Simple Network Management Protocol), and SSH (Secure Shell) are examples of such network protocols.
Its user-friendly interface presents data clearly and concisely. Customization, on the other hand, is complex with Datadog. However, with detailed guides and helpful pulldowns, it is possible to do so gradually.
Compared to other server management solutions, Datadog takes a unique approach to reporting. Filtering based on period, time and kind of events, and priority usually provides focused and easy search parameters.
Datadog’s installation is more complicated than LogicMonitor’s. Because it is a cloud-hosted product, the initial setup is relatively straightforward. Later stages, however, will necessitate the installation of additional agents on each host system in your network. A console-based technique employing a terminal is required for ESXi or NetFlow.
Adding new devices to an agent-based system can be a lengthy process. Because it lacks an automatic device detection feature, it must download a unique agent for each device or service.
ManageEngine OpManager
ManageEngine OpManager is a cost-effective alternative for organizations searching for a lightweight suite. It’s packaged software, not a SaaS, unlike Datadog and LogicMonitor.
This utility requires you to choose a database for storage and an installation path. It comes with Postgres as the default database, but you may upgrade to SQL Server for an additional fee.
For organizations searching for visualization, ManageEngine OpManager is an excellent choice. This technology well supports heavy hitters like VMWare and Microsoft Hyper-V hypervisors. With proper monitoring, it also quickly activates and disables.
Although it offers a variety of pre-configured alerts such as alarms, trap alarms, standard events, and Syslog alarms, the alert setting is a little complicated. ManageEngine Applications Manager Plug-In is a functionality offered as an add-on.
You can personalize the product and purchase additional features based on your needs.
Paessler PRTG Network Monitor
Paessler PRTG Network Monitor is a server management software solution that is both old and popular. With years of experience on the market, it has evolved into one of the most feature-rich platforms. However, it is deployed on-premise and does not have a cloud-based support system.
Paessler requires a Microsoft Windows Server machine, as we know that bundled software has specific requirements. That system should also have at least two CPU cores, three gigabytes of RAM, and 250 gigabytes of storage.
Although Paessler PRTG Network Monitor is hosted locally, it features a web-based interface, which is undoubtedly beneficial. This interface has a lot of features and is easy to use. Its network maps are the best on the market, giving you a thorough and precise picture of how well your network is doing.
Agentless server management is made more accessible with PRTG Network Monitor. It contains an auto-discovery feature that populates all of your network’s devices. You can also add every device manually; you’ll need to provide an IP address and the type of device for this.
Progress WhatsUp Gold
Another popular choice that’s been around for a while is Progress WhatsUp Gold. With this software, you must purchase a license for each device you own.
WhatsUp Gold is a Windows-based application that must be downloaded and installed locally. However, the installation process is not complicated; you must choose installation paths and grant network access.
Configuring alerts is a breeze with WhatsUp Gold. Its policies are assigned based on three statuses: down, maintenance, or up. However, if you want a lot of customized alerts, this isn’t the way to go. It also lacks customized alerts for reports. However, it provides a detailed examination of the pertinent data, and it’s also possible to save reports as Microsoft Excel or Adobe Acrobat files.
Consider your current infrastructure before selecting a server management software tool. It also helps to ensure that the chosen platform is compatible with all of your devices. Then assess which management elements are required and whether the monitoring capabilities are sufficient.
If you own a small business, your needs will differ from those of a large corporation. If you still have questions regarding Top Server Management software tools 2022, don’t hesitate to contact Orion eSolutions. With vast industry experience, we ensure the best delivery and server software tools.
absalomcar · 4 years ago
Text
DATA ANALYST
  Absalom O. Carlisle
Nashville Area • 314-629-5273 • [email protected] • https://www.linkedin.com/in/absalom-carlisle-05807a83/
DATA ANALYST
Customer-focused leader in operations, data analytics, project management, and business development. Drives process improvements to contain costs, increase productivity, and grow revenue through data analysis using Python, SQL, and Excel. Creates strategies and allocates resources through competitive analysis and business intelligence insights, with visualizations in Tableau and Power BI. Excellent presentation, analytical, communication, and problem-solving skills. Develops strong relationships with stakeholders to mitigate issues and foster change. Nashville Software School will enhance my skills and help me acquire new ones through a competitive program with unparalleled instruction. Working on individual and group projects using real data sets from local companies is invaluable. The agile remote-working environment has solidified, and will continue to solidify, my expertise as I prepare to join the data analytics career path.
Technical Skills
·         DATA ANALYSIS    SQL SERVER    POSTGRES SQL    EXCEL/PIVOT TABLES
·         PYTHON/JUPYTER NOTEBOOKS    TABLEAU/TABLEAU-PREP    POWER BI
·         SSRS/SSIS    GITBASH/GITHUB    KANBAN
data analyst experience
Querying Databases with SQL
Indexing and Query Tuning                                                 
Report Design W/Data Sets and Aggregates
Sub-Reports-Parameters and Filter
Data Visualization W/Tableau and Power-BI
  Report Deployment                                                              
Metadata Repository                                                          
Data Warehousing-Delivery Process
Data Warehouse Schemas
Star Schemas-Snowflakes Schemas
PROFESSIONAL EXPERIENCE
Quantrell Auto Group
Director of Operations | 2016- 2020
·         Fostered strong partnerships with business leaders, senior business managers, and business vendors.
·         Analyzed business vendor performances using Excel data with Tableau to create reports and dashboards for insights that helped implement vendor specific plans, garnering monthly savings of $25K.
·         Managed and worked with high profile Contractors and architecture firms that delivered 3 new $7M construction building projects for Subaru, Volvo and Cadillac on time and under budget.
·         Led energy savings initiative that updated HVAC systems, installed LED lighting though-out campus, introduced and managed remote controlled meters - reducing monthly costs from $38K to $18K and gaining $34K in energy rebate from the utility company- as a result, the company received Green Dealer Award recognition nationally.
·         Collected, tracked and organized data to evaluate current business and market trends using Tableau.
·         Conducted in-depth research of vehicle segments and presented recommendations to Sr. Management to improve the accuracy of residual value forecasts by 25%.
·         Identified inefficiencies in equipment values forecasts and recommended improved policies.
·         Manipulated residual values segment data and rankings using pivot tables, pivot charts.
·         Created routine and ad-hoc reports for internal and for external customer’s requests.
·         Provided project budgeting and cost estimation for proposal submission.
·         Established weekly short-term vehicle forecast based on historical data sets, enabling better anticipation capacity.
·         Selected by management to head the operational integration of Avaya Telecommunication system, Cisco Meraki Cloud network system and the Printer install project.
·         Scheduled and completed 14 Cisco Meraki inspections to 16 buildings, contributing 99% network up-time.
·         Following design plans, installed and configured 112 workstations and Cisco Meraki Switches, fulfilling 100% user needs.
Clayton Healthcare Services
Founder | 2009 - 2015
·         Successfully managed home healthcare business from zero to six-figure annual revenues. Drove growth through strategic planning, budgeting, and business development.
·         Built a competent team from scratch as a startup company.
·         Built strategic marketing and business development plans.
·         Built and managed basic finance, bookkeeping, and accounting functions using excel.
·         Processed, audited and maintained daily, monthly payable-related activities, including data entry of payables and related processing, self-auditing of work product, reviews and processing of employee’s reimbursements, and policy/procedure compliance.
·         Increased market share through innovative marketing strategies and excellent customer service.
JP Morgan Chase
Portfolio Analyst 2006-2009
·         Researched potential equity, fixed income, and alternative investments for high net-worth individuals and institutional clients.
·         Analyzed quarterly performance data to identify trends in operations using Alteryx and Excel.
·         SME in providing recommendations for Equity Solutions programs to enable portfolio managers to buy securities at their own discretion.
·         Created ad-hoc reports to facilitate executive-level decision making
·         Maintained and monitored key performance indicator and trend dashboards, and offered operational support
EDUCATION & TRAINING
Bachelor of Science in Managerial Economics, 2011
Washington University
St. Louis, MO
Project Management Certification                                                                                                                              2014
St. Louis University
Microsoft BI Full Stack Certification
St. Louis, MO
Data Science/Analytics, Jan 2021
Nashville Software School                                                                                                               
Nashville, TN
globalmediacampaign · 4 years ago
Text
Options for legacy application modernization with Amazon Aurora and Amazon DynamoDB
Legacy application modernization can be complex. To reduce complexity and risk, you can choose an iterative approach by first replatforming the workload to Amazon Aurora. Then you can use the cloud-native integrations in Aurora to introduce other AWS services around the edges of the workload, often without changes to the application itself. This approach allows teams to experiment, iterate, and modernize legacy workloads iteratively. Modern cloud applications often use several database types working in unison, creating rich experiences for customers. To that end, the AWS database portfolio consists of multiple purpose-built database services that allow you to use the right tool for the right job based on the nature of the data, access patterns, and scalability requirements. For example, a modern cloud-native ecommerce solution can use a relational database for customer transactions and a nonrelational document database for product catalog and marketing promotions. If you’re migrating a legacy on-premises application to AWS, it can be challenging to identify the right purpose-built approach. Furthermore, introducing purpose-built databases to an application that runs on an old-guard commercial database might require extensive rearchitecture. In this post, I propose a modernization approach for legacy applications that make extensive use of semistructured data such as XML in a relational database. Starting in the mid-90s, developers began experimenting with storing XML in relational databases. Although commercial and open-source databases have since introduced native support for nonrelational data types, an impedance mismatch still exists between the relational SQL query language and access methods that may introduce data integrity and scalability challenges for your application. Retrieval of rows based on the value of an XML attribute can involve a resource-consuming full table scan, which may result in performance bottlenecks. Because enforcing accuracy and consistency of relationships between tables, or referential integrity, on nonrelational data types in a relational database isn’t possible, it may lead to orphaned records and data quality challenges. For such scenarios, I demonstrate a way to introduce Amazon DynamoDB alongside Amazon Aurora PostgreSQL-compatible edition, using the native integration of AWS Lambda with Aurora, without any modifications to your application’s code. DynamoDB is a fully managed key-value and document database with single-millisecond query performance, which makes it ideal to store and query nonrelational data at any scale. This approach paves the way to gradual rearchitecture, whereby new code paths can start to query DynamoDB following the Command-Query Responsibility Segregation pattern. When your applications are ready to cut over reads and writes to DynamoDB, you can remove XML from Aurora tables entirely. Solution overview The solution mirrors XML data stored in an Aurora PostgreSQL table to DynamoDB documents in an event-driven and durable way by using the Aurora integration with Lambda. Because of this integration, Lambda functions can be called directly from within an Aurora database instance by using stored procedures or user-defined functions. The following diagram details the solution architecture and event flows. 
The solution deploys the following resources and configurations:

- Amazon Virtual Private Cloud (Amazon VPC) with two public and private subnets across two AWS Availability Zones
- An Aurora PostgreSQL cluster in the private subnets, encrypted by an AWS KMS managed customer master key (CMK), and bootstrapped with an orders table with sample XML
- A pgAdmin Amazon Elastic Compute Cloud (Amazon EC2) instance deployed in the public subnet to access the Aurora cluster
- A DynamoDB table with on-demand capacity mode
- A Lambda function to transform XML payloads to DynamoDB documents and translate INSERT, UPDATE, and DELETE operations from Aurora PostgreSQL to DynamoDB
- An Amazon Simple Queue Service (Amazon SQS) queue serving as a dead-letter queue for the Lambda function
- A secret in AWS Secrets Manager to securely store Aurora admin account credentials
- AWS Identity and Access Management (IAM) roles granting required permissions to the Aurora cluster, Lambda function, and pgAdmin EC2 instance

The solution registers the Lambda function with the Aurora cluster to enable event-driven offloading of data from the postgres.orders table to DynamoDB, as numbered in the preceding diagram:

1. When an INSERT, UPDATE, or DELETE statement is run on the Aurora orders table, the PostgreSQL trigger function invokes the Lambda function asynchronously for each row, after it's committed.
2. Every function invocation receives the operation code (TG_OP) and, as applicable, the new row (NEW) and the old row (OLD) as payload.
3. The Lambda function parses the payload, converts XML to JSON, and performs the DynamoDB PutItem action in case of INSERT or UPDATE, and the DeleteItem action in case of DELETE.
4. If an INSERT, UPDATE, or DELETE event fails all processing attempts or expires without being processed, it's stored in the SQS dead-letter queue for further processing.

The source postgres.orders table stores generated order data combining XML with relational attributes (see the following example of a table row with id = 1). You can choose which columns or XML attributes get offloaded to DynamoDB by modifying the Lambda function code. In this solution, the whole table row, including XML, gets offloaded to simplify querying and enforce data integrity (see the following example of a corresponding DynamoDB item with id = 1).

Prerequisites

Before deploying this solution, make sure that you have access to an AWS account with permissions to deploy the AWS services used in this post through AWS CloudFormation. Costs are associated with using these resources; see AWS Pricing for details. To minimize costs, I demonstrate how to clean up the AWS resources at the end of this post.

Deploy the solution

To deploy the solution with CloudFormation, complete the following steps:

1. Choose Launch Stack. By default, the solution deploys to the us-east-2 Region, but you can change this. Make sure you deploy to a Region where Aurora PostgreSQL is available.
2. For AuroraAdminPassword, enter an admin account password for your Aurora cluster, keeping the defaults for other parameters.
3. Acknowledge that CloudFormation might create AWS Identity and Access Management (IAM) resources.
4. Choose Create stack.

The deployment takes around 20 minutes. When the deployment has completed, note the provisioned stack's outputs on the Outputs tab. The outputs are as follows: LambdaConsoleLink and DynamoDBTableConsoleLink contain AWS Management Console links to the provisioned Lambda function and DynamoDB table, respectively.
You can follow these links to explore the function's code and review the DynamoDB table items. EC2InstanceConnectURI contains a deep link to connect to the pgAdmin EC2 instance using SSH via EC2 Instance Connect. The EC2 instance has PostgreSQL tooling installed; you can log in and use psql to run queries from the command line. AuroraPrivateEndpointAddress and AuroraPrivateEndpointPort contain the writer endpoint address and port for the Aurora cluster; this is a private endpoint only accessible from the pgAdmin EC2 instance. pgAdminURL is the internet-facing link to access the pgAdmin instance.

Test the solution

To test the solution, complete the following steps:

1. Open the DynamoDB table by using the DynamoDBTableConsoleLink link from the stack outputs. Some data is already in the DynamoDB table because we ran INSERT operations on the Aurora database instance as part of bootstrapping.
2. Open a new browser tab and navigate to the pgAdminURL link to access the pgAdmin instance. The Aurora database instance should already be registered.
3. To connect to the Aurora database instance, expand the Servers tree and enter the AuroraAdminPassword you used to create the stack.
4. Choose the postgres database, and on the Tools menu, choose Query Tool to start a SQL session.
5. Run the following INSERT, UPDATE, and DELETE statements one by one, and return to the DynamoDB browser tab to observe how changes in the Aurora postgres.orders table are reflected in the DynamoDB table.

-- UPDATE example
UPDATE orders SET order_status = 'pending' WHERE id < 5;
-- DELETE example
DELETE FROM orders WHERE id > 10;
-- INSERT example
INSERT INTO orders (order_status, order_data) VALUES ('malformed_order', ' error retrieving kindle id ');

The resulting set of items in the DynamoDB table reflects the changes in the postgres.orders table. You can further explore the two triggers (sync_insert_update_delete_to_dynamodb and sync_truncate_to_dynamodb) and the trigger function sync_to_dynamodb() that makes calls to the Lambda function. In the pgAdmin browser tab, on the Tools menu, choose Search Objects, and search for sync. Choose (double-click) a search result to reveal it in the pgAdmin object hierarchy. To review the underlying statements, choose an object (right-click) and choose CREATE Script.

Security of the solution

The solution incorporates the following AWS security best practices:

- Encryption at rest – The Aurora cluster is encrypted by using an AWS KMS managed customer master key (CMK).
- Security – AWS Secrets Manager is used to store and manage Aurora admin account credentials.
- Identity and access management – The least privilege principle is followed when creating IAM policies.
- Network isolation – For additional network access control, the Aurora cluster is deployed to two private subnets with a security group permitting traffic only from the pgAdmin EC2 instance.

To further harden this solution, you can introduce VPC endpoints to ensure private connectivity between the Lambda function, Amazon SQS, and DynamoDB.

Reliability of the solution

Aurora is designed to be reliable, durable, and fault tolerant. The Aurora cluster in this solution is deployed across two Availability Zones, with the primary instance in Availability Zone 1 and a replica in Availability Zone 2.
In case of a failure event, the replica is promoted to the primary, the cluster DNS endpoint continues to serve connection requests, and the calls to the Lambda function continue in Availability Zone 2 (refer to the solution architecture earlier in this post). Aurora asynchronous calls to Lambda retry on errors, and when a function returns an error after running, Lambda by default retries two more times by using exponential backoff. With the maximum retry attempts parameter, you can configure the maximum number of retries between 0 and 2. Moreover, if a Lambda function returns an error before running (for example, due to lack of available concurrency), Lambda by default keeps retrying for up to 6 hours. With the maximum event age parameter, you can configure this duration between 60 seconds and 6 hours. When the maximum retry attempts or the maximum event age is reached, an event is discarded and persisted in the SQS dead-letter queue for reprocessing. It’s important to ensure that the code of the Lambda function is idempotent. For example, you can use optimistic locking with version number in DynamoDB by ensuring the OLD value matches the document stored in DynamoDB and rejecting the modification otherwise. Reprocessing of the SQS dead-letter queue is beyond the scope of this solution, and its implementation varies between use cases. It’s important to ensure that the reprocessing logic performs timestamp or version checks to prevent a newer item in DynamoDB from being overwritten by an older item from the SQS dead-letter queue. This solution preserves the atomicity of a SQL transaction as a single, all-or-nothing operation. Lambda calls are deferred until a SQL transaction has been successfully committed by using INITIALLY DEFERRED PostgreSQL triggers. Performance efficiency of the solution Aurora integration with Lambda can introduce performance overhead. The amount of overhead depends on the complexity of the PostgreSQL trigger function and the Lambda function itself, and I recommend establishing a performance baseline by benchmarking your workload with Lambda integration disabled. Upon reenabling the Lambda integration, use Amazon CloudWatch and PostgreSQL Statistics Collector to analyze the following: Aurora CPU and memory metrics, and resize the Aurora cluster accordingly Lambda concurrency metrics, requesting a quota increase if you require more than 1,000 concurrent requests Lambda duration and success rate metrics, allocating more memory if necessary DynamoDB metrics to ensure no throttling is taking place on the DynamoDB side PostgreSQL sustained and peak throughput in rows or transactions per second If your Aurora workload is bursty, consider Lambda provisioned concurrency to avoid throttling To illustrate the performance impact of enabling Lambda integration, I provisioned two identical environments in us-east-2 with the following parameters: AuroraDBInstanceClass – db.r5.xlarge pgAdminEC2InstanceType – m5.xlarge AuroraEngineVersion – 12.4 Both environments ran a simulation of a write-heavy workload with 100 INSERT, 20 SELECT, 200 UPDATE, and 20 DELETE threads running queries in a tight loop on the Aurora postgres.orders table. One of the environments had Lambda integration disabled. After 24 hours of stress testing, I collected the metrics using CloudWatch metrics, PostgreSQL Statistics Collector, and Amazon RDS Performance Insights. 
From an Aurora throughput perspective, enabling Lambda integration on the postgres.orders table reduces the peak read and write throughput to 69% of the baseline measurement (see rows 1 and 2 in the following table).

| # | Throughput measurement | INSERT/sec | UPDATE/sec | DELETE/sec | SELECT/sec | % of baseline throughput |
|---|------------------------|------------|------------|------------|------------|--------------------------|
| 1 | db.r5.xlarge without Lambda integration | 772 | 1,472 | 159 | 10,084 | 100% (baseline) |
| 2 | db.r5.xlarge with Lambda integration | 576 | 887 | 99 | 7,032 | 69% |
| 3 | db.r5.2xlarge with Lambda integration | 729 | 1,443 | 152 | 10,513 | 103% |
| 4 | db.r6g.xlarge with Lambda integration | 641 | 1,148 | 128 | 8,203 | 81% |

To fully compensate for the reduction in throughput, one option is to double the vCPU count and memory size and change to the higher db.r5.2xlarge Aurora instance class at an increase in on-demand cost (row 3 in the preceding table). Alternatively, you can choose to retain the vCPU count and memory size, and move to the AWS Graviton2 processor-based db.r6g.xlarge Aurora instance class. Because of Graviton's better price/performance for Aurora, the peak read and write throughput is at 81% of the baseline measurement (row 4 in the preceding table), at a 10% reduction in on-demand cost in us-east-2.

As shown in the following graph, the DynamoDB table consumed between 2,630 and 2,855 write capacity units, and Lambda concurrency fluctuated between 259 and 292. No throttling was detected. You can reproduce these results by running a load generator script located in /tmp/perf.py on the pgAdmin EC2 instance.

# Lambda integration on
/tmp/perf.py 100 20 200 20 true
# Lambda integration off
/tmp/perf.py 100 20 200 20 false

Additional considerations

This solution doesn't cover the initial population of DynamoDB with XML data from Aurora. To achieve this, you can use AWS Database Migration Service (AWS DMS) or CREATE TABLE AS. Be aware of certain service limits before using this solution. The Lambda payload limit is 256 KB for asynchronous invocation, and the DynamoDB maximum item size limit is 400 KB. If your Aurora table stores more than 256 KB of XML data per row, an alternative approach is to use Amazon DocumentDB (with MongoDB compatibility), which can store up to 16 MB per document, or offload XML to Amazon Simple Storage Service (Amazon S3).

Clean up

To avoid incurring future charges, delete the CloudFormation stack. In the CloudFormation console, change the Region if necessary, choose the stack, and then choose Delete. It can take up to 20 minutes for the clean up to complete.

Summary

In this post, I proposed a modernization approach for legacy applications that make extensive use of XML in a relational database. Heavy use of nonrelational objects in a relational database can lead to scalability issues, orphaned records, and data quality challenges. By introducing DynamoDB alongside Aurora via native Lambda integration, you can gradually rearchitect legacy applications to query DynamoDB following the Command-Query Responsibility Segregation pattern. When your applications are ready to cut over reads and writes to DynamoDB, you can remove XML from Aurora tables entirely. You can extend this approach to offload JSON, YAML, and other nonrelational object types. As next steps, I recommend reviewing the Lambda function code and exploring the multitude of ways Lambda can be invoked from Aurora, such as synchronously; before, after, and instead of a row being committed; per SQL statement; or per row.
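To make the per-row, deferred trigger pattern discussed above more concrete, here is a minimal, illustrative sketch of what such a trigger and trigger function could look like. This is not the actual code deployed by the solution's CloudFormation template; it assumes the aws_lambda extension is installed on the Aurora PostgreSQL cluster, and the function name and Region are placeholders.

-- Illustrative only: forward each committed row change on the orders table to Lambda.
CREATE EXTENSION IF NOT EXISTS aws_lambda CASCADE;

CREATE OR REPLACE FUNCTION sync_orders_to_lambda() RETURNS trigger AS $$
BEGIN
  -- Send the operation code plus old/new rows as JSON; invocation_type 'Event' = asynchronous.
  PERFORM aws_lambda.invoke(
    aws_commons.create_lambda_function_arn('my-sync-function', 'us-east-2'),  -- placeholder name/Region
    json_build_object(
      'TG_OP', TG_OP,
      'NEW',   CASE WHEN TG_OP = 'DELETE' THEN NULL ELSE row_to_json(NEW) END,
      'OLD',   CASE WHEN TG_OP = 'INSERT' THEN NULL ELSE row_to_json(OLD) END),
    invocation_type := 'Event');
  RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

-- INITIALLY DEFERRED so the Lambda call happens only after the transaction commits.
CREATE CONSTRAINT TRIGGER orders_sync_to_dynamodb
  AFTER INSERT OR UPDATE OR DELETE ON orders
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW EXECUTE FUNCTION sync_orders_to_lambda();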
About the author Igor is an AWS enterprise solutions architect, and he works closely with Australia’s largest financial services organizations. Prior to AWS, Igor held solution architecture and engineering roles with tier-1 consultancies and software vendors. Igor is passionate about all things data and modern software engineering. Outside of work, he enjoys writing and performing music, a good audiobook, or a jog, often combining the latter two. https://aws.amazon.com/blogs/database/options-for-legacy-application-modernization-with-amazon-aurora-and-amazon-dynamodb/
theduoseries · 5 years ago
Text
How to build a DApp with the Cosmos SDK?
Why TruStory chose to build on the Cosmos SDK and an overview of its architecture Binance Initial author: Shane Vitarana Original link:
Why choose Cosmos SDK? ** 01_**_ TruStory starts because they build a distributed app whose appearance and experience are the same as normal apps that people are accustomed to. We don't would like users to wait for confirmation whenever they perform an operation, and we don't want customers to have to deal with outstanding transactions (). The App also calls numerous fine-grained dealings, which are all based on activities triggered at a specific time. Therefore we soon found that this cannot be accomplished on the Ethereum mainnet. We need something more delicate and robust. So we believe that sidechains based on Ethereum may be effective. We built an early on prototype on the Loom SDK (), however the framework had not been mature sufficiently to meet up our requirements at that time. Nevertheless, we like Tendermint (), that is a BFT consensus middleware, which can empower Loom. It has quick finality (about 5 seconds), which is essential for the user connection with frequently interacting social apps. Next, we made a decision to attempt the Cosmos SDK () since it was developed simply by the Tendermint team. We have been pleased to find that it gets the following good characteristics: * Statically typed compiled programming language (GO) * Well-built blockchain framework * Modular architecture * Highly configurable * Built-in governance Another thing we learned on Cosmos may be the ability to build a community of stakeholders around the network. We think that the future network can be more equivalent, giving customers and stakeholders even more control and ownership of the system. For example, companies like Airbnb will demand the united states Securities and Swap Commission () to change the guidelines to permit landlords to carry shares in the business. Compared with traditional companies (), social networks later on will tend to be more like cooperative enterprises. Users and stakeholders will be able to serve as customers of services, network operators (verifiers) or even both, and participate in network procedures pretty much according to their wishes. Furthermore, token holders will be able to delegate their tokens to validators and passively take part in the network. Development in line with the Cosmos SDK can provide us with organic technical features to understand user ownership and governance. Customers and validators can vote on upgrades and brand-new features, and assist maintain the values ??and suggestions of the network and user local community (). Users are encouraged to become good citizens and assist coordinate on the network. We've seen effective blockchain governance on Cosmos Hub (), the first network based on the SDK mainnet. (Translator's Take note: The first network based on the SDK mainnet in the Cosmos ecosystem ought to be IRIS Hub) Up to now, virtually all proposals () have significantly more than 60% of the votes. Establishing a self-sustaining and self-governing network can solve many problems of current social networks. It's no magic formula that Facebook offers issues with content material censorship () and personal privacy.
Staking is DeFi** 02_**_ TruStory's incentive mechanism is based on displaying content through pledges. Staking on content could be thought of as an development like the Facebook like key. Likes are a good signal of issue, but they do not offer any form of reward for users to invest time making content. TruStory customers pledge user-created content, which is similar to validators pledge ATOM () for benefit sharing while also safeguarding the Cosmos Hub. In this process, users earn interest in the proper execution of TRU (TruStory's native token). Zaki Manian at the cross-chain conference Without punishment, the incentive mechanism is incomplete. This content of TruStory is adjusted by users who've obtained sufficiently TRU. Users could be fined for bad behavior, and they'll be imprisoned for a period of time. Users who are imprisoned will lose some TRU as punishment. TruStory also programs to put into action staking with the infrastructure level. Put simply, super customers can operate nodes of the TruStory blockchain, also called validators. The validator is responsible for safeguarding the TruStory blockchain by signing transactions and submitting blocks in the PoS system. Similar to TruStory customers, validators pledge TRUs and you will be greatly fined for poor behaviors such as for example offline and double signing. TruStory architecture overview** 03_**_ The first version of TruStory () was a Schelling Point game ((game_theory)) predicated on confirmation and verification of content, but it has recently been transformed right into a platform for more constructive debate around ideas. The Cosmos SDK played a significant role in this turning point (), enabling us to fork our blockchain while migrating all data to the new chain. Each Cosmos chain is guided by a genesis document (#what-is-a-genesis-document) that defines the original state of the blockchain. To fork the blockchain, you may use the order line device that exports the current App condition to produce a fresh genesis file. Make use of any device that can take up a fresh chain to update the genesis file, and then it is possible to migrate. This process is similar to the common database migration () in iterative growth, which brings a far more flexible method of blockchain development. This is a high-level view of TruStory's basic architecture: TruStory architecture The TruChain Area contains a set of validators running the TruStory blockchain node (TruChain). TruChain is really a part built with the Cosmos SDK and implemented in Go. TruAPI is also written in Go and is a GraphQL lighting client responsible for querying and broadcasting dealings to the chain. In addition, it communicates with the Postgres data source to acquire client-specific information that will not have to be on-chain, including information that has nothing related to TRU ideals, such as for example chat logs and consumer configuration documents. GraphQL enables internet and mobile clients to execute information queries from the smooth merge of two chains and databases.
The native mobile client and web client are written in React Native () and React Native for Web respectively. This allows high-level code sharing between local cellular devices (iOS and Google android) and the web. They are connected to the chain via TruAPI. Some microservices can also enable TruStory, such as for example pushing notification services by observing Tendermint events. Push notifications can be extremely powerful when coupled with governance, because users can directly receive notifications of new suggestions and get reminders to vote. TruChain nodes are designed around several core custom modules: * claim * staking [1] * slashing [1] * bank [1] The opinion module is principally for users to store opinion data. It includes the content material of all views and high-level metadata for every view. This is an example of TruStory's take on man-made climate shift: The staking module provides all functions related to staking parameters. A good feature of the Cosmos SDK is that we now have block-level activities that work before and after each block is prepared. After every block, we check the expired pledges and distribute benefits. The penalty and confiscation module punishes bad behavior in the App and rewards users for spontaneous review of bad content. For instance, if an argument will be rejected by way of a certain amount of customers, the pledge tokens of the creators and supporters of the argument will undoubtedly be fined. This can help to maintain an incentive mechanism to help keep the city running. THE LENDER module tracks the supply of tokens and all transactions of users. It mainly works with the wallet function of the App. Each TruChain node also offers a CLI and REST interface, that your validator use to pledge at the network levels. To learn concerning the tools to generate the template files required for custom made modules, please check the Cosmos Module Generator: In the event you develop on top of Cosmos? ** 04_**_ As a fan of distributed technology, I am very worked up about Ethereum 2.0. Nonetheless it is still under advancement, and it will take at least per year to be ready. However, the Cosmos Hub () was launched in March 2019, and there are multiple projects running testnets. After IBC (Inter-Blockchain Conversation Process:) is implemented, Cosmos chains like TruChain will be able to speak to the Cosmos Hub and finally transfer property between Bitcoin and Ethereum through peg area (). Most distributed applications do not require thousands of PoW Ethereum nodes to make sure security. A PoS part chain or custom made blockchain guarded by hundreds or even dozens of nodes is enough. As an additional benefit, you can create a governance framework that meets the needs of one's App, and handle forks, upgrades, and decisions related to the continuing future of the network in a far more structured way. Cosmos SDK is among the best choices. However, building a community of validators and running a sovereign chain may not suit the needs of each project. Some people may decide to purchase an existing validator set and the protection it provides. For instance, Parity's Polkadot () is made around a shared safety model, where each chain is linked to a relay chain and inherits its protection. Although Cosmos currently relies on each chain to provide its security, there are plans to provide shared security in the future (). 
Another benefit of the customized Cosmos chain is normally that there is you don't need to pay gas fees inside platform tokens (such as for example Ethereum's ETH). These gas fees generally flow to events that have nothing to do with your App or project. Cosmos allows fuel to be compensated in App's indigenous tokens and enables validators to control the charges they would like to charge. For more details on how Cosmos functions, please see Preethi Kasireddy's two-part series: Part 1: So how exactly does Cosmos function, and so how exactly does it change from Bitcoin and Ethereum? the next part: ?? Sign up for the TruStory beta through beta.trustory.io (). Please keep tuned in for the launch of our public testnet [1] The staking, slashing and bank modules of TruStory will vary from the Cosmos SDK modules of the same name.
0 notes
tak4hir0 · 5 years ago
Link
Lightning Web Components is our open source UI framework to build enterprise-scale apps that run on Salesforce, Heroku, Google Cloud Platform, or anywhere else. When running these apps on these different platforms, you can choose your own backend stack and data source, or you may want surface data from Salesforce in them. In this blog post, we will explore some options and considerations when using Salesforce as the data source. Authentication Salesforce provides a comprehensive set of REST and SOAP APIs that can be used to access its data and services from a client or server. The first step before accessing the APIs, is to establish a session with Salesforce. You can either use a username and password, or any of the OAuth flows listed here. Depending on your use case, these flows can be executed by client-side or server-side JavaScript. You can either build this logic from scratch or use external libraries like JSforce. Here are some considerations when deciding on an Authentication Flow for your app. Client Side Authentication You can use the OAuth User-Agent Flow to execute the handshake process using client side JavaScript alone. It involves a simple redirection to the /oauth2/authorize endpoint and takes in the Consumer Key of a Connected App as a parameter. Once the authorization is successful, the access token is encoded in the redirection URL. When you run client-side JavaScript, all the code is executed on the user’s device, so sensitive data like passwords and client secrets are accessible and exploitable. For this reason, this flow doesn’t use the client secret. However, the access token is encoded into the redirection URL which is exposed to the user and other apps on the device. Hence, care must be taken to remove callbacks from browser history. You can call window.location.replace(); to remove the callback from the browser’s history. It is best to use this type of Auth flow when building Lightning Web Components for desktop or mobile apps that have an embedded browser. Once you have the access token, you can pass it in the header of any HTTP requests to access Salesforce APIs. Building and sending a request from client-side JavaScript poses a risk, because the access token becomes available to the client and can be exploited. Therefore, sensitive business logic involving access tokens, usernames and passwords must never be written in client side JavaScript, because they are inadvertently exposed. To increase security and provide a better level of abstraction between your custom application and the APIs, you should use a middleware like Express, MuleSoft or any other ESB of your choice. Server Side Authentication You can use the Web server flow or the JWT Bearer flow to execute the handshake process using server side JavaScript like Node JS or any other stack of your choice. In case of Lightning Web Components, the create-lwc-app tool provides an option to create and use an Express server as a backend. You can choose an OAuth flow that suits your requirements. For instance, you can use the JWT Bearer flow when you want to use a single integration user to access data on behalf of all users. Use cases include showing read-only data (e.g. product catalog) to unauthenticated users. The web-server flow on the other hand can be used for per-user authorization. Use cases include websites where data relevant to the logged in user is shown (e.g. cart, order history etc.). You can also refer to this Trailhead Module that talks in detail about the use cases for different OAuth flows.   
When running authentication flows on a server, it is expected that the server protects and securely stores all the secrets. In the case of Web Server flow, the client secret that prevents a spoofing server must be stored securely. In the case of JWT Bearer flow, an X509 Certificate that corresponds to the private key of the app must be created and stored in a keystore. These secrets and certificate aliases also have to be configurable (generally using Environment Variables) and should never be hardcoded into your codebase. This also allows you to change them without rebuilding the app and to deploy instances of your app in different environments with ease. When developing locally, for example with Node.js, these are stored in a .env file, which can then be accessed in your code by using libraries like dotenv, saving you the trouble of setting them manually every time. You should exclude sensitive configuration files like .env from version control by referencing them in specific files like .gitignore for git. Data Residency Securing access to Salesforce data doesn’t stop with authentication. Data must be stored and transmitted securely as well. Data on the Salesforce Platform is secured with its core security capabilities like Sharing Model, Object and Field Level Security and optionally Salesforce Shield for encryption and high compliance. Using Salesforce APIs allows you real time access to data without making a copy of it. The data returned by the API is bound by the permissions of the user accessing the API. Depending on your use case, you might want to replicate Salesforce data into a local/managed database. Since you can deploy Lightning Web Components Open Source (LWC OSS) apps on any platform, there are different options that each platform provides for data storage and replication. For example, Heroku Connect is an add-on by Heroku that provides a data synchronization service between Salesforce and Heroku Postgres databases. Add-Ons/Connectors like these are built to securely store tokens, and establish a session with Salesforce when needed. It is important to remember that once data is replicated locally, it is not bound by the same Sharing Model that is present in Salesforce. It is therefore necessary to implement your own access control mechanism. Also, never write the logic that queries for data or filters data based on access controls on the client side, because it can be easily tampered with. In the screenshot below, an if condition is being used by the component to only show the data relevant to the logged in user. This statement can be easily removed using browser tools which would then give the logged in user access to all the data that is being returned by the server. As a best practice, you should always use a middleware to abstract sensitive logic from the client-side and make sure that the middleware returns only the data that’s relevant to the user and nothing more. Summary In this blog post, you’ve learned about different approaches to authenticate to Salesforce from an app built with LWC OSS and what factors determine the approach you take. You’ve seen drawbacks of accessing data from the client side, and how a server can help you secure your implementation. You’ve also seen how the responsibility of data security varies with choice of data residency. 
However, it is also important to note that this blog post doesn't exhaustively list all of the options available for secure Salesforce data access; instead, it provides general patterns and principles. Now it's time to get hands-on! Below are a few resources to help you get started.

- Sample Code
- Lightning Web Components OSS foundation and documentation
- Trailhead Project: Access Salesforce Data with Lightning Web Components Open Source
- Trailhead Module: Connected App Basics

About the Author

Aditya Naag Topalli is a 13x Certified Senior Developer Evangelist at Salesforce. He focuses on Lightning Web Components, Einstein Platform Services, and integrations. He writes technical content and speaks frequently at webinars and conferences around the world. Follow him on Twitter @adityanaag.
ksirohi-blog · 6 years ago
Text
Projects
MirrorText (Python, Docker, AWS, Bash, Linux) May (2020) Built a web application that helps the user to measure text similarity between two documents using tf-idf & cosine similarity method. Used Docker for containerization and hosted the service on AWS EC2 with Linux instance. Useful for finding plagiarism, or one can use its API as well.  
Spark Application on Uber Rides Data (Spark, AWS, Bash)  May (2020) Developed a Spark application on an AWS EC2 instance to process 14 million rows of Uber rides data. Also used Bash to automate data collection and to run my scripts. I learned how fast and useful Spark is when working with huge amounts of data. 
PriceComm (AWS, Database, Data-Pipeline, Web-Scraping, JSON) March (2020) Developed a comparison platform that shows real-time prices, deals, and discounts on multiple products from a number of e-commerce websites. It is like TripAdvisor, but only for shoes and clothes.
ETL (Wearable Devices) (Spark, AWS, S3, Postgres, Python, OOPS) April (2020) Followed data warehousing best practice to perform ETL, and also designed a star schema. I learned how to combine cutting edge tools and techniques to implement fundamentals of data modeling. 
Quantitative Database Analysis (SQL, MySQL, Power BI, Python) December (2019) Designed a relational database and then calculated following things - (Customer life cycle value, Cross-Segment Analysis, Sales Forecasting). Learned how to deal with OLTP transactional data and perform joins in it. 
Stock Exchange with PyMongo (NoSQL, Python)  April (2020) Collected real-time stock exchange data by making API calls using Python, and then configured the MongoDB database to save the results as a document store.
Fraud Detection Using Customer Transactions (Machine Learning, Python) September (2019) Trained 3 different machine learning models that can detect fraudulent credit card transactions. The biggest challenges I faced were class imbalance and parameter tuning. 
Lead Score Prediction and Cost Benefit Analysis (GLM, Regression) December (2019) (Blog Post) Implemented logistic regression to perform classification and to calculate probability of possible future customer. Also, calculated cost and benefit of acquiring new customers. 
Cancer Classification Using Gene Expressions (Predictive Mod., Python)
Property Assessment System (R-Shiny, Tableau)
Analysis of Boston Utility Consumption Patterns (Python)
Fake News Classification (R, TF-IDF, RandomForest)
Visualization (Tableau)
Freight Cost Analysis for ConMed Corporation 
Life-Expectancy Data Analysis 
TripAdvisor Reviews Data Analysis (R)  