#databaseconnectivity
Text
Building Professional Web Apps with PHP and MySQL
In the dynamic realm of web development, crafting robust and efficient web applications is an exciting journey. This guide will walk you through the essentials of creating professional web apps using the powerful duo of PHP and MySQL.
Why PHP and MySQL?
PHP and MySQL make a formidable pair for several reasons. PHP, a server-side scripting language, seamlessly integrates with MySQL, a reliable and widely used relational database management system. The synergy between the two allows for the creation of dynamic, data-driven websites with ease.
#BackendDevelopment#DatabaseConnectivity#DatabaseIntegration#DynamicWebApplications#FullStackDevelopment#MySQL#MySQLDatabase#PHP#PHPandMySQLTutorial#PHPFrameworks#PHPProgramming#PHPWebDevelopment#ProfessionalApps#Server-SideDevelopment#Server-SideScripting#WebAppDesign#WebApplicationDevelopment#WebDevelopment#WebDevelopmentTools#WebProgramming
3 notes
Text
Prisma Vs Mongoose
When working with MongoDB in Node.js applications, developers often choose between Prisma and Mongoose. Both are powerful tools, but they cater to different needs and offer unique features…
#prisma#mongoose#reactjs#react#mern#mernstack#mongodb#nosql#javascript#js#nodejs#npm#prismaorm#orm#database#databaseconnectivity#developers#developer#software#softwaredeveloper#programming#programminglanguage#javascriptprogramming#advancedjavascript
1 note
Text
A Beginner's Guide to Tableau Software
Tableau Software is a powerful tool that helps you visualize data in a way that is easy to understand. Imagine you have lots of numbers and information. Tableau turns that into charts, graphs, and maps, making it simple to see patterns and trends. It is like turning a confusing spreadsheet into a colorful picture.
Getting started with Tableau is straightforward. You can drag and drop data to create different types of visualizations. It works with many data sources, like Excel, Google Sheets, and databases. You do not need to be a tech expert to use it; Tableau is designed to be user-friendly.
With Tableau, you can create dashboards that combine multiple visualizations, making it easy to share insights with others. It is great for students, business professionals, and anyone who wants to make data-driven decisions. Whether you want to track sales, analyze survey results, or see trends in website traffic, Tableau software helps you see the big picture quickly and clearly.
#Tableau#Datavisualization#Beginnerguide#Dataanalysis#Dashboards#Businessintelligence#Data#Userfriendly#Charts#Graphs#Datatrends#Visualanalytics#ExcelIntegration#Googlesheets#Databaseconnections#Datainsights
0 notes
Text
Optimizing Workflow: Connecting Contact Form 7 to External Databases
In the realm of website development, efficiency and seamless workflow are paramount. As businesses strive to capture and manage user data, the integration of Contact Form 7 with external databases emerges as a powerful solution. In this blog, we delve into the intricacies of optimizing workflow by seamlessly connecting Contact Form 7 to external databases, exploring the benefits, the process, and the impact on overall web development efficiency.

The Power of Contact Form 7
Contact Form 7 is a popular WordPress plugin that simplifies the creation and management of contact forms. Its user-friendly interface, customization options, and seamless integration with WordPress have made it a go-to choice for website owners. However, to truly harness its potential, businesses are increasingly looking to extend its functionality by integrating it with external databases.
Why Connect Contact Form 7 to External Databases?
Centralized Data Management: By connecting Contact Form 7 to an external database, user submissions are no longer confined to the WordPress backend. Instead, they are stored in a central database, streamlining data management and providing a unified view of user interactions.
Advanced Data Analysis: External databases often offer advanced querying and reporting capabilities. Connecting Contact Form 7 to such databases allows businesses to analyze user data comprehensively, gaining valuable insights for strategic decision-making.
Enhanced Security: Storing sensitive user data in an external database can enhance security measures. With robust security protocols in place, businesses can ensure the confidentiality and integrity of user information.
Scalability: As businesses grow, so does the volume of user data. External databases are designed to handle large datasets efficiently, ensuring scalability without compromising performance.
Step-by-Step Guide: Connecting Contact Form 7 to External Databases
Step 1: Choose the Right Database
Select a compatible external database for integration. Common choices include MySQL, PostgreSQL, or MongoDB. Ensure that the database supports the necessary data types and features required for your specific needs.
Step 2: Install and Configure Contact Form 7
If you haven't already, install the Contact Form 7 plugin on your WordPress website. Create the forms you need, customizing them according to your requirements.
Step 3: Set Up Database Connection
Utilize the appropriate connection method based on your chosen database. This might involve configuring database connection details, setting up authentication, and ensuring the necessary permissions are granted.
Step 4: Use Hooks and Actions
Contact Form 7 provides hooks and actions that allow developers to extend its functionality. Use these hooks to capture form submission data and trigger actions that send the data to the external database.
Step 5: Implement Error Handling and Logging
Develop robust error handling mechanisms to capture any issues that may arise during the data transfer process. Implement logging to keep track of successful submissions and identify potential areas for improvement.
Step 6: Test the Integration
Before deploying the solution on your live website, thoroughly test the integration. Check for data accuracy, submission speed, and ensure that the integration does not adversely impact the user experience.
Benefits of Optimizing Workflow with Contact Form 7 and External Databases
Efficiency: With user data seamlessly transferred to an external database, the workflow is optimized. There's no need to manually export and import data between systems, saving time and reducing the risk of errors.
Automation: The integration allows for automation of various tasks, such as updating customer records, triggering follow-up emails, or populating analytics dashboards in real-time.
Customization: External databases provide greater flexibility in data storage and retrieval. Businesses can tailor their database structure to align with specific reporting requirements or industry regulations.
Improved Analytics: The ability to perform advanced analytics on user data stored in external databases opens doors to deeper insights. Businesses can track user behavior, preferences, and trends with greater precision.
Scalability: As the volume of user interactions grows, external databases can handle the load more efficiently, ensuring a smooth and scalable workflow.
Challenges and Best Practices
Challenges:
Data Synchronization: Ensuring data consistency between the WordPress database and the external database can be a challenge. Implementing robust synchronization processes is crucial.
Security Concerns: Handling sensitive user data requires stringent security measures. Employ encryption, secure connections, and regularly audit access controls to mitigate security risks.
Best Practices:
Regular Backups: Schedule regular backups of your external database to prevent data loss in the event of unforeseen circumstances.
Optimization: Regularly optimize your database by removing unnecessary data and ensuring that queries are efficient to maintain optimal performance.
Conclusion: Elevating Web Development Efficiency
Connecting Contact Form 7 to external databases is not just a technical enhancement; it's a strategic move towards optimizing web development workflows. By streamlining data management, enhancing analytics capabilities, and ensuring scalability, businesses can stay ahead in the competitive online landscape. As technology continues to evolve, this integration serves as a testament to the adaptability and efficiency that businesses can achieve by harnessing the power of Contact Form 7 and external databases. Through careful implementation and adherence to best practices, the journey towards an optimized workflow is within reach for every web developer and business owner aiming to make the most of their online presence.
0 notes
Text
How to connect to a database in ASP.NET?

Connection Strings
Database providers require some form of connection string to connect to the database. This connection string often contains sensitive information that needs to be protected, and it usually needs to change as the application moves between environments, such as development, testing, and production.
ASP.NET Core
In ASP.NET Core the configuration system is very flexible, and the connection string can be stored in appsettings.json, an environment variable, the user secret store, or another configuration source. The following example shows the connection string stored in appsettings.json:

{
  "ConnectionStrings": {
    "EFSourceDatabase": "Server=(localdb)\\mssqllocaldb;Database=EFSource;Trusted_Connection=True;"
  }
}

The context is typically configured in Startup.cs, with the connection string being read from configuration. Note that the GetConnectionString() method looks for a configuration value whose key is ConnectionStrings:<name>. You need to import the Microsoft.Extensions.Configuration namespace to use this extension method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<EFSourceContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("EFSourceDatabase")));
}
WinForms & WPF Applications
WinForms, WPF, and ASP.NET 4 applications have a tried and tested connection string pattern. The connection string should be added to your application's App.config file (Web.config if you are using ASP.NET). If your connection string contains sensitive information, such as a username and password, you can protect the contents of the configuration file using the Secret Manager tool.

<configuration>
  <connectionStrings>
    <add name="EFSourceDatabase"
         connectionString="Server=(localdb)\mssqllocaldb;Database=EFSource;Trusted_Connection=True;" />
  </connectionStrings>
</configuration>

The providerName setting is not required on Entity Framework (EF) Core connection strings stored in App.config because the database provider is configured via code. Read the connection string using the ConfigurationManager API in your context's OnConfiguring method. You may need to add a reference to the System.Configuration framework assembly to be able to use this API.

public class EFSourceContext : DbContext
{
    public DbSet<User> User { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(
            ConfigurationManager.ConnectionStrings["EFSourceDatabase"].ConnectionString);
    }
}
Universal Windows Platform (UWP)
Connection strings in a UWP application are typically a SQLite connection that just specifies a local filename. They typically do not contain sensitive information and do not need to be changed as the application is deployed. These connection strings are usually fine to be left in code, as shown below. If you wish to move them out of code, UWP supports the concept of settings; see the App Settings section of the UWP documentation for details.

public class EFSourceContext : DbContext
{
    public DbSet<User> User { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlite("Data Source=EFSource.db");
    }
}
0 notes
Text
Connecting to a MySQL Database Using PHP with mysqli Extension

As a web developer, you want to ensure that your database connections are secure and efficient. If you're using PHP to connect to a MySQL database, the mysqli extension is a great option to achieve this. Our latest post covers the steps involved in connecting to a MySQL database using PHP with mysqli, so you can get the most out of your database connections. Check it out now!
#PHP #MySQL #mysqli #DatabaseConnection #WebDevelopment
1 note
Text
Building resilient applications with Amazon DocumentDB (with MongoDB compatibility), Part 1: Client configuration
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. You can use the same MongoDB 3.6 and 4.0 application code, drivers, and tools to run, manage, and scale workloads on Amazon DocumentDB without worrying about managing the underlying infrastructure. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. MongoDB drivers provide parameters that you can use to configure applications that connect to Amazon DocumentDB. Although it’s good to have options, it’s important to understand how these parameters impact your application behavior and what to consider when determining appropriate values for these configurations. In this multi-part series, you learn about best practices for building applications when interacting with Amazon DocumentDB. In this first post, I discuss client-side configurations and how to use them for effective connection and cursor management. Application configuration Database connection management determines an application’s behavior in terms of performance, resource utilization, availability, and resiliency. In this section, I discuss various configuration options in the MongoDB driver for effective connection management with Amazon DocumentDB. I also discuss how Amazon DocumentDB handles write concerns, configurations for journaling, and best practices for working with connection and cursor limits. I use the MongoDB Java driver throughout this post, but most of the concepts explained here are applicable to other drivers as well. A prerequisite for following the examples in this post is to provision an Amazon DocumentDB cluster and create an Amazon Elastic Compute Cloud (Amazon EC2) instance in the same VPC as your cluster to deploy your application. If you have not deployed an Amazon DocumentDB cluster yet, see Getting started with Amazon DocumentDB (with MongoDB compatibility); Part 1 – using Amazon EC2. Connection string Your application needs a database connection string to connect to the Amazon DocumentDB cluster. Navigate to your Amazon DocumentDB cluster to copy the connection string to connect to the cluster with an application. This connection string is built optimally to connect as a replica set and to use read preference as SecondaryPreferred. As shown in the following screenshot, copy the connection string to your application and replace the placeholder password with your cluster’s password. Your application can now connect to the Amazon DocumentDB cluster. The benefit of connecting as a replica set is that it enables your MongoDB driver to discover cluster topology automatically, including when instances are added or removed from the cluster. Using a replica set allows your application to be aware of the primary instance in the cluster for sending write requests and replica instances for sending read requests. The Replica set rs0 is created automatically when you create an Amazon DocumentDB cluster. When you use replicaSet=rs0 in the connection string, it tells the MongoDB driver that you’re connecting as a replica set. The following diagram illustrates that, when connecting using replica set, the application has visibility to current cluster topology. This view of topology is automatically updated when instances are added or removed. The connection string has read preference specified as readPreference=SecondaryPreferred. 
When connected as a replica set, the application uses this setting to route the read requests to the replicas and write requests to the primary. For other values that can be configured for readPreference, see read preference options. The connection string includes a cluster endpoint, which is a CNAME pointing to the primary instance of the cluster. During failover, the cluster endpoint makes sure writes from the application are redirected to the newly selected primary instance. Therefore, when you connect to your cluster as a replica set while using the readPreference as secondaryPreferred, your application can scale reads by using the available instances in your Amazon DocumentDB cluster. The reads from replica using secondaryPreferred are eventually consistent. The connection string allows the application to automatically load balance requests across available instances in the cluster and also to new instances when you perform a scale operation. Cluster endpoints provide resiliency for write operations by automatically routing the write requests to the primary instance. If you’re connecting to the Amazon DocumentDB cluster from your application via SSH tunnel, you can’t connect using replica set mode. You should drop replicaSet=rs0 from the connection string provided in the next section. The application throws timeoutError if you connect using replica set mode via SSH. For more information see, connecting to Amazon DocumentDB cluster from outside an Amazon VPC. Sample code The following code shows how to connect to Amazon DocumentDB using the connection string: configureSSL(); String username = ""; //Update username for DocumentDB String password = ""; //Update password for DocumentDB String clusterEndpoint = ":27017";//Update Cluster EndPoint for DocumentDB String template = "mongodb://%s:%s@%s/test?ssl=true&replicaSet=rs0&readpreference=%s"; String readPreference = "secondaryPreferred"; String connectionString = String.format(template, username, password, clusterEndpoint, readPreference); MongoClient mongoClient = MongoClients.create(connectionString); Monitoring metrics With the readPreference set to secondaryPreferred, the read workload should be distributed across all reader instances. The following visualization monitors the DatabaseConnections metric on the replica instances. The metric shows that the application connected to both replica instances, getting-started-with-documentdb2 and getting-started-with-documentdb3, to perform read operations, and therefore has the ability to scale reads. Let’s monitor the operations on the instances to validate that the reads are distributed to available replicas. In the application, I run a few inserts and find operations. The OpcountersQuery metric in the following visualization indicates the number of queries issued in a 1-minute period. Based on our connection string, the replica instances in the cluster should handle the find query requests. The following metric shows that only write (OpcountersInsert) requests are sent to our primary instance and no reads (OpcountersQuery) are sent to the primary instance. The CPUUtilization metric (see the following visualization) shows that about 5% CPU is utilized in each instance. When this usage increases due to increase in read traffic, for example to 75%, you can scale reads by adding more read replicas. When connected as a replica set, the application uses the MongoDB driver to monitor the Amazon DocumentDB cluster topology changes in real time. 
As new read replicas are added to the cluster, the application automatically distributes the requests to these replicas as they become available, without any changes or manual intervention to the application. For more information on scaling replicas, see scaling Amazon DocumentDB clusters. Client settings MongoDB drivers provide additional capabilities to manage connections to the database. In this section, we look at some of these settings to understand how to configure applications using these settings. Connection management Connection pooling and related timeout configurations are key for connection management. It’s a best practice to explicitly declare these configurations, and the values vary based on your use case and access patterns. When you observe connection timeout issues on your application side, review the following configurations to ensure that appropriate values are used while creating connection to the database. Connection pool A connection pool is a cached set of connections that the application can reuse to avoid creating new database connections for every request. A MongoDB driver creates connections in the pool as defined by the minPoolSize and maxPoolSize setting, when needed by the application. For Java driver, the default value for minPoolSize is 0 and maxPoolSize is 100. Default value varies by driver; for example, NodeJS driver has a default value of 5, whereas Python has default value of 100. To make sure that your application doesn’t run out of connections in the pool, it’s recommended to set max pool size to 10–20% more than the number of concurrent queries that the application runs. Typically, the value for maxPoolSize is set taking into consideration slow queries, fast queries, and overall latency requirements of your application. If maxPoolSize is set too high, your application can create database connections up to the maxPoolSize, which results in more connections than what your application needs at sustained load. These connections use memory on your application server and can adversely affect your application’s performance. It’s important to benchmark your application by simulating production workloads to determine if the chosen value for your connection pool satisfies your latency requirements. Connection pools are specific to MongoClient. Additionally, connections in the pool are declared when the MongoClient is initialized and all the connections in the pool are dropped when the MongoClient is stopped. WaitQueueSize WaitQueueSize is the maximum number of threads allowed to wait for a connection to become available from the pool. Synchronous drivers make blocking calls. These calls wait for the result of an operation and therefore it’s less likely that the application exhausts the wait queue. With asynchronous drivers, it’s common to perform concurrent operations in a request. This means using a connection from the pool for each operation, which can cause the connections to exhaust and require the application to have threads waiting for a connection. Therefore, wait queue size is generally applicable to asynchronous drivers, and available threads determine the parallel running of operations on the driver. When maxPoolSize is set too low, the probability of getting the MongoWaitQueueFull exception is high, due to unavailability of connections. If you see this exception, try increasing maxPoolSize. WaitQueueTimeMS WaitQueueTimeMS is the maximum time a thread waits for a connection to become available on the connection pool. 
If all available connections in the pool are serving concurrent requests, new requests wait for the connections to complete the operation and be available in the pool. These new requests time out if a connection is not available within a time defined by WaitQueueTimeMS. The default value varies by driver; for example, 2 minutes for a Java driver and 0 for Python. If this time is reached and no connection becomes available, a MongoTimeoutException is raised. If you’re frequently running into this exception, identify long-running queries and fine-tune them or try increasing maxPoolSize. ServerSelectionTimeoutMS ServerSelectionTimeoutMS is the maximum time the driver waits to select an instance for an operation before raising a timeout exception. The driver selects the primary instance for write operations and the replica instance for read operations, and waits for a time defined by ServerSelectionTimeoutMS before raising a MongoTimeoutException. The default value for this setting is 30 seconds and is sufficient for a new primary to be elected during failover. It’s a best practice to handle this error to make the application aware of any hardware and software problems and take appropriate actions (like changing failover time). To make your application resilient to server failures, implement appropriate retry mechanisms like exponential backoff. ConnectTimeoutMS ConnectTimeoutMS is the maximum time the driver waits before a new connection attempt is stopped. The value for this setting depends on your network infrastructure. Because Amazon DocumentDB runs in a VPC-only setup and the clients are a part of this VPC, the connection timeout should not exceed the default value of 10 seconds. The following are some of the common reasons for connection timeout errors: If your application throws MongoTimeoutException consistently, it’s likely that your application server’s security group isn’t configured to interact with Amazon DocumentDB. To remediate this issue, ensure that your application is running in the same VPC as Amazon DocumentDB, and the security group assigned to Amazon DocumentDB has an inbound rule configured to receive requests from the application server on port 27017 over TCP. If you’re setting a timeout in your application context to manage application lifecycle processes, for example setting a ContextTimeout in the go routine, you should ensure that the timeout value is set higher than the maximum session duration of your application. If these timeouts are set too low, the application can frequently close connections and open new connections, resulting in suboptimal utilization of resources. If you see frequent connection failure errors on the client (such as connection reset), look for long-running queries that are holding up the connections and optimize these queries or increase timeout setting. It’s also a best practice to set maxIdleTimeMS to close idle connections in the pool. You can also use a singleton pattern to create a MongoClient object, which we explain later in this post. MaxIdleTimeMS MaxIdleTimeMS is the maximum time the driver waits before removing and closing an idle connection in the pool. If you observe increasing connection timeout errors over a period of time, setting maxIdleTimeMS to a value that is greater than the maximum session duration of your application can help close idle connections. SocketTimeout SocketTimeout is the maximum time for sending or receiving data on a socket before timing out. 
This setting typically comes into play after a connection with the cluster is established. Each connection uses the TCP socket to send and receive data. The driver raises the appropriate type of MongoSocketException if it can’t read or write to the socket. It’s best to leave this setting at its default value of 0 (no timeout) because operations may not take the same time to complete, depending upon your access patterns and queries. The default value for SocketTimeout varies by driver. For example, Java and NodeJS have 0 or no SocketTimeout by default, but Ruby has a default value of 5 seconds. You should only change this value if you’re certain about your query patterns and response times. It’s important to note that the queries on the database don’t time out when the SocketTimeout error occurs and only the client socket is closed. Sample code The following code uses MongoClientSettings to create a client (compared to the connection string approach in the previous section) as a convenient way to configure connection management settings: MongoClientSettings settings = MongoClientSettings.builder() .applyToClusterSettings(builder -> builder.hosts(Arrays.asList(new ServerAddress(clusterEndpoint, 27017)))) .applyToClusterSettings(builder -> builder.requiredClusterType(ClusterType.REPLICA_SET)) .applyToClusterSettings(builder -> builder.requiredReplicaSetName("rs0")) .applyToClusterSettings(builder -> builder.mode(ClusterConnectionMode.MULTIPLE)) .readPreference(ReadPreference.secondaryPreferred()) .applyToSslSettings(builder -> builder.enabled(true)) .credential(MongoCredential.createCredential(username,"Admin",password.toCharArray())) .applyToConnectionPoolSettings(builder -> builder.maxSize(10)) .applyToConnectionPoolSettings(builder -> builder.maxWaitQueueSize(2)) .applyToConnectionPoolSettings(builder -> builder.maxConnectionIdleTime(10, TimeUnit.MINUTES)) .applyToConnectionPoolSettings(builder -> builder.maxWaitTime(2, TimeUnit.MINUTES)) .applyToClusterSettings(builder -> builder.serverSelectionTimeout(30, TimeUnit.SECONDS)) .applyToSocketSettings(builder -> builder.connectTimeout(10, TimeUnit.SECONDS)) .applyToSocketSettings(builder -> builder.readTimeout(0, TimeUnit.SECONDS)) .build(); MongoClient mongoClient = MongoClients.create(settings); Durable writes Amazon DocumentDB replicates six copies of the data across three Availability Zones. MongoDB drivers provide an option to tune write concern and journal files. Amazon DocumentDB doesn’t expect developers to set write concern and journal, and ignores the values sent for w and j (writeConcern and journal). Amazon DocumentDB always writes data with writeConcern: majority and journal: true so the writes are durably recorded on a majority of nodes before sending an acknowledgement to the client. This behavior makes sure that there is no data loss after receiving an acknowledgment from the database and removes the burden from developers to manually tune write concerns. For example, Amazon DocumentDB ignores the following code:: collection.withWriteConcern(WriteConcern.W1.withJournal(false)); The code is implicitly replaced as the following: collection.withWriteConcern(WriteConcern.MAJORITY.withJournal(true)); Amazon DocumentDB does not support the wtimeout option. The value passed in this setting is ignored and writes to the primary instance are guaranteed not to block indefinitely. Because the writes are always durable, setting the read preference to Primary provides read-your-own-write consistency, if desired. 
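The post recommends creating MongoClient as a singleton so that one connection pool is shared across the whole application. As a rough Java sketch that reuses the MongoClientSettings built above (the holder class name and structure are illustrative, not from the original article):

import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public final class MongoClientHolder {
    // Single shared client; its connection pool is reused by every request
    private static MongoClient client;

    private MongoClientHolder() { }

    public static synchronized MongoClient get(MongoClientSettings settings) {
        if (client == null) {
            client = MongoClients.create(settings);
        }
        return client;
    }
}

Every part of the application then calls MongoClientHolder.get(settings) instead of constructing its own client, so the maxSize configured on the connection pool bounds the total number of database connections.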
Cursor and connection limits When designing your application, it’s important to understand the service limits and quotas because it may have an impact on your design decision. For example, if your application requires support for a high number of concurrent requests and therefore needs more connections, you should select an instance size that satisfies your connection limit requirements. In this section, I discuss cursor and connection limits in detail. Cursor limits When you perform read operations against the database that results in multiple documents (greater than batch size), the server returns a cursor. You can access documents by iterating over this cursor. Amazon DocumentDB gets the full result set but instead of sending all the data to the client, it holds the result set in memory and returns the query results in batches to the client. The client can iterate through the result set using the cursor, and the batch size can be overridden using the batchSize() method on the cursor. If cursors are not closed, Amazon DocumentDB continues to hold the data in memory, waiting for the cursor to be utilized again. It’s a best practice to close cursors when you complete processing the result set. Open cursors that are idle are closed after 10 minutes of inactivity but, when managed proactively, can save you resources and cost. Amazon CloudWatch metric DatabaseCursorsMax indicates the maximum number of open cursors on an instance in a 1-minute period. It’s recommended to set an alarm when this metric is at 80% of the limit. If this alarm occurs, your developers can inspect the application code to make sure that cursors are closed or scale your database instances to increase the cursor limits. The following code is an example of closing the cursor: Document searchQuery = new Document(); searchQuery.put("firstName", "Josee"); MongoCursor cursor = collection.find(searchQuery).iterator(); try { while (cursor.hasNext()) { System.out.println(cursor.next()); } } finally { cursor.close(); } If the preceding code is called by two users as shown in the following diagram, the application opens two connections to the Amazon DocumentDB cluster. The application receives one cursor for each request. When cursors are opened, they place a lock on the memory pages of replica instance. In write-heavy use cases, when the cursors aren’t closed, the lock prevents the replica instance from catching up with the writes from the primary instance. To avoid catchup delays, Amazon DocumentDB reboots the instance if the lock is not released in 30 seconds. When an instance reboots, connection to the instance from application is lost and re-established, and your application receives MongoSocketException. You can monitor the DatabaseCursorsTimedOut metric to observe number of cursors that timed out in a 1-minute period. If the value for this metric consistently increases, it’s a good idea to review the application and identify opportunities to close the cursor. Connection limits Each MongoClient connecting to Amazon DocumentDB uses database connections as defined by the minimum and maximum pool size setting. Creating connections consumes memory on the application server and therefore must be managed appropriately. For instance, let’s say that you have a microservice that has one query that takes 3 seconds to complete, and five operational queries that take 1 second to complete. A pool size of three should be sufficient to satisfy the SLA of 3 seconds for this microservice. 
One connection is used by the long-running query for 3 seconds, and two connections are used by operational queries, and the total runtime is approximately 3 seconds. The connection pool size is associated to the MongoClient object, and it’s a best practice to always create MongoClient as a singleton object. It’s not common to increase the ConnectionPerHost setting; if you decide to do so, stress test the value that is appropriate for your use case before deploying to production. In the microservice example, if there are 10 requests to your microservice, the maximum number concurrent connections is 3 (1 * 3), if MongoClient is created as a singleton object, assuming clients are using synchronous driver. If MongoClient is not created as a singleton object, the maximum number of concurrent connections is 30 (10 * 3), because one instance of MongoClient is created for each request to microservice. For clients using asynchronous drivers, this number should be further multiplied by the value set for WaitQueueSize. If you have other applications writing data to Amazon DocumentDB, such as AWS Database Migration Service (AWS DMS), AWS Lambda, or similar, you should also factor the additional connections from these services and applications on the primary instance. If you’re connecting to Amazon DocumentDB from Lambda, it’s a best practice to create the MongoClient instance outside of the handler function in the Lambda execution context as a global variable. This allows Lambda to reuse the already established Amazon DocumentDB connections and reduces runtime, because the Lambda doesn’t have to create a new connection on every invocation. For sample code, refer this python application. Amazon DocumentDB allows you to monitor the maximum number of open connections on every instance using the DatabaseConnectionsMax metric (see the following screenshot). It’s recommended to set an alarm when this metric is at 80% of the limit. In response to this alarm, you should inspect your connection pool configuration or scale your database instances to increase the connection limits. If you’re seeing ECONNREFUSED errors on the client side, it’s possible that you’re hitting the instance limit and you should consider scaling your instances up. Summary In this post, I discussed connection strings and best practices for defining connection management configurations. I also discussed cursor and connection limits and some of the common issues and possible solutions for establishing and managing Amazon DocumentDB connections. The source code referred to in this post is available in GitHub. For more information about developing applications using Amazon DocumentDB, see Developing with Amazon DocumentDB and Migrating to Amazon DocumentDB. About the Author Karthik Vijayraghavan is a Senior DocumentDB Specialist Solutions Architect at AWS. He has been helping customers modernize their applications using NoSQL databases. He enjoys solving customer problems and is passionate about providing cost effective solutions that performs at scale. Karthik started his career as a developer building web and REST services with strong focus on integration with relational databases and hence can relate to customers that are in the process of migration to NoSQL. https://aws.amazon.com/blogs/database/building-resilient-applications-with-amazon-documentdb-with-mongodb-compatibility-part-1-client-configuration/
0 notes
Link
This post is the day-4 entry in the エムスリー (M3) Advent Calendar 2020.

Hello, this is Takahashi (@tshohe1) from the Engineering Group. Until recently I had mostly been working on the SRE team, which looks across all of our teams, but a few months ago I also joined one of the service-side teams (the electronic medical record team)! My mission on joining was to rethink the monitoring configuration from scratch as part of the architecture overhaul of our cloud electronic medical record "エムスリーデジカル" (hereafter "Digikal"), and this post summarizes how that went.

Contents: background of the architecture overhaul / rethinking monitoring and the issues to resolve / the monitoring setup flow (steps 1–8) / summary / what's next / we're hiring!

Background of the architecture overhaul
Digikal's business has been growing remarkably fast (the number of facilities using it grew tenfold in two years!), and some components could no longer be scaled under the existing configuration. We therefore decided to move away from the single shared environment and instead provision a separate environment per fixed number of facilities (horizontal partitioning). The details are summarized in the article below, so please have a look if you're curious. www.m3tech.blog

Rethinking monitoring / issues to resolve
With the architecture overhaul, the monitoring configuration also needed some review. The existing setup had the following problems as well, and fixing them became a goal:
- There are many alert settings, notifications arrive in Slack constantly, and the responders get worn out.
- Some monitoring setting names don't match their notification destinations. Perhaps because only the destination is switched under Terraform management, there were alarms whose names looked important but that never notified anyone's personal device (confusing).

The monitoring setup flow
Drawing on various SRE books and articles [*1], I thought the following steps would be a good way to set up monitoring, and tried them in practice.

Step 1: Prepare a system architecture diagram
First of all, an architecture diagram [*2] is probably needed. Having one should make the following work more efficient:
- Understanding how the system is put together becomes easier.
- It helps when investigating performance problems.
- Likely bottlenecks are easier to spot.
Fortunately, a diagram created while the new architecture was being designed, before I joined, already existed, so I could skip this step.

Step 2: Create a resource list
I listed the AWS components that make up the system in a spreadsheet:
- List the AWS services (e.g. AWS/ALB, AWS/RDS, ...).
- List the resources created in each service (e.g. ~alb, ~db, ...).

Step 3: List the metrics each available tool can collect
I then added every metric that can be collected (e.g. ActiveFlowCount, DatabaseConnections, ...) to the table. To have material for choosing a tool, I also listed the metrics that other monitoring tools under consideration besides CloudWatch (Datadog and so on) can collect.

Step 4: Classify notifications
Deciding a destination for every alert individually is a lot of work, so I grouped them into the following four classes:
- fatal — destinations: personal-device email addresses and an emergency Slack channel. Things that need immediate action, plus things that will probably recover on their own but whose system impact is large.
- warn — destinations: an internal email address and an internal Slack channel. Low urgency, or likely to recover on its own, but worth being notified about just in case. Once monitoring stabilizes, this class should become unnecessary.
- ticket — destination: a one-time notification via Slack's JIRA integration. No immediate action needed, but some work will eventually have to be done, e.g. resource usage is rising and capacity looks like it will run short.
- logging — no destination. Events kept for investigation; no notification, just recorded in CloudWatch Logs.
The notification flow: an SNS topic is created per class, and the notification rules above are configured for each. (Figure: notification flow per class.)

Step 5: Select SLIs
For each system component, I picked items that affect functions important to a medical facility's day-to-day work [*3], using the metrics from the step-3 list (or metrics that can be expressed by combining them). For example:
- Request success rate of the APIs that provide the important functions.
- Request success rate of the load balancer in front (overall monitoring, not filtered to specific URLs).
- Number of messages backed up in the queues used for asynchronous processing of important functions, and so on.
To meet the goal of preventing the "too many alerts, constant Slack notifications, exhausted responders" problem, I limited the monitored items to things with real system impact and tried to exclude potential noise such as CPU utilization.

Step 6: Set SLOs
When there are many targets, deciding alert thresholds one by one is back-breaking work. Precision would of course be higher that way, but as the number of alert targets grows, so does the cost of managing thresholds. Although the target this time is a single service, Digikal, it is internally made up of a large number of components and we felt each should have its own SLO, so the number of targets ended up fairly large (just under 20). As a solution, I adopted the class-based approach described in The Site Reliability Workbook (the second SRE book). (Applies only to the "request success rate" SLI.)
- CRITICAL — the most important functions (e.g. login): request success rate 99.95% (responses over a certain latency are treated as errors).
- HIGH_FAST — high importance and a fast response is required (e.g. viewing a patient chart): request success rate 99.9% (responses over a certain latency are treated as errors).
- HIGH_SLOW — high importance but long processing time (e.g. batch processing): request success rate 99.9%.
- LOW — low importance (e.g. components whose outage has only minor impact): request success rate 99%.

Step 7: Implement alert handling
As introduced in this talk, in the new environment we create CloudFormation templates with the AWS CDK and apply infrastructure changes that way, so the monitoring configuration was likewise added where the resources are defined.
Defined as resources shared by all shards:
- SNS topics
- Chatbot (I don't think this can be managed with Terraform, but with the AWS CDK almost anything CloudFormation supports is supported)
- A Lambda function that formats messages arriving on the ticket topic and creates JIRA tickets
- A MetricFilter that extracts processing times and error counts from the shared load balancer's logs
Defined as per-shard resources:
- CloudWatch Alarms
Also, to fix the problem of monitoring setting names not matching their notification destinations, instead of using the AWS CDK's CfnAlarm directly we prepared a wrapper that enforces the naming convention and the notification destinations.

Step 8: Build dashboards
Following "Add or remove an alarm widget from a CloudWatch dashboard," I aggregated the graphs of the CloudWatch alarms added above into a shared dashboard and per-shard dashboards [*4]. With this, the status of every SLI and of SLO violations can now be seen at a glance!
Right around this time, a new CloudWatch feature called Metrics Explorer, which can split graphs by tag, was announced. It looked useful for per-shard graphs, so I tried it, but decided to pass on it this time for the reasons below. I have high hopes for it going forward, though.
- A single graph can show only one of the metrics defined for a resource, and math expressions cannot be used (calculating success rates looks difficult).
- Custom metrics are not supported.
Incidentally, I had also considered aggregating everything into Datadog as another option, but gave up because of cost. To sync CloudWatch metrics to Datadog, the AWS integration calls the CloudWatch APIs, and at our scale that cost became quite substantial. Shortening the collection interval (default 10 minutes) makes the cost even higher, which was a bit painful... (and there were a few non-cost reasons as well).

Summary
By reconsidering which steps to take and then carrying out every one of them, I feel we ended up with a monitoring configuration that is reasonably comprehensive rather than ad hoc! That said, one of the original goals — reducing the number of monitored items to keep things simple — was not really achieved. I think the reasons are that, given the nature of an electronic medical record service, no single core function alone is enough to carry out clinical work, and that there is a lot of asynchronous processing, so monitoring of queue message backlogs and the like is unavoidable.

What's next
- Tuning toward appropriate SLOs is still very much ahead of us.
- Introducing a mechanism to reduce alerts whose production impact is minor. Right after an architecture overhaul, loosening thresholds takes a fair amount of courage and we haven't managed it yet, but considering how engineers get worn down every time an alert fires, we want to act quickly. Concretely, we are considering burn-rate alerts; they didn't make it into the initial setup, so we plan to configure them next.

We're hiring!
We are eagerly looking for engineers who want to develop and operate a fast-growing cloud electronic medical record service with enormous social significance together with us! If you're interested, please get in touch via the link below! jobs.m3.com
The Digikal team is also planning a recruiting information session on Monday, December 14 from 19:00. If this service has piqued your interest even a little, please feel free to join! m3-engineer.connpass.com

*1: Partly because I had read "Systems Performance" a little while before and was under Brendan Gregg's influence, I also referred to articles on his blog.
*2: A functional-block-diagram level of detail seems desirable, but drawing it too finely becomes painful whenever the spec changes, so a granularity that doesn't hurt is probably enough.
*3: There was a list of items that the tuning team had been checking regularly since before the architecture overhaul, and I mainly used that as a reference. If no such list exists, I suspect the only option is to interview people with deep domain knowledge.
*4: The initial setup was done manually, but basically these, too, will be managed with the AWS CDK.
0 notes
Photo

Quick installer to set up your CRM with a few clicks and some inputs, e.g. admin credentials and #databaseconnection. #sugarcrm #suitecrm #businesssolutions #callCenterCRM #RealEstateCRM For more details visit: http://bit.ly/2Hix8RB
0 notes
Video
youtube
How To Populate Combo Box Java Postgresql PgAdmin4 Netbeans Video Tutorial
populateComboboxJavaPostgresql.java
import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.logging.Level; import java.util.logging.Logger;
public class populateComboboxJavaPostgresql extends javax.swing.JFrame { Connection connCombo; PreparedStatement pst = null; ResultSet rst = null; String SqlSel = "select * from customers limit 10"; databaseConnection dbConn = new databaseConnection();
public populateComboboxJavaPostgresql() { initComponents(); connCombo = dbConn.databaseConn(); populateComboboxJavaPostgresql(); }
public void populateComboboxJavaPostgresql() { try { pst = connCombo.prepareStatement(SqlSel); rst = pst.executeQuery(); while(rst.next()) { String CompanyNameStr = rst.getString("CompanyName");
jComboBoxPopulateJava.addItem(CompanyNameStr);
} } catch (SQLException ex) { Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(Level.SEVERE, null, ex); }
} /** * This method is called from within the constructor to initialize the form. * WARNING: Do NOT modify this code. The content of this method is always * regenerated by the Form Editor. */ @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code"> private void initComponents() {
jComboBoxPopulateJava = new javax.swing.JComboBox<>();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jComboBoxPopulateJava.setModel(new javax.swing.DefaultComboBoxModel<>(new String[] { "Select Item" }));
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(138, 138, 138) .addComponent(jComboBoxPopulateJava, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(168, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(49, 49, 49) .addComponent(jComboBoxPopulateJava, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(225, Short.MAX_VALUE)) );
pack(); }// </editor-fold>
/** * @param args the command line arguments */ public static void main(String args[]) { /* Set the Nimbus look and feel */ //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) "> /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel. * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html */ try { for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) { if ("Nimbus".equals(info.getName())) { javax.swing.UIManager.setLookAndFeel(info.getClassName()); break; } } } catch (ClassNotFoundException ex) { java.util.logging.Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (InstantiationException ex) { java.util.logging.Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { java.util.logging.Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (javax.swing.UnsupportedLookAndFeelException ex) { java.util.logging.Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } //</editor-fold>
/* Create and display the form */ java.awt.EventQueue.invokeLater(new Runnable() { public void run() { new populateComboboxJavaPostgresql().setVisible(true); } }); }
// Variables declaration - do not modify private javax.swing.JComboBox<String> jComboBoxPopulateJava; // End of variables declaration }
databaseConnection.java
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.JOptionPane;
public class databaseConnection { Connection Conn = null; String url = "jdbc:postgresql://localhost:5432/northwind"; String user = "postgres"; String password = "postgres"; public Connection databaseConn() {
try { Class.forName("org.postgresql.Driver"); } catch (ClassNotFoundException ex) { Logger.getLogger(databaseConnection.class.getName()).log(Level.SEVERE, null, ex); }
try { Conn = DriverManager.getConnection(url, user, password);
JOptionPane.showMessageDialog(null, "Connected");
} catch (SQLException ex) { Logger.getLogger(databaseConnection.class.getName()).log(Level.SEVERE, null, ex); } return Conn; }
public static void main(String[] args) { databaseConnection connDatabase = new databaseConnection(); connDatabase.databaseConn(); }
}
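One small improvement to the populate method in populateComboboxJavaPostgresql.java above: the PreparedStatement and ResultSet are never closed. A variant using try-with-resources, reusing the same imports, fields, query, and combo box as in the code above, could look like this:

public void populateComboboxJavaPostgresql() {
    // try-with-resources closes the statement and result set automatically,
    // even if an exception is thrown while reading rows
    try (PreparedStatement pst = connCombo.prepareStatement(SqlSel);
         ResultSet rst = pst.executeQuery()) {
        while (rst.next()) {
            jComboBoxPopulateJava.addItem(rst.getString("CompanyName"));
        }
    } catch (SQLException ex) {
        Logger.getLogger(populateComboboxJavaPostgresql.class.getName()).log(Level.SEVERE, null, ex);
    }
}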
0 notes
Video
youtube
Connect Java with MS Access database
Easily connect Java to an MS Access database
In this video you will get to know how to connect Java with a database.
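The video walks through the steps; as a rough written sketch, one common way to do this is with the UCanAccess JDBC driver (the UCanAccess jar and its dependencies must be on the classpath, and the file path, table, and column below are placeholders, not from the video):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MsAccessConnect {
    public static void main(String[] args) throws Exception {
        // Placeholder path to the Access database file (.accdb or .mdb)
        String url = "jdbc:ucanaccess://C:/data/mydatabase.accdb";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // Placeholder table name
             ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) {
                // Print the first column of each row
                System.out.println(rs.getString(1));
            }
        }
    }
}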
#databaseConnection
0 notes
Text
Resources consumed by idle PostgreSQL connections
PostgreSQL is one of the most popular open-source relational database systems. With more than 30 years of development work, PostgreSQL has proven to be a highly reliable and robust database that can handle a large number of complex data workloads. AWS provides two managed PostgreSQL options: Amazon Relational Database Service (Amazon RDS) for PostgreSQL and Amazon Aurora PostgreSQL. This is a two-part series. In this post, I talk about how PostgreSQL manages connections and the impact of idle connections on the memory and CPU resources. In the second post, Performance impact of idle PostgreSQL connections, I discuss how idle connections impact PostgreSQL performance Connections in PostgreSQL When the PostgreSQL server is started, the main process forks to start background maintenance processes. With default configurations, the process tree looks like the following on a self-managed PostgreSQL instance: /usr/pgsql-11/bin/postmaster -D /var/lib/pgsql/11/data _ postgres: logger _ postgres: checkpointer _ postgres: background writer _ postgres: walwriter _ postgres: autovacuum launcher _ postgres: stats collector _ postgres: logical replication launcher You can see this process tree in Amazon RDS and Aurora PostgreSQL by enabling enhanced monitoring and looking at the OS Process List page (see the following screenshot). For more information, see Enhanced Monitoring. These child processes take care of activities such as logging, checkpointing, stats collection, and vacuuming. The process list in Enhanced Monitoring limits the total number of processes that are shown on the console. If you need to view the complete list of processes, consider using the pg_proctab extension to query system statistics. This extension is available in the latest RDS PostgreSQL minor versions. After initializing the maintenance child processes, the main PostgreSQL process starts waiting for new client connections. When a new connection is received, the main process forks to create a child process to handle this new connection. The main process goes back to wait for the next connection, and the newly forked child process takes care of all activities related to this new client connection. A new child process is started for each new connection received by the database. The following screenshot shows that a user app_user is connected to the database mydb from a remote host (10.0.0.123). The max_connections parameter controls the total number of connections that can be opened simultaneously. Memory used by connections PostgreSQL uses shared memory and process memory. Shared memory is a chunk of memory used primarily for data page cache. The shared_buffers parameter configures the size of the shared memory. This shared memory is used by all the PostgreSQL processes. The process memory contains process-specific data. This memory is used for maintaining the process state, data sorting, storing temporary tables data, and caching catalog data. On Linux, when a new process is forked, a copy of the process gets created. As an optimization, Linux kernel uses the copy-on-write method, which means that initially the new process keeps pointing to the same physical memory that was available in the parent process. This continues until the parent or the child process actually changes the data. At that point, a copy of the changed data gets created. This method reduces some memory overhead when PostgreSQL forks a child process on receiving the new connection. 
For more information about fork functionality, see the entry in the Linux manual. Idle connections Idle connections are one of the common causes of bad database performance. It’s very common to see a huge number of connections against the database. A common explanation is that they’re just idle connections and not actually doing anything. However, this is incorrect—they’re consuming server resources. To determine the impact of idle PostgreSQL connections, I performed a few tests using a Python script that uses the Psycopg 2 for PostgreSQL connectivity. The tests include the following parameters: Each test consists of 2 runs Each test run opens 100 PostgreSQL connections Depending on the test case, some activity is performed on each connection before leaving it idle The connections are left idle for 10 minutes before closing the connections The second test runs DISCARD ALL on the connection before leaving it idle The tests are performed using Amazon RDS for PostgreSQL 11.6 Although this post shows the results for Amazon RDS for PostgreSQL 11.6 only, these tests were also performed with Aurora PostgreSQL 11.6, PostgreSQL on EC2, and Amazon RDS for PostgreSQL 12 to confirm that we see a similar resource utilization trend. I used the RDS instance type db.m5.large for the test runs, which provides 2 vCPUs and 8GB memory. For storage, I used an IO1 EBS volume with 3000 IOPS. The DISCARD ALL statement discards the session state. It discards all temporary tables, plans, and sequences, along with any session-specific configurations. This statement is often used by connection poolers before reusing the connection for the next client. For each test, a run with DISCARD ALL has been added to see if there is any change in the memory utilization. Connections test #1: Connections with no activity This basic test determines the memory impact of newly opened connection. This test performs the following steps: Open 100 connections. Leave the connections idle for 10 minutes. Close the connections. During the 10-minute wait, check the connection state as follows: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+-------+------- idle | | 100 (1 row) The second run repeats the same steps but runs DISCARD ALL before leaving the connection idle. If you run the preceding query, you get the following output: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+-------------+------- idle | DISCARD ALL | 100 (1 row) The following Amazon CloudWatch metrics show the connections count (DatabaseConnections) and memory utilization (FreeableMemory) on an RDS PostgreSQL instance. The free memory chart shows no significant difference between the run with and without DISCARD ALL. As the connections got opened, the free memory reduced from approximately 5.27 GB to 5.12 GB. The 100 test connections used around 150 MB, which means that on average, each idle connection used around 1.5 MB. Connections test #2: Temporary tables This test determines the memory impact of creating temporary tables. In this test, the connections create and drop a temporary table in the following steps: Open a connection. 
Create a temporary table and insert 1 million rows with the following SQL statement: CREATE TEMP TABLE my_tmp_table (id int primary key, data text); INSERT INTO my_tmp_table values (generate_series(1,1000000,1), generate_series(1,1000000,1)::TEXT); Drop the temporary table: DROP TABLE my_tmp_table; Commit the changes. Repeat these steps for all 100 connections. Leave the connections idle for 10 minutes. Close the connections. During the 10-minute wait, check the connections state as follows: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+--------+------- idle | COMMIT | 100 (1 row) The second run repeats the same step but runs DISCARD ALL before leaving the connections idle. If you run the same query again, you get the following results: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+-------------+------- idle | DISCARD ALL | 100 (1 row) The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance. The free memory chart shows no significant difference between the run with and without DISCARD ALL. As the connections got opened, the free memory reduced from approximately 5.26 G to 4.22 G. The 100 test connections used around 1004 MB, which means that on average, each idle connection used around 10.04 MB. This additional memory is consumed by the buffers allocated for temporary tables storage. The parameter temp_buffers controls the maximum memory that can be allocated for temporary tables. The default value for this parameter is set to 8 MB. This memory, once allocated in a session, is not freed up until the connection is closed. Connections test #3: SELECT queries This test determines the memory impact of running some SELECT queries. In this test, each connection fetches one row from each of the tables in the PostgreSQL internal schema information_schema. In PostgreSQL 11, this schema has 60 tables and views. The test includes the following steps: Open a connection. Fetch the names of all the tables and views in information_schema: SELECT table_schema||'.'||table_name as relname from information_schema.tables WHERE table_schema='information_schema In a loop, run select on each of these tables with LIMIT 1. The following code is an example query: SELECT * FROM information_schema.columns LIMIT 1; Repeat these steps for all 100 connections. Leave the connections idle for 10 minutes. Close the connections. During the 10-minute wait, check the connections state as follows: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+--------+------- idle | COMMIT | 100 (1 row) The second run repeats the same steps but runs DISCARD ALL before leaving the connections idle. Running the query again gives you the following results: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+-------------+------- idle | DISCARD ALL | 100 (1 row) The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance. The free memory chart shows no significant difference between the run with and without DISCARD ALL. As the connections got opened, the free memory reduced from approximately 5.25 GB to 4.17 GB. 
The 100 test connections used around 1080 MB, which means that on average, each idle connection used around 10.8 MB. Connections test #4: Temporary table and SELECT queries This test is a combination of tests 2 and 3 to determine the memory impact of creating a temporary table and running some SELECT queries on same connection. The test includes the following steps: Open a connection. Fetch the names of all the tables and views in information_schema: SELECT table_schema||'.'||table_name as relname from information_schema.tables WHERE table_schema='information_schema In a loop, run select on each of these tables with LIMIT 1. The following is an example query: SELECT * FROM information_schema.columns LIMIT 1; Create a temporary table and insert 1 million rows: CREATE TEMP TABLE my_tmp_table (id int primary key, data text); INSERT INTO my_tmp_table values (generate_series(1,1000000,1), generate_series(1,1000000,1)::TEXT); Drop the temporary table: DROP TABLE my_tmp_table; Commit the changes. Repeat these steps for all 100 connections. Leave the connections idle for 10 minutes. Close the connections. During the 10-minute wait, check the connection state as follows: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+--------+------- idle | COMMIT | 100 (1 row) The second run repeats the same step but runs DISCARD ALL before leaving the connections idle. Running the preceding query gives the following results: postgres=> select state, query, count(1) from pg_stat_activity where usename='app_user' group by 1,2 order by 1,2; state | query | count -------+-------------+------- idle | DISCARD ALL | 100 (1 row) The following chart shows the connections count and the memory utilization on an RDS PostgreSQL instance. The free memory chart shows no significant difference between the run with and without DISCARD ALL. As the connections got opened, the free memory reduced from approximately 5.24 GB to 3.79 GB. The 100 test connections used around 1450 MB, which means that on average, each idle connection used around 14.5 MB. CPU impact So far, we have focused on memory impact only, but the metrics show that CPU utilization also goes up when the number of idle connections go up. The idle connections have minimal impact on the CPU, but this can be an issue if CPU utilization is already high. The following figure shows the connection counts and CPU utilizations with different numbers of idle connections. In this test, the connections were open and left idle for 10 minutes before closing the connections and waiting another 10 minutes before opening next batch of connections. The figure shows that the CPU utilization was around 1% with some small spikes to 2% with no connections. The utilization increased to 2% with 100 idle connections, increased to 3% with 500 idle connections, increased to 5% with 1,000 idle connections, increased to 6% with 1,500 idle connections and increased to 8% with 2,000 idle. Note that this utilization is for an instance with 2 vCPUs. CPU utilization goes up with the number of connections because PostgreSQL needs to examine each process to check the status. This is required irrespective of whether the connection is active or idle. Summary PostgreSQL connections consume memory and CPU resources even when idle. As queries are run on a connection, memory gets allocated. This memory isn’t completely freed up even when the connection goes idle. 
In all the scenarios described in this post, idle connections consume memory regardless of whether DISCARD ALL is run. The amount of memory consumed by each connection varies with factors such as the type and number of queries run on the connection and the use of temporary tables. In the test results shown in this post, memory utilization ranged from roughly 1.5 MB to 14.5 MB per idle connection. If your application is configured in a way that leaves connections idle, it's recommended to use a connection pooler so that memory and CPU resources aren't wasted just to manage the idle connections. The following post in this series shows how these idle connections impact your database performance. About the Author Yaser Raja is a Senior Consultant with the Professional Services team at Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous migrations of on-premises databases to Amazon RDS and Aurora PostgreSQL. https://aws.amazon.com/blogs/database/resources-consumed-by-idle-postgresql-connections/
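For readers who want to reproduce these measurements, here is a rough Python sketch of the kind of psycopg2 test harness described above. The endpoint, database name, and credentials are placeholders, and the original test script isn't published in the post, so treat this as an approximation rather than the exact code that produced the numbers.

import time
import psycopg2

RDS_ENDPOINT = "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder endpoint
NUM_CONNECTIONS = 100
IDLE_MINUTES = 10

def open_idle_connections(run_discard_all=False):
    # Open the connections, optionally run DISCARD ALL, leave them idle, then close them
    connections = []
    for _ in range(NUM_CONNECTIONS):
        conn = psycopg2.connect(
            host=RDS_ENDPOINT,
            port=5432,
            dbname="postgres",
            user="app_user",      # placeholder user, matching the pg_stat_activity output above
            password="********",  # placeholder password
        )
        if run_discard_all:
            # Connection poolers often run DISCARD ALL before reusing a session;
            # the second run of each test does the same before leaving the connection idle
            with conn.cursor() as cur:
                cur.execute("DISCARD ALL")
            conn.commit()
        connections.append(conn)

    time.sleep(IDLE_MINUTES * 60)  # leave the connections idle

    for conn in connections:
        conn.close()

if __name__ == "__main__":
    open_idle_connections(run_discard_all=False)  # first run
    open_idle_connections(run_discard_all=True)   # second run with DISCARD ALL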
Text
Creating an Amazon CloudWatch dashboard to monitor Amazon RDS and Amazon Aurora MySQL
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. A highly performant database is key to delivering latency SLAs, so monitoring is critical. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. Amazon RDS sends metrics to CloudWatch for each active database instance every minute. Monitoring is enabled by default. For example, Amazon RDS sends CPU utilization, the number of database connections in use, freeable memory, network throughput, read and write IOPS information, and more. As a part of Amazon RDS for MySQL database performance monitoring, it’s important to keep an eye on slow query logs and error logs in addition to default monitoring. Slow query logs help you find slow-performing queries in the database so you can investigate the reasons behind the slowness and tune the queries if needed. Error logs help you to find the query errors, which further helps you find the changes in the application due to those errors. However, monitoring these logs manually through log files (on the Amazon RDS console or by downloading locally) is a time-consuming process. This post talks about monitoring the slow query log and error log by creating a CloudWatch dashboard using Amazon CloudWatch Logs Insights (which enables you to interactively search and analyze your log data in Amazon CloudWatch Logs). It also covers application host metrics that help you monitor your application host. This dashboard also includes some database-related metrics to troubleshoot performance issues, which we discuss in the post. Pre-requisites Before you get started, you must complete the following prerequisites: Create an RDS for MySQL instance or Aurora MySQL cluster and make sure you selected the option to publish error and slow query logs to CloudWatch. If you have an RDS for MySQL instance already created, you can enable log exports to publish to CloudWatch by modifying the instance. Make sure the CloudWatch log group is created and logs are published to CloudWatch by looking at the log groups. You can also encrypt log data using AWS Key Management Service (AWS KMS). For instructions, see Encrypt Log Data in CloudWatch Logs Using AWS Key Management Service. Solution overview In this post, we cover the following high-level steps: Creating the CloudWatch dashboard manually Creating the CloudWatch dashboard using a template Viewing the CloudWatch dashboard Creating the CloudWatch dashboard manually This section discusses creating the CloudWatch dashboard with database metrics like the following: DatabaseConnections Deadlocks DeleteLatency UpdateLatency InsertLatency SelectLatency CommitLatency CommitThroughput DMLThroughput DeleteThroughput InsertThroughput SelectThroughput UpdateThroughput The available metrics depend on the database engine. For more information about supported metrics, see Overview of monitoring Amazon RDS and Monitoring Amazon Aurora DB cluster metrics. The following steps are specifically for creating a metric dashboard that applies to the entire cluster (aggregated numbers from all cluster instances). If you want to create instance-specific widgets, you need to look for instances and use DB instance identifiers instead of clusters and DB cluster identifiers. On the Amazon CloudWatch console, under Dashboards, choose Create dashboard. For Name, enter a name, for example demo-mysql-dev. Choose Create dashboard. 
Choose Logs table as your widget type. Choose Next. Find your log group. Log groups for Aurora MySQL and Amazon RDS for MySQL are different (see the following table). Log groups for Aurora MySQL include cluster, whereas log groups for Amazon RDS for MySQL contain instance. Choose the right log group for your slowquery log. Log Group Aurora MySQL Amazon RDS for MySQL Slowquery log /aws/rds/cluster//slowquery /aws/rds/instance//slowquery Error log /aws/rds/cluster//error /aws/rds/instance//error In the text field, enter the following insights query: parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" as Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,query | sort Time asc Choose Create widget. Rename the title to Slow queries sorted by time. Choose Save dashboard. Choose Add widget. Repeat the preceding steps to add a widget for the slowquery log, named Slow queries sorted by query time, with the following query: parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" as Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,Query | display Time, Query_time, Query | sort Query_time desc Choose Add widget. Repeat the same steps to add a widget for the error log, named Error log file, with the following query: filter @message not like /Note/ Choose Add widget to add a widget for the database CPU utilization. For the widget type, choose Line. Choose Next. Select Metrics. Choose Configure. Choose RDS. For DBClusterIdentifier, choose your DB identifier. For Metric name, choose CPUUtilization. Choose Create widget. Choose Add widget to add a widget for the database workload. For the widget type, choose Line. Choose Next. Select Metrics. Choose Configure. Choose RDS. For DBClusterIdentifier, choose your database’s metrics. The metric list may include CommitThroughput, DMLThroughput, DeleteThroughput, InsertThroughput, SelectThroughput, and UpdateThroughput. Choose Create widget. You can also create latency metrics to match with the throughputs. Choose Add widget to add a widget for the application CPU utilization. For the widget type, choose Line. Choose Next. Select Metrics. Choose Configure. Choose EC2. For Per-Instance Metric, choose your application metrics, such as CPU utilization, network in and out, or disk reads and writes. Choose Create widget. Repeat these steps to select any metrics you want, including your customized metrics, and add them to the dashboard. For example, you can add databaseConnections, Deadlocks, DeleteLatency, UpdateLatency, InsertLatency, SelectLatency, and CommitLatency. Choose Save dashboard. To back up the dashboard, on the Actions drop-down menu, choose View/edit source. Choose Copy source. Enter the code into your preferred text editor and save it. If you want to create a dashboard for another Aurora MySQL or RDS for MySQL database, you can use the backup JSON file as a template and follow the steps in the next section. Creating the CloudWatch dashboard using a template This section focuses on creating the CloudWatch dashboard using a template that creates the same widgets from the previous section. On the Amazon RDS console, under Databases, find the DB identifier name and Region of your database. For example, the following screenshot shows the DB identifier is demo-mysql-dev, and the Region is us-east-1. 
The log group /aws/rds/cluster is for creating a dashboard for Aurora MySQL (/aws/rds/instance for Amazon RDS for MySQL). Copy the following template code to any text editor. Replace with your DB identifier name. If your Region isn’t us-east-1, replace with your current Region. { "widgets": [ { "type": "log", "x": 0,"y": 0,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster//slowquery' | #fields @messagen parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" n as Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,queryn | sort Time ascn n", "region": "", "title": "Slow queries with detailed info", "view": "table" }}, { "type": "log","x": 0,"y": 6,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster//slowquery' | parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" nas Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,Query n| display Time, Query_time, Queryn| sort Query_time descn nn", "region": "", "title": "Top Slow Queries sorted by Query Time", "view": "table" }}, { "type": "metric","x": 0,"y": 18,"width": 24,"height": 6, "properties": { "metrics": [ [ "AWS/RDS", "ConnectionAttempts", "DBClusterIdentifier", "", { "visible": false } ], [ ".", "RowLockTime", ".", "." ], [ ".", "DMLLatency", ".", "." ], [ ".", "InsertLatency", ".", "." ], [ ".", "InsertThroughput", ".", ".", { "visible": false } ], [ ".", "Deadlocks", ".", "." ]], "view": "timeSeries", "stacked": true, "region": "", "period": 60, "stat": "Average" }}, { "type": "metric","x": 0,"y": 30,"width": 24,"height": 9, "properties": { "metrics": [ [ "AWS/RDS", "ConnectionAttempts", "DBClusterIdentifier", "", { "visible": false } ], [ ".", "DatabaseConnections", ".", ".", { "stat": "Maximum" } ], [ ".", "AbortedClients", ".", ".", { "visible": false } ]], "view": "timeSeries", "stacked": true, "region": "", "period": 60, "stat": "Average" }}, { "type": "metric","x": 0,"y": 39,"width": 24,"height": 6, "properties": { "view": "timeSeries", "stacked": true, "metrics": [ [ "AWS/RDS", "CPUUtilization", "DBClusterIdentifier", "" ]], "region": "", "title": "DB CPUUtilization", "period": 60 }}, { "type": "metric","x": 0,"y": 45,"width": 24,"height": 6, "properties": { "view": "timeSeries", "stacked": true, "metrics": [ [ "AWS/RDS", "DMLThroughput", "DBClusterIdentifier", "" ], [ ".", "DeleteThroughput", ".", "." ], [ ".", "InsertThroughput", ".", "." ], [ ".", "UpdateThroughput", ".", "." ], [ ".", "SelectThroughput", ".", "." ], [ ".", "CommitThroughput", ".", "." ]], "region": "", "title": "DB workLoad - CommitThroughput, DMLThroughput, DeleteThroughput, InsertThroughput, SelectThroughput, UpdateThroughput" }}, { "type": "log","x": 0,"y": 12,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster//error' | fields @messagen| sort @timestamp descn| limit 200", "region": "", "title": "Top 200 lines of Error Log", "view": "table" } } ] } On the CloudWatch console, choose Dashboards. Choose Create dashboard. Enter a name to this dashboard (for example, demo-mysql-dev). Choose Create dashboard. Choose Cancel. On the Actions drop-down menu, choose View/edit source. The dashboard source page opens with the default code. Replace the default with the preceding template code. Choose update. Choose Save dashboard. The following screenshots show the metrics on your new dashboard. 
To create the dashboard using the AWS Command Line Interface (AWS CLI), use the following template: aws cloudwatch put-dashboard --dashboard-name DB-MySQL-1 --dashboard-body '{ "widgets": [ { "type": "log", "x": 0,"y": 0,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster/demo-mysql-dev/slowquery' | #fields @messagen parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" n as Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,queryn | sort Time ascn n", "region": "us-east-1", "title": "Slow queries with detailed info", "view": "table" }}, { "type": "log","x": 0,"y": 6,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster/demo-mysql-dev/slowquery' | parse @message "# Time: * User@Host: * Id: * Query_time: * Lock_time: * Rows_sent: * Rows_examined: * timestamp=*;*" nas Time, User, Id, Query_time,Lock_time,Rows_sent,Rows_examined,timestamp,Query n| display Time, Query_time, Queryn| sort Query_time descn nn", "region": "us-east-1", "title": "Top Slow Queries sorted by Query Time", "view": "table" }}, { "type": "metric","x": 0,"y": 18,"width": 24,"height": 6, "properties": { "metrics": [ [ "AWS/RDS", "ConnectionAttempts", "DBClusterIdentifier", "demo-mysql-dev", { "visible": false } ], [ ".", "RowLockTime", ".", "." ], [ ".", "DMLLatency", ".", "." ], [ ".", "InsertLatency", ".", "." ], [ ".", "InsertThroughput", ".", ".", { "visible": false } ], [ ".", "Deadlocks", ".", "." ]], "view": "timeSeries", "stacked": true, "region": "us-east-1", "period": 60, "stat": "Average" }}, { "type": "metric","x": 0,"y": 30,"width": 24,"height": 9, "properties": { "metrics": [ [ "AWS/RDS", "ConnectionAttempts", "DBClusterIdentifier", "demo-mysql-dev", { "visible": false } ], [ ".", "DatabaseConnections", ".", ".", { "stat": "Maximum" } ], [ ".", "AbortedClients", ".", ".", { "visible": false } ]], "view": "timeSeries", "stacked": true, "region": "us-east-1", "period": 60, "stat": "Average" }}, { "type": "metric","x": 0,"y": 39,"width": 24,"height": 6, "properties": { "view": "timeSeries", "stacked": true, "metrics": [ [ "AWS/RDS", "CPUUtilization", "DBClusterIdentifier", "demo-mysql-dev" ]], "region": "us-east-1", "title": "DB CPUUtilization", "period": 60 }}, { "type": "metric","x": 0,"y": 45,"width": 24,"height": 6, "properties": { "view": "timeSeries", "stacked": true, "metrics": [ [ "AWS/RDS", "DMLThroughput", "DBClusterIdentifier", "demo-mysql-dev" ], [ ".", "DeleteThroughput", ".", "." ], [ ".", "InsertThroughput", ".", "." ], [ ".", "UpdateThroughput", ".", "." ], [ ".", "SelectThroughput", ".", "." ], [ ".", "CommitThroughput", ".", "." ]], "region": "us-east-1", "title": "DB workLoad - CommitThroughput, DMLThroughput, DeleteThroughput, InsertThroughput, SelectThroughput, UpdateThroughput" }}, { "type": "log","x": 0,"y": 12,"width": 24,"height": 6, "properties": { "query": "SOURCE '/aws/rds/cluster/demo-mysql-dev/error' | fields @messagen| sort @timestamp descn| limit 200", "region": "us-east-1", "title": "Top 200 lines of Error Log", "view": "table" } } ] }' Viewing the CloudWatch dashboard To view your dashboard, complete the following steps: On the CloudWatch console, make sure you’re in the right Region. On the navigation pane, choose Dashboards. Choose your dashboard (for this post, demo-mysql-dev). You can see multiple sections on this page, as in the following screenshot. 
The top section shows detailed information about your slow queries; the second shows the top slow queries sorted by query time. You can view the slow queries over any range of UTC time. To use one of the default time ranges, choose your preferred range on the navigation bar. To define a new time range, choose custom and select your preferred start and end times. Summary Manually reviewing Amazon RDS for MySQL or Aurora MySQL slow query or error logs for specific errors on a daily basis is time consuming. Creating a dashboard that exposes these logs makes monitoring errors and slow queries easy. This dashboard also shows how to complement log file monitoring with CloudWatch and Performance Insights data on a single CloudWatch dashboard for the database, and optionally for the application servers as well. About the Authors Shunan Xiang is a Database Consultant with the Professional Services team at Amazon Web Services. He works as a database migration specialist to provide technical guidance and help Amazon customers migrate their on-premises databases to AWS. Baji Shaik is a Consultant with AWS ProServe, GCC India. His background spans a wide depth and breadth of expertise and experience in SQL/NoSQL database technologies. He is a Database Migration Expert and has developed many successful database solutions addressing challenging business requirements for moving databases from on-premises to Amazon RDS and Aurora PostgreSQL. He is an eminent author, having written several books on PostgreSQL. A few of his recent works include "PostgreSQL Configuration", "Beginning PostgreSQL on the Cloud", and "PostgreSQL Development Essentials". Furthermore, he has delivered several conference and workshop sessions. https://aws.amazon.com/blogs/database/creating-an-amazon-cloudwatch-dashboard-to-monitor-amazon-rds-and-amazon-aurora-mysql/
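If you script your monitoring setup in Python rather than the AWS CLI, the same dashboard can be created with boto3. The following is a minimal sketch that builds a reduced dashboard containing just the slow query widget and the CPU widget; it assumes the placeholder cluster identifier demo-mysql-dev and the us-east-1 Region used throughout this post, and you can extend the widget list with the full template above as needed.

import json
import boto3

REGION = "us-east-1"
DB_CLUSTER_ID = "demo-mysql-dev"  # placeholder DB cluster identifier
SLOWQUERY_LOG_GROUP = "/aws/rds/cluster/" + DB_CLUSTER_ID + "/slowquery"

# Logs Insights widget that parses the slow query log, as in the manual steps above
slowquery_widget = {
    "type": "log",
    "x": 0, "y": 0, "width": 24, "height": 6,
    "properties": {
        "query": (
            "SOURCE '" + SLOWQUERY_LOG_GROUP + "' | "
            "parse @message \"# Time: * User@Host: * Id: * Query_time: * Lock_time: * "
            "Rows_sent: * Rows_examined: * timestamp=*;*\" "
            "as Time, User, Id, Query_time, Lock_time, Rows_sent, Rows_examined, timestamp, query "
            "| sort Time asc"
        ),
        "region": REGION,
        "title": "Slow queries sorted by time",
        "view": "table",
    },
}

# Metric widget for cluster-level CPU utilization
cpu_widget = {
    "type": "metric",
    "x": 0, "y": 6, "width": 24, "height": 6,
    "properties": {
        "metrics": [["AWS/RDS", "CPUUtilization", "DBClusterIdentifier", DB_CLUSTER_ID]],
        "view": "timeSeries",
        "stacked": True,
        "region": REGION,
        "title": "DB CPUUtilization",
        "period": 60,
    },
}

cloudwatch = boto3.client("cloudwatch", region_name=REGION)
cloudwatch.put_dashboard(
    DashboardName=DB_CLUSTER_ID,
    DashboardBody=json.dumps({"widgets": [slowquery_widget, cpu_widget]}),
)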
Text
Making better decisions about Amazon RDS with Amazon CloudWatch metrics
If you are using Amazon Relational Database Service (RDS), you may wonder about how to determine the best time to modify instance configurations. This may include determining configurations such as instance class, storage size, or storage type. Amazon RDS supports various database engines, including MySQL, PostgreSQL, SQL Server, Oracle, and Amazon Aurora. Amazon CloudWatch can monitor all these engines. These CloudWatch metrics not only guide you to select the optimal instance class, but also help you choose appropriate storage sizes and types. This post discusses how to use CloudWatch metrics to determine Amazon RDS modifications for optimal database performance. CPU and memory consumption In Amazon RDS, you can monitor CPU by using CloudWatch metrics CPUUtilization, CPUCreditUsage, and CPUCreditBalance. All Amazon RDS instance types support CPUUtilization. CPUCreditUsage and CPUCreditBalance are only applicable to burstable general-purpose performance instances. CPUCreditUsage is defined as the number of CPU credits spent by the instance for CPU utilization. CPU credits govern the ability to burst above baseline level for burstable performance instances. A CPU credit provides the performance of a full CPU core running at 100% utilization for one minute. CPUUtilization shows the percent utilization of CPU at the instance. Random spikes in CPU consumption may not hamper database performance, but sustained high CPU can hinder upcoming database requests. Depending on the overall database workload, high CPU (70%–90%) at your Amazon RDS instance can degrade the overall performance. If a bad or unexpected query, or unusually high workload, causes a high value of CPUUtilization, you might move to a larger instance class. Amazon RDS Performance Insights help to detect bad SQL queries that consume a high amount of CPU. For more information, see Using Performance Insights to Analyze Performance of Amazon Aurora PostgreSQL on YouTube. The following CloudWatch graph shows a pattern of high CPU consumption. The CPU is consistently high for a long duration. This sawtooth pattern is a good indication that you should upgrade the Amazon RDS instance to a higher instance class. Memory is another important metric that determines the performance of the Amazon RDS and helps to make decisions regarding Amazon RDS configurations. Amazon RDS supports the following memory-related metrics: FreeableMemory – The amount of physical memory the system isn’t using and the total amount of buffer or page cache memory that is free and available. If you have configured the database workload optimally and one or more bad queries are not causing low FreeableMemory, a pattern of low FreeableMemory suggests you should scale up the Amazon RDS instance class to a higher memory allocation. When making decisions based on FreeableMemory, it’s important to look at enhanced monitoring metrics, especially Free and Cached. For more information, see Enhanced Monitoring. SwapUsage – The amount of swap space used on the DB instance. In Linux-hosted databases, a high value of SwapUsage typically suggests that the instance is memory-deficient. Disk space consumption Amazon RDS Oracle, MySQL, MariaDB, and PostgreSQL engines support 64 TiB of storage space. Amazon Aurora storage automatically grows in 10 GB increments up to 64 TB. Amazon RDS engines also support storage auto scaling. This option automatically increases the storage by 5 GiB or 10% of currently allocated storage, whichever is higher. 
The CloudWatch metric FreeStorageSpace measures the amount of available storage space of an instance. If the amount of data increases, you see a decline on the FreeStorageSpace graph. If FreeStorageSpace is around 10%–15%, it’s a good time to scale storage. A sudden spike in storage consumption suggests that you should look at the database workload. Heavy write activity, detailed logging, or large numbers of transactional logs are significant contributors to lower free storage. The following graph shows an Amazon RDS PostgreSQL instance’s FreeStorageSpace metric. It shows that free storage dropped approximately 90% in 20 minutes. While troubleshooting this issue, the parameter log_min_duration_statement was set to 0. This means each SQL statement was being logged and filling transactional log files. These troubleshooting steps, and CloudWatch graphs help you to decide when to tune the database engine or scale out instance storage. Database connections The DatabaseConnections metric determines the number of database connections in use. For an optimal workload, the number of current connections should not exceed approximately 80% of your maximum connections. The max_connections parameter determines the maximum number of connections permitted in Amazon RDS. You can modify this in the parameter group. For more information, see Working with DB Parameter Groups. The default value of this parameter depends on the total RAM of the instance. For example, for Amazon RDS MySQL instances, the default value is derived by the formula {DBInstanceClassMemory/12582880}. You should move to an Amazon RDS instance class with higher RAM if the number of database connections is consistently around 80% max_connections. This ensures that Amazon RDS can have a higher number of database connections. For example, an Amazon RDS PostgreSQL instance is hosted on an db.t2.small instance, and the formula LEAST({DBInstanceClassMemory/9531392},5000) sets max_connections to 198 by default. You can find the value of this instance through the Amazon RDS console and AWS Command Line Interface (CLI) or by connecting to the instance. See the following code: postgres=> SHOW max_connections; max_connections ----------------- 198 The following CloudWatch graph shows that database connections have exceeded the max_connections value multiple times between 12:40 and 12:48. When the number of DB connections exceeds max_connections, you receive the following error message: FATAL: remaining connection slots are reserved for non-replication superuser connections In this situation, you should determine the reason behind the high number of database connections. If there are no errors in the workload, you should consider scaling up to an Amazon RDS instance class with more memory. I/O operations per second (IOPS) metrics Storage type and size govern IOPS allocation in Amazon RDS SQL Server, Oracle, MySQL, MariaDB, and PostgreSQL instances. With General Purpose SSD storage, baseline IOPS are calculated as three times the amount of storage in GiB. For optimal instance performance, the sum of ReadIOPS and WriteIOPS should be less than the allocated IOPS. Beyond burst capacity, increased usage of IOPS may lead to performance degradation, which manifests in increased ReadLatency, WriteLatency, and DiskQueueDepth. If the total IOPS workload is consistently 80%–90% of the baseline IOPS, consider modifying the instance and choosing a higher IOPS capacity. 
You can achieve this through a few different methods: increasing General Purpose SSD storage, changing the storage type to Provisioned IOPS, or using Aurora. Increasing General Purpose SSD storage You can increase your General Purpose SSD storage so that the instance gets three times the amount of storage in GiB. Though you can create MySQL, MariaDB, Oracle, and PostgreSQL Amazon RDS DB instances with up to 64-TiB of storage, the max baseline performance you can achieve is 16,000 IOPS. This means that for 5.34-TiB to 64-TiB storage volume, the instance has a maximum 16,000 IOPS baseline performance. If you see that ReadIOPS are contributing more toward total IOPS consumption, you should move to a higher instance class with more RAM. If the database working set is almost all in memory, the ReadIOPS should be small and stable. In the following example, an Amazon RDS PostgreSQL instance is configured with 100 GiB GP2 storage. This storage provides 300 IOPS capacity with burst capability for an extended period. As the following graphs show, at 03/27 12:48, WriteIOPS was at 480 and ReadIOPS was 240. The total sum of these (720) was far beyond the baseline capacity of 300. This caused high WriteLatecny and high DiskQueueDepth. The following graph shows the WriteIOPS value as 480 at 12:48. The following graph shows the ReadIOPS value as 240 at 12:48. The following graph shows a high WriteLatency of 78 ms at 12:48. The following graph shows a high DiskQueueDepth of 38 at 12:48. Provisioned IOPS If the instance requires more than 16,000 baseline IOPS or low I/O latency and consistent I/O throughput, consider changing your storage type to Provisioned IOPS. For MariaDB, MySQL, Oracle, and PostgreSQL, you can choose PIOPS in the 1000–80,000 range. Amazon Aurora Consider using Amazon Aurora if the database IOPS performance isn’t limited to a certain number. Limits may be governed by size or type of storage volume. Aurora doesn’t have the same type of IOPS limit; you don’t have to manage, provision, or expand IOPS capacity. Instance size primarily determines the transactional and compute performance of an Aurora workload. The maximum number of IOPS depends on the read/write throughput limit of the Aurora instance. You are not throttled due to the IOPS, but due to the instance’s throughput limit. For more information, please see Choosing the DB Instance Class. Aurora is a MySQL and PostgreSQL compatible relational database solution with a distributed, fault-tolerant, and self-healing storage system. The Aurora storage automatically scales up to 64 TiB. Aurora offers up to 15 Read Replicas, compared to Amazon RDS engines, which provide up to five replicas in a Region. Throughput limits An Amazon RDS instance has two types of throughput limits: Instance level and EBS volume level limits. You can monitor instance level throughput with the metrics WriteThroughput and ReadThroughput. WriteThroughput is the average number of bytes written to disk per second. ReadThroughput is the average number of bytes read from disk per second. For example, a db.m4.16xlarge instance class supports 1,250-MB/s maximum throughput. The EBS volume throughput limit is 250 MiB/S for GP2 storage based on 16 KiB I/O size, and 1,000 MiB/s for Provisioned IOPS storage type. If you experience degraded performance due to a throughput bottleneck, you should validate both of these limits and modify the instance as needed. 
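As a practical illustration of the IOPS guidance above, you can pull ReadIOPS and WriteIOPS from CloudWatch and compare their sum to three times the allocated storage, which is the General Purpose SSD baseline. The following is a rough boto3 sketch; the instance identifier and allocated storage value are placeholders you would replace with your own, and the one-hour window and 80% threshold are arbitrary choices for illustration.

import datetime
import boto3

REGION = "us-east-1"
DB_INSTANCE_ID = "my-rds-instance"   # placeholder DB instance identifier
ALLOCATED_STORAGE_GIB = 100          # placeholder; gp2 baseline is 3 IOPS per GiB

cloudwatch = boto3.client("cloudwatch", region_name=REGION)
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

def average_metric(metric_name):
    # Average of the 5-minute datapoints over the last hour for one Amazon RDS metric
    datapoints = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE_ID}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )["Datapoints"]
    if not datapoints:
        return 0.0
    return sum(point["Average"] for point in datapoints) / len(datapoints)

total_iops = average_metric("ReadIOPS") + average_metric("WriteIOPS")
baseline_iops = 3 * ALLOCATED_STORAGE_GIB

print("Average total IOPS over the last hour:", round(total_iops))
print("gp2 baseline IOPS:", baseline_iops)
if total_iops > 0.8 * baseline_iops:
    print("IOPS usage is close to the baseline: consider more storage, Provisioned IOPS, or Aurora.")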
Amazon RDS Performance Insights Performance Insights monitors your database instance workload so you can monitor and troubleshoot database performance. With the help of database load and wait event data, you get a complete picture of the state of the instance. You can use this data for modifications in the instance or workload for overall better database performance. The following CloudWatch graph shows a high CommitLatency in an Aurora PostgreSQL instance. The CommitLatency is 68 ms at 15:33. The following graph shows a high IO:XactSync between 15:33 and 15:45. Looking at Performance Insights, you see that at the time of high CommitLatency, the wait event IO:XactSync was high too. This wait event associates with the CommitLatency and is the time spent waiting for the commit of the transaction to be durable. It happens when a session is waiting for writes to stable storage. This wait most often arises when there is a high rate of commit activity on the system. During this latency, Aurora is waiting for Aurora storage to acknowledge persistence. In this case, the storage persistence might be competing for CPU with CPU-intensive database workloads. To alleviate this scenario, you can reduce those workloads or scale up to a DB instance with more vCPUs. Summary This post discussed CloudWatch metrics related to Amazon RDS and Performance Insights, and how you can use those to make decisions about your database. These metrics help you decide on compute and storage scaling, database engine performance tuning, and workload modifications. The post also reviewed various storage classes that Amazon RDS offers and how Amazon Aurora works differently compared to Amazon RDS instances with EBS volumes. This knowledge can help you to troubleshoot, evaluate, and decide on Amazon RDS modifications. For more information, see How to use CloudWatch metrics to decide between General Purpose or Provisioned IOPS for your RDS database and Using Amazon RDS Performance Insights. About the Author Vivek Singh is a Senior Database Specialist with AWS focusing on Amazon RDS/Aurora PostgreSQL engines. He works with enterprise customers providing technical assistance on PostgreSQL operational performance and sharing database best practices. https://probdm.com/site/MTkxNDA
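As a companion check for the Database connections section above, the following sketch compares the recent peak of DatabaseConnections against a max_connections value. The instance identifier and the max_connections figure are placeholders (198 matches the db.t2.small example earlier); in practice you would read max_connections from the DB parameter group or from the database itself.

import datetime
import boto3

REGION = "us-east-1"
DB_INSTANCE_ID = "my-rds-postgres"   # placeholder DB instance identifier
MAX_CONNECTIONS = 198                # placeholder; e.g., the db.t2.small default shown above

cloudwatch = boto3.client("cloudwatch", region_name=REGION)
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=24)

# Maximum DatabaseConnections in each 5-minute window over the last 24 hours
datapoints = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="DatabaseConnections",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE_ID}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Maximum"],
)["Datapoints"]

peak = max((point["Maximum"] for point in datapoints), default=0)
print("Peak connections over the last 24 hours:", int(peak), "of", MAX_CONNECTIONS)
if peak > 0.8 * MAX_CONNECTIONS:
    print("Connections regularly exceed 80% of max_connections: "
          "check the workload, then consider a larger instance class.")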
Video
youtube
import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.table.DefaultTableModel;
/* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */
/** * * @author Authentic */ public class populateJTableJavaPostgresqlDatabase extends javax.swing.JFrame { PreparedStatement pst = null; ResultSet rst = null; Connection connDbc = null; databaseConnection dbc = new databaseConnection(); public populateJTableJavaPostgresqlDatabase() { initComponents(); connDbc = dbc.databaseConn(); populateJtable(); }
public void populateJtable() {
try {
    // Fetch a small sample of customers from the Northwind database
    String sqlSelectDataFromDatabase = "select * from customers limit 10";
    pst = connDbc.prepareStatement(sqlSelectDataFromDatabase);
    rst = pst.executeQuery();

    // Fetch the JTable's model once, rather than on every loop iteration
    DefaultTableModel dftable = (DefaultTableModel) jTablePopulateData.getModel();

    while (rst.next()) {
        // Read the columns of the current customer row
        String companyName = rst.getString("CompanyName");
        String contactName = rst.getString("ContactName");
        String contactTitle = rst.getString("ContactTitle");
        String address = rst.getString("Address");

        // Append the row to the table model so it appears in the JTable
        Object[] obj = {companyName, contactName, contactTitle, address};
        dftable.addRow(obj);
    }
} catch (SQLException ex) { Logger.getLogger(populateJTableJavaPostgresqlDatabase.class.getName()).log(Level.SEVERE, null, ex); }
} @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code"> private void initComponents() {
jScrollPane1 = new javax.swing.JScrollPane(); jTablePopulateData = new javax.swing.JTable();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jTablePopulateData.setModel(new javax.swing.table.DefaultTableModel( new Object [][] {
}, new String [] { "Title 1", "Title 2", "Title 3", "Title 4" } )); jScrollPane1.setViewportView(jTablePopulateData);
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addContainerGap() .addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 375, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(19, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 275, javax.swing.GroupLayout.PREFERRED_SIZE) .addGap(0, 25, Short.MAX_VALUE)) );
pack(); }// </editor-fold>
/** * @param args the command line arguments */ public static void main(String args[]) { /* Set the Nimbus look and feel */ //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) "> /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel. * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html */ try { for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) { if ("Nimbus".equals(info.getName())) { javax.swing.UIManager.setLookAndFeel(info.getClassName()); break; } } } catch (ClassNotFoundException ex) { java.util.logging.Logger.getLogger(populateJTableJavaPostgresqlDatabase.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (InstantiationException ex) { java.util.logging.Logger.getLogger(populateJTableJavaPostgresqlDatabase.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { java.util.logging.Logger.getLogger(populateJTableJavaPostgresqlDatabase.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (javax.swing.UnsupportedLookAndFeelException ex) { java.util.logging.Logger.getLogger(populateJTableJavaPostgresqlDatabase.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } //</editor-fold>
/* Create and display the form */ java.awt.EventQueue.invokeLater(new Runnable() { public void run() { new populateJTableJavaPostgresqlDatabase().setVisible(true); } }); }
// Variables declaration - do not modify private javax.swing.JScrollPane jScrollPane1; private javax.swing.JTable jTablePopulateData; // End of variables declaration }
Video
youtube
import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.JOptionPane;
public class databaseConnection {

    Connection Conn = null;

    // Connection settings for a local PostgreSQL server hosting the Northwind sample database
    String url = "jdbc:postgresql://localhost:5432/northwind";
    String user = "postgres";
    String password = "postgres";

    public Connection databaseConn() {
try { Class.forName("org.postgresql.Driver"); } catch (ClassNotFoundException ex) { Logger.getLogger(databaseConnection.class.getName()).log(Level.SEVERE, null, ex); }
try { Conn = DriverManager.getConnection(url, user, password);
JOptionPane.showMessageDialog(null, "Connected");
} catch (SQLException ex) { Logger.getLogger(databaseConnection.class.getName()).log(Level.SEVERE, null, ex); } return Conn; }
public static void main(String[] args) { databaseConnection connDatabase = new databaseConnection(); connDatabase.databaseConn(); }
}