#microservices postgres
Explore tagged Tumblr posts
Video
youtube
Run NestJS Microservices in a Docker Container with a PostgreSQL Database. Full video link - https://youtu.be/HPvpxzagsNg Check out this new video about running NestJS microservices in a Docker container with a PostgreSQL DB on the CodeOneDigest YouTube channel! Learn to set up a NestJS project with dependencies, create a Docker image of the project, and connect the NestJS application to a PostgreSQL database. #postgresql #nestjs #docker #dockerfile #microservices #codeonedigest @nestframework @nodejs @typescript @Docker @PostgreSQL @typeormjs @JavaScript @dotenvx @npmjs @vscodetips @getpostman
#youtube#nestjs microservice#nestjs tutorial#nestjs full tutorial#nestjs complete course#nestjs microservice with postgres#run nestjs microservice project in docker#nestjs docker postgres#nestjs postgresql
1 note
·
View note
Text
if my goal with this project was just "make a website" I would just slap together some html, css, and maybe a little bit of javascript for flair and call it a day. I'd probably be done in 2-3 days tops. but instead I have to practice and make myself "employable" and that means smashing together as many languages and frameworks and technologies as possible to show employers that I'm capable of everything they want and more. so I'm developing apis in java that fetch data from a postgres database using spring boot with authentication from spring security, while coding the front end in typescript via an angular project served by nginx with https support and cloudflare protection, with all of these microservices running in their own docker containers.
basically what that means is I get to spend very little time actually programming and a whole lot of time figuring out how the hell to make all these things play nice together - and let me tell you, they do NOT fucking want to.
but on the bright side, I do actually feel like I'm learning a lot by doing this, and hopefully by the time I'm done, I'll have something really cool that I can show off
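For anyone curious what wiring a stack like that together can look like, here is a rough docker-compose sketch. Service names, images, and paths are invented for illustration; this is not the author's actual setup.

```yaml
# Hypothetical layout: Spring Boot API + Postgres + Angular behind nginx.
version: "3.8"
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example          # use a proper secret in real deployments
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across container restarts
  api:
    build: ./backend                      # Spring Boot + Spring Security service
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/postgres
    depends_on:
      - db
  web:
    build: ./frontend                     # Angular app served by nginx
    ports:
      - "443:443"                         # TLS terminated here or at Cloudflare
    depends_on:
      - api
volumes:
  pgdata:
```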
8 notes
·
View notes
Text
Using Docker in Software Development
Docker has become a vital tool in modern software development. It allows developers to package applications with all their dependencies into lightweight, portable containers. Whether you're building web applications, APIs, or microservices, Docker can simplify development, testing, and deployment.
What is Docker?
Docker is an open-source platform that enables you to build, ship, and run applications inside containers. Containers are isolated environments that contain everything your app needs—code, libraries, configuration files, and more—ensuring consistent behavior across development and production.
Why Use Docker?
Consistency: Run your app the same way in every environment.
Isolation: Avoid dependency conflicts between projects.
Portability: Docker containers work on any system that supports Docker.
Scalability: Easily scale containerized apps using orchestration tools like Kubernetes.
Faster Development: Spin up and tear down environments quickly.
Basic Docker Concepts
Image: A snapshot of a container. Think of it like a blueprint.
Container: A running instance of an image.
Dockerfile: A text file with instructions to build an image.
Volume: A persistent data storage system for containers.
Docker Hub: A cloud-based registry for storing and sharing Docker images.
Example: Dockerizing a Simple Python App
Let’s say you have a Python app called app.py:

# app.py
print("Hello from Docker!")

Create a Dockerfile:

# Dockerfile
FROM python:3.10-slim
COPY app.py .
CMD ["python", "app.py"]

Then build and run your Docker container:

docker build -t hello-docker .
docker run hello-docker
This will print Hello from Docker! in your terminal.
Popular Use Cases
Running databases (MySQL, PostgreSQL, MongoDB)
Hosting development environments
CI/CD pipelines
Deploying microservices
Local testing for APIs and apps
Essential Docker Commands
docker build -t <name> . — Build an image from a Dockerfile
docker run <image> — Run a container from an image
docker ps — List running containers
docker stop <container_id> — Stop a running container
docker exec -it <container_id> bash — Access the container shell
Docker Compose
Docker Compose allows you to run multi-container apps easily. Define all your services in a single docker-compose.yml file and launch them with one command:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  db:
    image: postgres

Start everything with: docker-compose up
Best Practices
Use lightweight base images (e.g., Alpine)
Keep your Dockerfiles clean and minimal
Ignore unnecessary files with .dockerignore
Use multi-stage builds for smaller images (see the sketch after this list)
Regularly clean up unused images and containers
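To make the multi-stage point concrete, here is a hedged sketch of a two-stage build for the Python app above; base images and paths are illustrative rather than prescriptive.

```dockerfile
# Stage 1: install dependencies in a full-size image.
FROM python:3.10 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages and the code into a slim runtime image.
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```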
Conclusion
Docker empowers developers to work smarter, not harder. It eliminates "it works on my machine" problems and simplifies the development lifecycle. Once you start using Docker, you'll wonder how you ever lived without it!
0 notes
Text
Java / Spring Boot: “I’ve built and maintained microservices using Java and Spring Boot, focusing on modular, testable code and RESTful API development.”
Python: “I’ve used Python for data processing pipelines, automation scripts, and integrating with financial APIs. It’s especially useful for tasks like parsing, enrichment, and lightweight services.”
Kafka / Messaging (if applicable): “In prior roles, I integrated reactive streams using Kafka and MicroProfile, handling asynchronous messaging for payment and trading systems.”
Databases: “I’m experienced with both SQL (Postgres, Oracle) and NoSQL (Redis, MongoDB) – particularly in designing schemas for real-time and high-volume environments.”
💼 Finance / Fixed Income Familiarity
“I understand the importance of latency and reliability in financial applications, especially when dealing with trade orders or market data ingestion.”
“In fixed income, pricing models and risk calculations often rely on real-time and historical data – I’ve worked on systems that support high-throughput data streams and enrichment pipelines.”
🧠 Problem Solving & Impact
“At my last job, I optimized a slow-performing Spring Boot service by refactoring the database queries and implementing caching with Redis – improving response time by over 40%.”
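As a rough illustration of the Redis-backed caching described above (not the actual service), Spring's cache abstraction can be wired up roughly like this, assuming spring-boot-starter-cache and spring-boot-starter-data-redis are on the classpath and spring.cache.type=redis is set:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableCaching // with spring.cache.type=redis, cached values live in Redis
public class CachingDemo {
    public static void main(String[] args) {
        SpringApplication.run(CachingDemo.class, args);
    }
}

@Service
class QuoteService {
    // First call per ISIN does the slow work; repeat calls are served from the "quotes" cache.
    @Cacheable(cacheNames = "quotes", key = "#isin")
    public String latestQuote(String isin) throws InterruptedException {
        Thread.sleep(200); // stand-in for the slow database query being cached
        return "quote-for-" + isin;
    }
}
```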
“I helped build a real-time enrichment system that took in raw Kafka messages, enriched them via Redis lookups, transformed them into domain-specific models, and produced the enriched message back into a Kafka topic.”
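The enrichment flow in that quote can be pictured with a plain consumer/producer loop; this is a hedged sketch with invented topic and key names, using the standard kafka-clients and Jedis APIs rather than the original codebase:

```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import redis.clients.jedis.Jedis;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class EnrichmentLoop {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "enricher");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
             Jedis redis = new Jedis("localhost", 6379)) {
            consumer.subscribe(Collections.singletonList("raw-trades"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Enrich the raw message with reference data looked up in Redis by key.
                    String reference = redis.get("ref:" + record.key());
                    String enriched = record.value() + "|" + (reference == null ? "unknown" : reference);
                    producer.send(new ProducerRecord<>("enriched-trades", record.key(), enriched));
                }
            }
        }
    }
}
```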
🤝 Collaboration & Agile
“I’ve worked closely with QA, product, and other developers in Agile teams – regularly participating in sprint planning, retrospectives, and demos.”
“Used GitHub, Jira, Jenkins, and CI/CD pipelines to ensure smooth deployments and collaborative coding.”
🌍 Why PGIM Fixed Income
“I’m drawn to PGIM’s reputation for managing high-quality fixed income portfolios and their push toward modernizing technology stacks.”
“The blend of financial services and cutting-edge software engineering at PGIM aligns with my career goals – delivering value while building scalable systems.”
0 notes
Text
Karthik Ranganathan, Co-Founder and Co-CEO of Yugabyte – Interview Series
New Post has been published on https://thedigitalinsider.com/karthik-ranganathan-co-founder-and-co-ceo-of-yugabyte-interview-series/
Karthik Ranganathan is co-founder and co-CEO of Yugabyte, the company behind YugabyteDB, the open-source, high-performance distributed PostgreSQL database. Karthik is a seasoned data expert and former Facebook engineer who founded Yugabyte alongside two of his Facebook colleagues to revolutionize distributed databases.
What inspired you to co-found Yugabyte, and what gaps in the market did you see that led you to create YugabyteDB?
My co-founders, Kannan Muthukkaruppan, Mikhail Bautin, and I, founded Yugabyte in 2016. As former engineers at Meta (then called Facebook), we helped build popular databases including Apache Cassandra, HBase, and RocksDB – as well as running some of these databases as managed services for internal workloads.
We created YugabyteDB because we saw a gap in the market for cloud-native transactional databases for business-critical applications. We built YugabyteDB to cater to the needs of organizations transitioning from on-premises to cloud-native operations and combined the strengths of non-relational databases with the scalability and resilience of cloud-native architectures. While building Cassandra and HBase at Facebook (which was instrumental in addressing Facebook’s significant scaling needs), we saw the rise of microservices, containerization, high availability, geographic distribution, and Application Programming Interfaces (API). We also recognized the impact that open-source technologies have in advancing the industry.
People often think of the transactional database market as crowded. While this has traditionally been true, today Postgres has become the default API for cloud-native transactional databases. Increasingly, cloud-native databases are choosing to support the Postgres protocol, which has been ingrained into the fabric of YugabyteDB, making it the most Postgres-compatible database on the market. YugabyteDB retains the power and familiarity of PostgreSQL while evolving it to an enterprise-grade distributed database suitable for modern cloud-native applications. YugabyteDB allows enterprises to efficiently build and scale systems using familiar SQL models.
How did your experiences at Facebook influence your vision for the company?
In 2007, I was considering whether to join a small but growing company–Facebook. At the time, the site had about 30 to 40 million users. I thought it might double in size, but I couldn’t have been more wrong! During my over five years at Facebook, the user base grew to 2 billion. What attracted me to the company was its culture of innovation and boldness, encouraging people to “fail fast” to catalyze innovation.
Facebook grew so large that the technical and intellectual challenges I craved were no longer present. For many years I had aspired to start my own company and tackle problems facing the common user–this led me to co-create Yugabyte.
Our mission is to simplify cloud-native applications, focusing on three essential features crucial for modern development:
First, applications must be continuously available, ensuring uptime regardless of backups or failures, especially when running on commodity hardware in the cloud.
Second, the ability to scale on demand is crucial, allowing developers to build and release quickly without the delay of ordering hardware.
Third, with numerous data centers now easily accessible, replicating data across regions becomes vital for reliability and performance.
These three elements empower developers by providing the agility and freedom they need to innovate, without being constrained by infrastructure limitations.
Could you share the journey from Yugabyte’s inception in 2016 to its current status as a leader in distributed SQL databases? What were some key milestones?
At Facebook, I often talked with developers who needed specific features, like secondary indexes on SQL databases or occasional multi-node transactions. Unfortunately, the answer was usually “no,” because existing systems weren’t designed for those requirements.
Today, we are experiencing a shift towards cloud-native transactional applications that need to address scale and availability. Traditional databases simply can’t meet these needs. Modern businesses require relational databases that operate in the cloud and offer the three essential features: high availability, scalability, and geographic distribution, while still supporting SQL capabilities. These are the pillars on which we built YugabyteDB and the database challenges we’re focused on solving.
In February 2016, the founders began developing YugabyteDB, a global-scale distributed SQL database designed for cloud-native transactional applications. In July 2019, we made an unprecedented announcement and released our previously commercial features as open source. This reaffirmed our commitment to open-source principles and officially launched YugabyteDB as a fully open-source relational database management system (RDBMS) under an Apache 2.0 license.
The latest version of YugabyteDB (unveiled in September) features enhanced Postgres compatibility. It includes an Adaptive Cost-Based Optimizer (CBO) that optimizes query plans for large-scale, multi-region applications, and Smart Data Distribution that automatically determines whether to store tables together for lower latency, or to shard and distribute data for greater scalability. These enhancements allow developers to run their PostgreSQL applications on YugabyteDB efficiently and scale without the need for trade-offs or complex migrations.
YugabyteDB is known for its compatibility with PostgreSQL and its Cassandra-inspired API. How does this multi-API approach benefit developers and enterprises?
YugabyteDB’s multi-API approach benefits developers and enterprises by combining the strengths of a high-performance SQL database with the flexibility needed for global, internet-scale applications.
It supports scale-out RDBMS and high-volume Online Transaction Processing (OLTP) workloads, while maintaining low query latency and exceptional resilience. Compatibility with PostgreSQL allows for seamless lift-and-shift modernization of existing Postgres applications, requiring minimal changes.
In the latest version of the distributed database platform, released in September 2024, features like the Adaptive CBO and Smart Data Distribution enhance performance by optimizing query plans and automatically managing data placement. This allows developers to achieve low latency and high scalability without compromise, making YugabyteDB ideal for rapidly growing, cloud-native applications that require reliable data management.
AI is increasingly being integrated into database systems. How is Yugabyte leveraging AI to enhance the performance, scalability, and security of its SQL systems?
We are leveraging AI to enhance our distributed SQL database by addressing performance and migration challenges. Our upcoming Performance Copilot, an enhancement to our Performance Advisor, will simplify troubleshooting by analyzing query patterns, detecting anomalies, and providing real-time recommendations to troubleshoot database performance issues.
We are also integrating AI into YugabyteDB Voyager, our database migration tool that simplifies migrations from PostgreSQL, MySQL, Oracle, and other cloud databases to YugabyteDB. We aim to streamline transitions from legacy systems by automating schema conversion, SQL translation, and data transformation, with proactive compatibility checks. These innovations focus on making YugabyteDB smarter, more efficient, and easier for modern, distributed applications to use.
What are the key advantages of using an open-source SQL system like YugabyteDB in cloud-native applications compared to traditional proprietary databases?
Transparency, flexibility, and robust community support are key advantages when using an open-source SQL system like YugabyteDB in cloud-native applications. When we launched YugabyteDB, we recognized the skepticism surrounding open-source models. We engaged with users, who expressed a strong preference for a fully open database to trust with their critical data.
We initially ran on an open-core model, but rapidly realized it needed to be a completely open solution. Developers increasingly turn to PostgreSQL as a logical Oracle alternative, but PostgreSQL was not built for dynamic cloud platforms. YugabyteDB fills this gap by supporting PostgreSQL’s feature depth for modern cloud infrastructures. By being 100% open source, we remove roadblocks to adoption.
This makes us very attractive to developers building business-critical applications and to operations engineers running them on cloud-native platforms. Our focus is on creating a database that is not only open, but also easy to use and compatible with PostgreSQL, which remains a developer favorite due to its mature feature set and powerful extensions.
The demand for scalable and adaptable SQL solutions is growing. What trends are you observing in the enterprise database market, and how is Yugabyte positioned to meet these demands?
Larger scale in enterprise databases often leads to increased failure rates, especially as organizations deal with expanded footprints and higher data volumes. Key trends shaping the database landscape include the adoption of DBaaS, and a shift back from public cloud to private cloud environments. Additionally, the integration of generative AI brings opportunities and challenges, requiring automation and performance optimization to manage the growing data load.
Organizations are increasingly turning to DBaaS to streamline operations, despite initial concerns about control and security. This approach improves efficiency across various infrastructures, while the focus on private cloud solutions helps businesses reduce costs and enhance scalability for their workloads.
YugabyteDB addresses these evolving demands by combining the strengths of relational databases with the scalability of cloud-native architectures. Features like Smart Data Distribution and an Adaptive CBO, enhance performance and support a large number of database objects. This makes it a competitive choice for running a wide range of applications.
Furthermore, YugabyteDB allows enterprises to migrate their PostgreSQL applications while maintaining similar performance levels, crucial for modern workloads. Our commitment to open-source development encourages community involvement and provides flexibility for customers who want to avoid vendor lock-in.
With the rise of edge computing and IoT, how does YugabyteDB address the challenges posed by these technologies, particularly regarding data distribution and latency?
YugabyteDB’s distributed SQL architecture is designed to meet the challenges posed by the rise of edge computing and IoT by providing a scalable and resilient data layer that can operate seamlessly in both cloud and edge contexts. Its ability to automatically shard and replicate data ensures efficient distribution, enabling quick access and real-time processing. This minimizes latency, allowing applications to respond swiftly to user interactions and data changes.
By offering the flexibility to adapt configurations based on specific application requirements, YugabyteDB ensures that enterprises can effectively manage their data needs as they evolve in an increasingly decentralized landscape.
As Co-CEO, how do you balance the dual roles of leading technological innovation and managing company growth?
Our company aims to simplify cloud-native applications, compelling me to stay on top of technology trends, such as generative AI and context switches. Following innovation demands curiosity, a desire to make an impact, and a commitment to continuous learning.
Balancing technological innovation and company growth is fundamentally about scaling–whether it’s scaling systems or scaling impact. In distributed databases, we focus on building technologies that scale performance, handle massive workloads, and ensure high availability across a global infrastructure. Similarly, scaling Yugabyte means growing our customer base, enhancing community engagement, and expanding our ecosystem–while maintaining operational excellence.
All this requires a disciplined approach to performance and efficiency.
Technically, we optimize query execution, reduce latency, and improve system throughput; organizationally, we streamline processes, scale teams, and enhance cross-functional collaboration. In both cases, success comes from empowering teams with the right tools, insights, and processes to make smart, data-driven decisions.
How do you see the role of distributed SQL databases evolving in the next 5-10 years, particularly in the context of AI and machine learning?
In the next few years, distributed SQL databases will evolve to handle complex data analysis, enabling users to make predictions and detect anomalies with minimal technical expertise. There is an immense amount of database specialization in the context of AI and machine learning, but that is not sustainable. Databases will need to evolve to meet the demands of AI. This is why we’re iterating and enhancing capabilities on top of pgvector, ensuring developers can use Yugabyte for their AI database needs.
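For readers unfamiliar with pgvector, the kind of query it enables looks roughly like the following; this is a generic Postgres-style sketch with an invented table and toy vectors, not YugabyteDB-specific code:

```sql
-- Assumes the pgvector extension is available in the target database.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        BIGSERIAL PRIMARY KEY,
    body      TEXT,
    embedding VECTOR(3)  -- real embeddings usually have hundreds of dimensions
);

INSERT INTO documents (body, embedding) VALUES
    ('invoice terms', '[0.1, 0.9, 0.2]'),
    ('trade confirmation', '[0.8, 0.1, 0.3]');

-- Nearest-neighbour search; <-> is pgvector's Euclidean distance operator.
SELECT id, body
FROM documents
ORDER BY embedding <-> '[0.7, 0.2, 0.3]'
LIMIT 5;
```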
Additionally, we can expect an ongoing commitment to open source in AI development. Five years ago, we made YugabyteDB fully open source under the Apache 2.0 license, reinforcing our dedication to an open-source framework and proactively building our open-source community.
Thank you for all of your detailed responses. Readers who wish to learn more should visit YugabyteDB.
#2024#adoption#ai#AI development#Analysis#anomalies#Apache#Apache 2.0 license#API#applications#approach#architecture#automation#backups#billion#Building#Business#CEO#Cloud#cloud solutions#Cloud-Native#Collaboration#Community#compromise#computing#containerization#continuous#curiosity#data#data analysis
0 notes
Text
SDE 3 - Backend (Node.js)
Job Title: SDE 3 – Backend (Node.js)
Location: Noida (open to Delhi/Gurgaon candidates)
Experience: 6-9 years
Skills required: Node.js, MongoDB/PostgreSQL, microservices, REST APIs
Individual contributor role
Job Overview
We are seeking an SDE 3 (Backend) for a blockchain product company in Noida. This role involves spearheading backend development efforts using Node.js and mentoring a team to…
0 notes
Text
Java Backend Developer [m/f/d] [Hiring] [Remote] [Germany]
On behalf of our client, we are looking for suitable candidates who are ready to take their career to the next level.
Start your new journey with Die Tech Recruiter! Apply
Share this job
https://dietechrecruiter.de/jobs/java-backend-entwickler-vac-2125/
Department: Development
Location: Remote / Hamburg / Europe
Salary: €75,000 - €85,000
Position
Java Backend Developer [m/f/d]
Contract type
Permanent position
Reference
VAC-2125
Job Description
On behalf of our partner, an innovative software development company, we are looking for a Java Backend Developer for the area of energy consumption and intra-year consumption information (UVI). Become part of a team of product and IT experts working to make the energy consumption of buildings transparent for residents, meet statutory reporting obligations, and increase the reporting frequency.
Your Mission
Together with other software developers and a Product Owner, your mission is to make the energy consumption of buildings transparent for residents. This includes the legally mandated biweekly reporting, with the goal of increasing the end-to-end frequency.
Close collaboration with various teams to ensure seamless integration of the products you develop into the ecosystem.
Focus on Java backend development.
Participation in all phases of the software development process: requirements analysis, design, development, testing, deployment, maintenance, operations, and continuous improvement based on user feedback and usage metrics.
About the Team
Consists of 7 members, including a Product Owner, a UI/UX designer, developers, and a Scrum Master.
Uses a modern tech stack: Java 21, Spring Boot, Kubernetes, Kafka, Postgres, GitLab CI.
Organizes its work with Kanban, holds regular in-person retrospectives, and practices pair programming.
Your Skillset
Solid knowledge and command of our tech stack. You can work independently with the technologies and use the frameworks to suit your needs.
At least 4 years of professional experience as a software developer.
Motivated by delivering end results and customer features, with a focus on simple solutions and the use of technology as a tool.
Hands-on experience in agile development teams.
German language skills at C2 level (for collaboration with German-speaking colleagues).
English language skills at B2 level, C1 preferred (for communication in the international team).
Willingness to travel to our office in Hamburg once a month.
Curiosity and the ability to learn quickly.
Motivation to iteratively improve organizational processes.
A problem-solving mindset.
Desirable Skills
Knowledge of web development with a modern frontend framework and TypeScript.
Bachelor's or Master's degree in computer science or a related field.
Experience with test-driven development.
Experience with microservices-based architectures and cloud service providers.
Benefits
Modern hardware and the opportunity to work with cutting-edge technologies.
Nearly 100% remote work, an office in Hamburg, and co-working spaces across Europe for in-person collaboration.
Room and budget for personal growth.
Flexible working hours.
30 days of vacation.
Free drinks in the office.
JobRad bike leasing and corporate benefits.
Employee events to celebrate successes and foster team spirit.
Community of Practice and Knowledge Nuggets to improve your skills.
Ready to grow and shine in a dynamic and supportive environment? Apply now and join the team to shape the future of software development together!
1 note
·
View note
Text
Senior Fullstack engineer - Remote, Worldwide
Company: Sourceter
We are looking for a Senior Full-stack Developer for our client's core team. Our client is a SOC Platform that empowers security teams to automatically identify and respond to security incidents across their entire attack surface. The platform enables vendor-agnostic data ingestion and normalization at a predictable cost. Built-in detection engineering, data correlation, and automatic investigation help teams overcome volume, complexity, and false positives. The product mitigates real threats faster and more reliably than SIEMs, ultimately reducing customers' overall security risk. Enterprises like Booking.com, Snowflake, Mars and Cimpress leverage the client's SOC Platform to empower their security teams. The company is backed by leading VCs and strategic investors including Stripes, YL Ventures, DTCP, Cisco Investments, etc.
Responsibilities:
- Develop a complex expert system for managing, analyzing, and investigating security alerts using React & NodeJS.
- Build advanced, unique user flows and UI components that can handle all possible types of cybersecurity scenarios and attack stories.
- Work with large-scale, big data cloud warehouses and databases (such as Snowflake, Postgres, MongoDB and more) for optimized time-sensitive queries.
- Design and implement new E2E pipelines and components in a microservices environment.
- Come up with beautiful and creative ways to visualize cybersecurity data.
Requirements:
- Minimum 5 years of experience developing real-life large front-end systems
- Proven experience with React - minimum 3 years
- 5 years of experience in Node.js development
- Backend experience using a modern server-side framework (Express, Django, Flask, etc.)
- Experience and solid understanding of production environments in AWS, K8s
- Experience with database architecture, designing and efficiently querying SQL & NoSQL databases
Nice to have skills:
- Experience with Angular, Vue.js
- Experience with Kafka or a pipeline pub-sub architecture
- Experience with Python
We Offer:
- Competitive market salary
- Flexible working hours
- Paid vacations
- Being part of a team of professionals who know how to build world-class products
- A wide range of excellent opportunities for professional and personal growth
APPLY ON THE COMPANY WEBSITE
To get free remote job alerts, please join our telegram channel “Global Job Alerts” or follow us on Twitter for the latest job updates.
Disclaimer:
- This job opening is available on the respective company website as of 8th July 2023. The job openings may have expired by the time you check the post.
- Candidates are requested to study and verify all the job details before applying and contact the respective company representative in case they have any queries.
- The owner of this site has provided all the available information regarding the location of the job, i.e. work from anywhere, work from home, fully remote, remote, etc. However, if you would like any clarification regarding the location of the job or have any further queries or doubts, please contact the respective company representative. Viewers are advised to make all requisite enquiries regarding job location before applying for each job.
- Authentic companies never ask for payments for any job-related processes. Please carry out financial transactions (if any) at your own risk.
- All the information and logos are taken from the respective company website.
0 notes
Text
i would love for 'hand optimized sql' to inspire the same amount of concern and worry as 'hand optimized assembly' would today (at least in an enterprise environment) but alas
#staring at a 500 line diff that's just postgres functions#screaming#i guess this is what happens when you have performance issues with haskell microservices
2 notes
·
View notes
Text
Simplifying “No Data” Applications - DZone Database
What Is a “No Data” Application? There are a ton of data-driven line-of-business applications that will never have a million rows before they get replaced by something else. I call these “no data” applications because they have so little data that the database server will never require many optimizations, if any. The default configurations are…
View On WordPress
0 notes
Text
Slow database? It might not be your fault
<rant>
Okay, it usually is your fault. If you logged the SQL your ORM was generating, or saw how you are doing joins in code, or realised what that indexed UUID does to your insert rate etc you’d probably admit it was all your fault. And the fault of your tooling, of course.
In my experience, most databases are tiny. Tiny tiny. Tables with a few thousand rows. If your web app is slow, it’s going to all be your fault. Stop building something webscale with microservices and just get things done right there in your database instead. Etc.
But, quite often, each company has one or two databases that have at least one or two large tables. Tables with tens of millions of rows. I work on databases with billions of rows. They exist. And that’s the kind of database where your database server is underserving you. There could well be a metric ton of actual performance improvements that your database is leaving on the table. Areas where your database server hasn’t kept up with recent (as in the past 20 years) of regular improvements in how programs can work with the kernel, for example.
Over the years I’ve read some really promising papers that have speeded up databases. But as far as I can tell, nothing ever happens. What is going on?
For example, your database might be slow just because its making a lot of syscalls. Back in 2010, experiments with syscall batching improved MySQL performance by 40% (and lots of other regular software by similar or better amounts!). That was long before spectre patches made the costs of syscalls even higher.
So where are our batched syscalls? I can’t see a downside to them. Why isn’t Linux offering them and glibc using them, and everyone benefiting from them? It’ll probably speed up your IDE and browser too.
Of course, your database might be slow just because you are using default settings. The historic defaults for MySQL were horrid. Pretty much the first thing any innodb user had to do was go increase the size of buffers and pools and various incantations they find by googling. I haven’t investigated, but I’d guess that a lot of the performance claims I’ve heard about innodb on MySQL 8 is probably just sensible modern defaults.
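To illustrate the kind of tuning people end up googling for, here is a hedged MySQL example; the buffer pool size is a placeholder rather than a recommendation, and resizing it online requires MySQL 5.7 or newer:

```sql
-- Check the shipped default, then resize the InnoDB buffer pool at runtime.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- 8 GB, illustrative only
```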
I would hold tokudb up as being much better at the defaults. That took over half your RAM, and deliberately left the other half to the operating system buffer cache.
That mention of the buffer cache brings me to another area your database could improve. Historically, databases did ‘direct’ IO with the disks, bypassing the operating system. These days, that is a metric ton of complexity for very questionable benefit. Take tokudb again: that used normal buffered read writes to the file system and deliberately left the OS half the available RAM so the file system had somewhere to cache those pages. It didn’t try and reimplement and outsmart the kernel.
This paid off handsomely for tokudb because they combined it with absolutely great compression. It completely blows the two kinds of innodb compression right out of the water. Well, in my tests, tokudb completely blows innodb right out of the water, but then teams who adopted it had to live with its incomplete implementation e.g. minimal support for foreign keys. Things that have nothing to do with the storage, and only to do with how much integration boilerplate they wrote or didn’t write. (tokudb is being end-of-lifed by percona; don’t use it for a new project 😞)
However, even tokudb didn’t take the next step: they didn’t go to async IO. I’ve poked around with async IO, both for networking and the file system, and found it to be a major improvement. Think how quickly you could walk some tables by asking for pages breadth-first and digging deeper as soon as the OS gets something back, rather than going through it depth-first and blocking, waiting for the next page to come back before you can proceed.
I’ve gone on enough about tokudb, which I admit I use extensively. Tokutek went the patent route (no, it didn’t pay off for them) and Google released leveldb and Facebook adapted leveldb to become the MySQL MyRocks engine. That’s all history now.
In the actual storage engines themselves there have been lots of advances. Fractal Trees came along, then there was a SSTable+LSM renaissance, and just this week I heard about a fascinating paper on B+ + LSM beating SSTable+LSM. A user called Jules commented, wondered about B-epsilon trees instead of B+, and that got my brain going too. There are lots of things you can imagine an LSM tree using instead of SSTable at each level.
But how invested is MyRocks in SSTable? And will MyRocks ever close the performance gap between it and tokudb on the kind of workloads they are both good at?
Of course, what about Postgres? TimescaleDB is a really interesting fork based on Postgres that has a ‘hypertable’ approach under the hood, with a table made from a collection of smaller, individually compressed tables. In so many ways it sounds like tokudb, but with some extra finesse like storing the min/max values for columns in a segment uncompressed so the engine can check some constraints and often skip uncompressing a segment.
TimescaleDB is interesting because it’s kind of merging the classic OLAP column-store with the classic OLTP row-store. I want to know if TimescaleDB’s hypertable compression works for things that aren’t time-series too? I’m thinking ‘if we claim our invoice line items are time-series data…’
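For readers who haven’t tried it, the workflow being described looks roughly like this; a hedged sketch using TimescaleDB’s documented functions, with a made-up invoice table standing in for “time-series-ish” data:

```sql
CREATE TABLE line_items (
    created_at  TIMESTAMPTZ NOT NULL,
    invoice_id  BIGINT,
    amount      NUMERIC
);

-- Turn the plain table into a hypertable partitioned by time.
SELECT create_hypertable('line_items', 'created_at');

-- Enable native compression, segmenting by invoice_id so per-invoice scans stay cheap.
ALTER TABLE line_items SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'invoice_id'
);

-- Automatically compress chunks once they are older than seven days.
SELECT add_compression_policy('line_items', INTERVAL '7 days');
```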
Compression in Postgres is a sore subject, as is out-of-tree storage engines generally. Saying the file system should do compression means nobody has big data in Postgres because which stable file system supports decent compression? Postgres really needs to have built-in compression and really needs to go embrace the storage engines approach rather than keeping all the cool new stuff as second class citizens.
Of course, I fight the query planner all the time. If, for example, you have a table partitioned by day and your query is for a time span that spans two or more partitions, then you probably get much faster results if you split that into n queries, each for a corresponding partition, and glue the results together client-side! There was even a proxy called ShardQuery that did that. It’s crazy. When people are making proxies in PHP to rewrite queries like that, it means the database itself is leaving a massive amount of performance on the table.
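In other words, something like the following, sketched against hypothetical daily partitions (table and partition names are invented):

```sql
-- The single query the planner handles poorly:
SELECT * FROM events WHERE ts >= '2023-01-01' AND ts < '2023-01-03';

-- The manual rewrite: one query per daily partition, merged client-side or via UNION ALL.
SELECT * FROM events_2023_01_01 WHERE ts >= '2023-01-01' AND ts < '2023-01-02'
UNION ALL
SELECT * FROM events_2023_01_02 WHERE ts >= '2023-01-02' AND ts < '2023-01-03';
```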
And of course, the client library you use to access the database can come in for a lot of blame too. For example, when I profile my queries where I have lots of parameters, I find that the mysql jdbc drivers are generating a metric ton of garbage in their safe-string-split approach to prepared-query interpolation. It shouldn’t be that my insert rate doubles when I do my hand-rolled string concatenation approach. Oracle, stop generating garbage!
This doesn’t begin to touch on the fancy cloud service you are using to host your DB. You’ll probably find that your laptop outperforms your average cloud DB server. Between all the spectre patches (I really don’t want you to forget about the syscall-batching possibilities!) and how you have to mess around buying disk space to get IOPs and all kinds of nonsense, it’s likely that you really would be better off performance-wise by leaving your dev laptop in a cabinet somewhere.
Crikey, what a lot of complaining! But if you hear about some promising progress in speeding up databases, remember it's not realistic to hope the databases you use will ever see any kind of benefit from it. The sad truth is, your database is still stuck in the 90s. Async IO? Huh no. Compression? Yeah right. Syscalls? Okay, that’s a Linux failing, but still!
Right now my hopes are on TimescaleDB. I want to see how it copes with billions of rows of something that aren’t technically time-series. That hybrid row and column approach just sounds so enticing.
Oh, and hopefully MyRocks2 might find something even better than SSTable for each tier?
But in the meantime, hopefully someone working on the Linux kernel will rediscover the batched syscalls idea…? ;)
2 notes
·
View notes
Text
youtube
#nest js#nestjs framework#nestjs project#nestjs development#nestjs#typescript development#typescript class#typescript tutorial#typescript#youtube#video#codeonedigest#microservices#microservice#typeorm#orm#object relation mapping#postgres tutorial#postgres database#postgresql#postgres#Youtube
0 notes
Text
Egybell is hiring a Java developer for a multinational company.
Skills and Qualifications
· 2+ years of software development experience with strong Java/JEE/Spring Boot/Vert.x/Quarkus development frameworks.
· BSc in Computer Science, Engineering or a relevant field
· Experience as a DevOps or Software Engineer or a similar software engineering role
· Proficient with git and git workflows
· Good knowledge of Linux, Java, and JSF, Angular or another front-end technology
· Demonstrated implementation of microservices, container and cloud-native application development.
· Hands-on experience with Docker, Kubernetes or OpenShift and related container platform ecosystems.
· Experience with two or more database technologies such as Oracle, MySQL or Postgres, MongoDB.
If you are interested, apply with your CV.
0 notes
Text
Benefits of Formstack Rather Than Jotform
At Salesforce, we face a couple of unique challenges around understanding our customers. For one thing, many (if not most) of our customers have customized their Salesforce instance: every customer sees a different version of our product. We also have a broad, complex, and fragmented customer base: our customers use Salesforce across many industries, regions, and market segments.
So how do we make meaningful judgments about our customers, at scale? How do we surface insights that are both accurate (grounded in real data) and useful (genuinely helpful for lots of teams)? How do we better understand our customers without investing so much time and effort that our findings are unusable and out of date? We’ve landed on data-driven personas as a practical, research-led way to better understand our customers. Customer personas based on data give our designers, product managers, doc writers, subject matter experts, and executives a common shorthand for our customers that is both easy to use and grounded in current data about our customer base. Here is how we do it.
Heroku Connect makes it easy for you to build Heroku apps that share data with your Salesforce deployment. Using bi-directional synchronization between Salesforce and Heroku Postgres, Heroku Connect unifies the data in your Postgres database with the contacts, accounts and other custom objects in the Salesforce database. Easily configured with a point-and-click UI, it’s simple to set up within minutes — no coding or complex configuration is required. Check out Salesforce consulting companies in the USA.
Build applications that span Heroku and Salesforce
With your Salesforce data in Heroku Postgres, you can easily combine the capabilities of the Lightning Platform and Heroku. Applications built using standard open-source stacks, like Rails, Node.js and Python, connect natively to Postgres — and through integration services, directly back to Salesforce. Building applications that extend your data and processes directly to your customers is now about as simple as writing SQL.
Reasons for Integrating Salesforce and Heroku
Modern enterprise systems are composed of a great many parts, with dedicated points of interaction for different kinds of users. These points of interaction often combine data from separate data sources. The microservices architecture has emerged as an approach for decoupling the components of a system into smaller, independently deployable services that offer endpoints to be part of larger systems. Heroku integration with Salesforce gives you a place to run applications and microservices that you can use with Salesforce through a range of integration methods.
Four common reasons to integrate applications on Heroku with Salesforce are:
1. Data replication
2. Data proxies
3. Custom UIs
4. External processes
We examine each one in more detail later in the module.
Artificial Intelligence
Artificial intelligence has captured the attention of people all over the world. In the business world, however, the pace of adoption of artificial intelligence lagged behind the level of interest through 2019. Even though we hear that most business leaders believe AI provides a business advantage, until recently, some enterprise watchers put enterprise adoption at below 20%.
However, entering 2020, we are seeing an increase both in interest and in AI adoption. That increase is reaffirmed by another global survey commissioned by IBM. The study, From Roadblock to Scale: The Global Sprint Towards AI, surveyed more than 4,500 technology decision makers. The goal was to clearly assess the current and future state of AI deployment across the U.S., Europe and China, to better understand the landscape and the challenges. As you will see, it is definitely changing.
How does Salesforce answer?
Salesforce offers a diverse suite of software products designed to help teams from different functions — including marketing, sales, IT, commerce and customer service — connect with their customers. For example, by using the Salesforce Customer 360 application, teams across an entire organization can share a single view of customer data on an integrated platform.
Salesforce provides real-time insights into customer behavior and needs through customer data analysis. By bridging the gaps between data silos from different departments, Salesforce gives a comprehensive view of every customer’s relationship with a brand.
0 notes
Text
The journey into the land of microservices, which most people agree is paved with the warmth of glory, can be hard to get through. As the philosophies, tools, skills and technologies continue to mature, the path to the “land of glory” becomes clearer and clearer as the thicket lightens and the light shines the way. To add another candle to the already bright illumination, we take this chance to compare the k0s, k3s and MicroK8s Kubernetes distributions, which continue to inspire more adoption of Kubernetes and microservices in organizations as well as personal projects. Anyone who has been keen on learning Kubernetes and all matters microservices can attest that they have come across those three names before, and in case they caused whirlpools of dubiety, this article attempts to clear the smog. We are going to compare k0s, k3s and MicroK8s by looking at what they are offering and what they are about in general. So pick your favorite beverage and keep sipping it as we roll down this hill step by step. Let us first define them and then take a close look at what their pockets are carrying. Here we go.
k0s Kubernetes Distribution
From its main website, k0s is the simple, solid & certified Kubernetes distribution that works on any infrastructure: public & private clouds, on-premises, edge & hybrid. It’s 100% open source & free. The zero in its name comes from the point that developer friction is reduced to zero, allowing anyone, with no special skills or expertise in Kubernetes, to easily get started.
k3s Kubernetes Distribution
Adapted from Rancher, K3s is an official CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution designed for production workloads across resource-constrained, remote locations or on IoT devices. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. With that, the 3 in the name is explained.
MicroK8s Kubernetes Distribution
From the MicroK8s main docs page, MicroK8s is the smallest, fastest, fully-conformant Kubernetes that tracks upstream releases and makes clustering trivial. MicroK8s is great for offline development, prototyping, and testing. Use it on a VM as a small, cheap, reliable k8s for CI/CD. It is also the best production-grade Kubernetes for appliances. Develop IoT apps for k8s and deploy them to MicroK8s on your Linux boxes.
k0s vs k3s vs microk8s – Detailed Comparison Table
For ease of reading, we decided that a table would be good for easier comparison. Let us take a look at what they have to offer below.

| Feature | k0s | k3s | MicroK8s |
| --- | --- | --- | --- |
| Licensing | Completely open source | Completely open source | Completely open source |
| Packaging | Distributed as a single binary with minimal host OS dependencies besides the host OS kernel | Packaged as a single binary | Delivered as a single snap package |
| Kubernetes Versions | v1.20 and v1.21 | Latest release updates Kubernetes to v1.22.1 | v1.22, v1.21 |
| Container Runtime | containerd (default) | containerd (default) | containerd |
| Supported Host OS | Linux (kernel v3.10 or newer), Windows Server 2019 (experimental) | Expected to work on most modern Linux systems | Windows 10, Linux, macOS |
| Control Plane Storage Options | In-cluster elastic etcd with TLS (default), in-cluster SQLite (default for single node), external PostgreSQL, external MySQL | sqlite3 by default; etcd3, MySQL and Postgres also available | High availability using Dqlite as the datastore for cluster state |
| Built-In Security Features | RBAC, OpenID provider support, Pod Security Policies, Network Policies, micro VM runtimes (coming soon), control plane isolation | Secure by default with reasonable defaults for lightweight environments | Secure by default with reasonable defaults for lightweight environments |
| Supported CNI Providers | Kube-Router (default), Calico, or custom | Flannel by default, using VXLAN as the backend; custom supported as well | Flanneld if ha-cluster is not enabled, Calico if it is |
| Supported Machine Architectures | x86-64, ARM64, ARMv7 | x86_64, ARMv7, ARM64 | x86_64, ARMv7, ARM64 |
| Backing Company | Mirantis | Rancher | Canonical |
| Addons | Minimal | Traefik, Helm, LB | Dashboard, Ingress, DNS, and more |
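If you want to kick the tires before reading on, each project documents a one-line quick start. The commands below follow those docs as of this writing; verify them against the current documentation before running anything.

```bash
# k0s: single-node controller that also runs workloads
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single && sudo k0s start

# k3s
curl -sfL https://get.k3s.io | sh -

# MicroK8s
sudo snap install microk8s --classic
```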
0 notes