#oplog
okama-kaizoku · 4 years
APPROACHING THE FINAL HOURS TO VOTE!!!!
annoyinglylefttiger · 7 years
MongoDB : History tracking micro service
In MongoDB, the local database's oplog.$main collection (oplog.rs on modern replica sets) keeps a record of every change that happens on the primary; secondaries eventually read it to catch up or stay in sync within the replica set.
A microservice can be written to pull the data delta from that oplog based on the namespace (a given database and collection) and save it to a destination or audit database.
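A minimal sketch of that tailing idea in Node.js with the official mongodb driver: the connection string, namespace filter, and audit collection names are placeholders, and the sketch assumes a replica set whose oplog lives in local.oplog.rs.

```js
const { MongoClient } = require('mongodb');

async function tailOplog() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const oplog = client.db('local').collection('oplog.rs');        // oplog.$main on old master/slave setups
  const audit = client.db('auditdb').collection('audit_events');  // hypothetical destination/audit collection

  // Tailable cursor filtered by namespace: keeps returning new entries as they are written
  const cursor = oplog.find(
    { ns: 'mydb.orders' },
    { tailable: true, awaitData: true }
  );

  for await (const entry of cursor) {
    // entry.op is the operation type (i/u/d), entry.o holds the document or delta
    await audit.insertOne(entry);
  }
}
```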
Later…
mastomysowner · 3 years
From OPLOG② by 久住
technologictr · 7 years
Providing a cost advantage in logistics with the cloud
Controlling the logistics sector, where many processes such as shipping, warehousing, distribution, vehicle tracking, and finance are managed at the same time, has become easier with the advantages that cloud technology provides. OPLOG, Turkey's first technology-based logistics company, integrates cloud technology into the sector in a compatible way, enabling companies to operate more securely in their operations.
IT in the logistics sector…
cejna · 2 years
Domestic warehouse robot TARQAN goes global
TARQAN, a warehouse robot developed by the Turkish technology company OPLOG, is expanding abroad. OPLOG CEO Halit Develioğlu said that with TARQAN, which they are preparing to sell internationally, they aim to contribute to Turkey's exports of high value-added technology. TARQAN was developed for fulfillment work in e-commerce warehouses. Preparing, packing, and…
karonbill · 4 years
MongoDB Certified DBA Associate C100DBA Exam Questions
If you are having trouble passing the C100DBA MongoDB Certified DBA Associate exam, PassQuestion has the right solution to help you pass with confidence. PassQuestion MongoDB Certified DBA Associate C100DBA Exam Questions can help you successfully obtain your MongoDB Certified DBA Associate certification. We provide you with the best MongoDB Certified DBA Associate C100DBA Exam Questions, covering the topics of the C100DBA certification exam and presented in different ways so you can easily understand the content of the exam.
MongoDB Certified DBA Associate Level Exam is for administrators with knowledge of the concepts and mechanics of MongoDB. We recommend this certification for operations professionals who know the fundamentals and have some professional experience administering MongoDB.
Exam Format & Grading:
MongoDB certification exams are delivered online using a web proctoring solution. You have 90 minutes to complete an exam. Exam question types are multiple choice and check all that apply. There is no penalty for incorrect answers.
What are the Benefits?
Increase the Value of Your Skills: MongoDB ranks as one of the hottest job trends and one of the most highly compensated technology skills.
Get Hired or Promoted: Increase your visibility among hiring managers and recruiters.
Demonstrate Professional Credibility: Foster credibility with your employer and peers in the developer and operations communities.
Required MongoDB Knowledge
Philosophy & Features: performance, JSON, BSON, fault tolerance, disaster recovery, horizontal scaling, and the Mongo shell
CRUD: Create, Read, Update, and Delete operations
Indexing: single key, compound, multi-key, mechanics, and performance
Replication: configuration, oplog concepts, write concern, elections, failover, and deployment to multiple data centers
Sharding: components, when to shard, balancing, shard keys, and hashed shard keys
Application Administration: data files, journaling, authentication, and authorization
Server Administration: performance analysis, storage engines, diagnostics and debugging, maintenance, backup, and recovery
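As a hands-on companion to the replication topics above, here are a few mongo shell commands for poking at a replica set and its oplog. This is a rough sketch only; the orders collection is made up, and the commands must be run against a replica set member.

```js
// Replica set status: which member is primary, election state, member health
rs.status()

// Oplog size and the time window it currently covers
rs.printReplicationInfo()

// Peek at the most recent oplog entry on this member
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1)

// Write with an explicit write concern: wait for a majority of members to acknowledge
db.orders.insertOne({ sku: "abc", qty: 1 }, { writeConcern: { w: "majority", wtimeout: 5000 } })
```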
Required General IT Knowledge
Fundamental database concepts
Fundamentals of system programming
Basic JavaScript programming
View Online MongoDB Certified DBA Associate Exam C100DBA Free Questions
1. Which of the following nodes is used during an election in a replication cluster?
A. primary
B. arbiter
C. hidden
D. secondary
Answer: B

2. The ________ operator can be used to identify an element in the array to be updated without explicitly specifying the position of the element.
A. Updating an array field without knowing its index is not possible.
B. $elemMatch
C. $slice
D. $
Answer: D

3. Which option should be used to update all the documents with the specified condition in the MongoDB query?
A. updateAll instead of update
B. specify {all: true} as the third parameter of the update command
C. specify {updateAll: true} as the third parameter of the update command
D. specify {multi: true} as the third parameter of the update command
Answer: D

4. What tool would you use if you want to save a gif file in mongo?
Answer: mongofiles

5. What does the totalKeysExamined field returned by the explain method indicate?
A. Number of documents that match the query condition
B. Number of index entries scanned
C. Number of documents scanned
D. Details the completed execution of the winning plan as a tree of stages
Answer: B
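To make a few of those answers concrete, here is a quick mongo shell sketch of the operators from questions 2 and 3 and the explain output from question 5. The students collection and its fields are invented for illustration.

```js
// Q2: the $ positional operator updates the matched array element without knowing its index
db.students.updateOne(
  { _id: 1, grades: 80 },
  { $set: { "grades.$": 82 } }
)

// Q3: { multi: true } makes update() modify every matching document
// (updateMany() is the more modern equivalent)
db.students.update({ active: false }, { $set: { archived: true } }, { multi: true })

// Q5: totalKeysExamined in executionStats is the number of index entries scanned
db.students.find({ active: false })
  .explain("executionStats")
  .executionStats.totalKeysExamined
```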
prevajconsultants · 5 years
Change Streams allow apps to stream real-time data changes without the complexity or risk of tailing the oplog. Learn how to use change streams with the MongoDB #Java Driver from Developer Advocate @MBeugnet: https://t.co/WGSIfvcws8
— MongoDB (@MongoDB) March 6, 2020
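The linked tutorial uses the Java driver; as a rough shell-based illustration of the same idea (the inventory collection is a placeholder, and a change stream requires a replica set), watching a collection looks like this:

```js
// Open a change stream instead of tailing the oplog
const changeStream = db.inventory.watch([
  { $match: { operationType: { $in: ["insert", "update", "replace"] } } }
]);

while (changeStream.hasNext()) {
  const event = changeStream.next();
  // event.fullDocument / event.updateDescription describe what changed
  printjson(event);
}
```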
phonoselect · 5 years
Now Playing... Absolute Elsewhere-In Search Of Ancient God’s -1976 ...groovy 70’s Prog w/Bill Bruford on drums. OPEN EVERYDAY 12-6pm Call the shop if you have any questions. 916-400-3164 @phonoselect #recordstore #usedrecordstore #vinylrecords #usedrecords #phonoselect #phonoselectrecords #sacramento #recordstoresacramento #recordstorenorcal #absoluteelsewhere #erichvondaniken #billbruford (at Phono Select Records) https://www.instagram.com/p/B2uad-opLOg/?igshid=wm122nubopjj
sololinuxes · 5 years
Install a Rocket.Chat server on Ubuntu
Install a Rocket.Chat server on Ubuntu and derivatives. Rocket.Chat is an excellent open-source web-chat server and is currently the best self-hosted alternative to Slack. It offers plenty of options, for example chat, video, voice calls, file sharing, and a great help system. Some of the best features: a real-time translation system; incoming and outgoing WebHook integrations; live chat / call center / audio calls; very powerful APIs; uploading and sharing files with other users; a web application plus desktop apps for Linux, Windows, and Mac and mobile apps for Android and iOS; remote video monitoring; custom themes, emojis, sounds, and any of your company's assets; etc… For this article we used a server with Ubuntu 18.04 installed.
Instalar un servidor Rocket.Chat
Before installing the Rocket.Chat server, update the system:
apt update && apt upgrade
Rocket.Chat requires a MongoDB database server, version 3.2 or higher. To install the latest version of MongoDB, add the corresponding repository:
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
Install MongoDB:
apt update && apt install -y mongodb-org curl graphicsmagick
Now create a unit file so that MongoDB starts as a service:
nano /etc/systemd/system/mongodb.service
Copy and paste the following into the file:
[Unit]
Description=High-performance, schema-free document-oriented database server
After=network.target
[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
Save the file and close the nano editor. Reload the daemons:
systemctl daemon-reload
Start MongoDB and enable it at boot:
systemctl start mongodb
systemctl enable mongodb
Good, the database server is now running, so we continue. Rocket.Chat requires Node.js (version 8.11.3 is recommended) and the npm package manager:
apt -y install nodejs
apt install npm build-essential
npm install -g inherits n && n 8.11.3
Add a new user (rocketchat) so that Rocket.Chat does not run as root:
useradd -m -U -r -d /opt/rocketchat rocketchat
Switch to the user we just created:
su - rocketchat
Download the latest stable version of Rocket.Chat:
curl -L https://releases.rocket.chat/latest/download -o rocket.chat.tgz
Unpack the archive:
tar zxvf rocket.chat.tgz
Rename the application folder:
mv bundle rocketchat
Change directory and install the required dependencies:
cd rocketchat/programs/server
npm install
Set the following variables (with your real domain):
cd /opt/rocketchat/rocketchat
export ROOT_URL=http://tudominio.es:3000/
export MONGO_URL=mongodb://localhost:27017/rocketchat
export PORT=3000
At this point everything is ready to start the chat server:
node main.js
If everything went well, you will see output similar to this:
SERVER RUNNING
Rocket.Chat Version: 1.2.1
NodeJS Version: 8.11.3 - x64
Platform: linux
Process Port: 3000
Site URL: http://tudominio.com:3000/
ReplicaSet OpLog: Disabled
Commit Hash: 202a465f1c
Commit Branch: HEAD
To have Rocket.Chat start as a service, follow these steps:
nano /etc/systemd/system/rocketchat.service
Add the following (make sure you insert your own domain name):
[Unit]
Description=RocketChat Server
After=network.target remote-fs.target nss-lookup.target mongod.target
[Service]
ExecStart=/usr/local/bin/node /opt/rocketchat/rocketchat/main.js
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nodejs-example
#User=
#Group=
Environment=NODE_ENV=production PORT=3000 ROOT_URL=http://tudominio.com MONGO_URL=mongodb://localhost:27017/rocketchat
[Install]
WantedBy=multi-user.target
Save the file and close the editor. Reload the daemon:
systemctl daemon-reload
Start Rocket.Chat and enable it at boot:
systemctl enable rocketchat
systemctl start rocketchat
You can now access Rocket.Chat and complete the installation. From your favorite web browser go to the following URL:
http://tudominio.com:3000
The setup wizard will guide you through the configuration, starting with the administrator user. Once everything is installed and configured, I recommend downloading the desktop application to interact with the other users. When you start the desktop application for the first time it will ask for the URL of the chat server you want to connect to. The URL "https://open.rocket.chat" is the Rocket.Chat community chat.
Rocket.Chat start screen: server URL. We create a user.
User registration in Rocket.Chat. Chat demo.
Rocket.Chat demo. For more information on how to use and configure Rocket.Chat, check the official documentation. At Sololinux.es we keep growing thanks to our readers; you can help with the simple gesture of sharing our articles on your website, blog, forum, or social networks. Read the full article
delki8 · 4 years
redis-oplog
redis-oplog is a re-implementation of Meteor's MongoDB oplog tailing. With it, instead of having Meteor read change events from the oplog (operation log) kept in mongo, you publish and consume them through redis, keeping your mongo free to do other work while reactivity is driven by redis.
https://github.com/cult-of-coders/redis-oplog/blob/master/README.md https://github.com/cult-of-coders/redis-oplog/blob/master/docs/how_it_works.md
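Going by the README linked above, enabling it in a Meteor app is roughly the following; treat the package name and settings keys as things to double-check against the current docs rather than a definitive recipe:

```js
// meteor add cultofcoders:redis-oplog
// settings.json, passed with: meteor run --settings settings.json
{
  "redisOplog": {
    "redis": { "host": "127.0.0.1", "port": 6379 }
  }
}
```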
globalmediacampaign · 4 years
Migrating to Amazon DocumentDB with the online method
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. You can use the same MongoDB 3.6 application code, drivers, and tools to run, manage, and scale workloads on Amazon DocumentDB without having to worry about managing the underlying infrastructure. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data.
There are three primary approaches for migrating from MongoDB to Amazon DocumentDB: offline, online, and hybrid. For more information, see Migrating to Amazon DocumentDB. This post discusses how to use the online approach to migrate self-managed MongoDB clusters that are hosted on premises or on Amazon EC2 to Amazon DocumentDB. The online approach minimizes downtime because AWS DMS continually reads from the source MongoDB oplog and applies those changes in near-real time to the target Amazon DocumentDB cluster. For a demo of the online method, see Video: Live migration to Amazon DocumentDB.
The online method is the best option if you want to minimize downtime and your source dataset is small (less than 1 TB). If your dataset is larger than 1 TB, you should use the hybrid or offline approach to take advantage of parallelization and the speed that you can achieve with mongorestore. For more information about migrating with the offline method, see Migrate from MongoDB to Amazon DocumentDB using the offline method. This post shows you how to use the online approach to migrate data from a MongoDB replica set hosted on Amazon EC2 to an Amazon DocumentDB cluster.
Prerequisites
Before you start your migration, complete the following prerequisites:
Verify your source version and configuration
Set up and choose the size of your Amazon DocumentDB cluster
Set up an EC2 instance
Verifying your source version and configuration
If your MongoDB source uses a version of MongoDB earlier than 3.6, you should upgrade your source deployment and your application drivers; they should be compatible with MongoDB 3.6 to migrate to Amazon DocumentDB. You can determine the version of your source deployment by entering the following code in the mongo shell:
mongoToDocumentDBOnlineSet1:PRIMARY> db.version()
3.4.4
Also, verify that the source MongoDB cluster (or instance) is configured as a replica set. You can determine whether a MongoDB cluster is configured as a replica set with the following code:
db.adminCommand( { replSetGetStatus : 1 } )
If the output is an error message similar to "errmsg" : "not running with --replSet", the cluster is not configured as a replica set.
Setting up and sizing your target Amazon DocumentDB cluster
For this post, your target Amazon DocumentDB cluster is a replica set that you create with a single db.r5.large instance. When you size your cluster, choose the instance type that is suitable for your production cluster. For more information about Amazon DocumentDB instances and costs, see Amazon DocumentDB (with MongoDB compatibility) pricing.
Setting up an EC2 instance
To connect to the Amazon DocumentDB cluster to migrate indexes and for other tasks during the migration, create an EC2 instance in the same VPC as your cluster and install the mongo shell. For instructions, see Getting Started with Amazon DocumentDB.
To verify the connection to Amazon DocumentDB, enter the following CLI command:
[ec2]$ mongo --ssl --host docdb-cluster-endpoint --sslCAFile rds-ca-2019-root.pem --username myuser --password mypassword
…
rs0:PRIMARY> db.runCommand('ping')
{ "ok" : 1 }
If you have trouble connecting to either your source instance or Amazon DocumentDB cluster, check the security group configurations for both to make sure that the EC2 instance has permission to connect to each on the correct port (27017 by default). For more information about troubleshooting, see Troubleshooting Amazon DocumentDB.
Amazon DocumentDB uses Transport Layer Security (TLS) encryption by default. To connect over a TLS-encrypted connection, download the certificate authority (CA) file to use with the mongo shell. See the following code:
[ec2 ]$ curl -O https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem
You can also disable TLS. For more information, see Encrypting Data in Transit.
Online migration steps
The following diagram illustrates the five steps of the online migration process. The steps are as follows:
1. Application continues to write to source
2. Dump indexes using the Amazon DocumentDB Index Tool
3. Restore indexes using the Amazon DocumentDB Index Tool
4. Perform full load and replicate data with AWS DMS
5. Change application endpoint to Amazon DocumentDB cluster
Step 1: Application continues writing to source
When you use the online method to migrate to Amazon DocumentDB, your application continues to write to the source MongoDB database. Step 5 discusses ceasing writes to the source database and changing the application to point to the target Amazon DocumentDB cluster.
Step 2: Dumping indexes using the Amazon DocumentDB Index Tool
Before you begin your migration, create the same indexes on your target Amazon DocumentDB cluster that you have on your source MongoDB cluster. Although AWS DMS handles the migration of data, it does not migrate indexes. To migrate the indexes, on the EC2 instance that you created as a prerequisite, use the Amazon DocumentDB Index Tool to export indexes from the MongoDB cluster. You can get the tool by cloning the Amazon DocumentDB tools GitHub repo and following the instructions in README.md.
The following code dumps indexes from your source MongoDB cluster to a directory on your EC2 instance (the sample user names and passwords provided in this post are for illustrative purposes only; you should always choose strong passwords):
python migrationtools/documentdb_index_tool.py --dump-indexes --dir ~/index.js/ --host ec2-user.us-west-2.compute.amazonaws.com --auth-db admin --username user --password password
2020-02-11 21:46:50,432: Successfully authenticated to database: admin
2020-02-11 21:46:50,432: Successfully connected to instance ec2-user.us-west-2.compute.amazonaws.com:27017
2020-02-11 21:46:50,432: Retrieving indexes from server...
2020-02-11 21:46:50,440: Completed writing index metadata to local folder: /home/ec2-user/index.js/
After the successful export of the indexes, the next step is to restore those indexes in your Amazon DocumentDB cluster.
Step 3: Restoring indexes using the Amazon DocumentDB Index Tool
To restore the indexes that you exported in the preceding step to your target cluster, use the Amazon DocumentDB Index Tool.
The following code restores the indexes in your Amazon DocumentDB cluster from your EC2 instance:
python migrationtools/documentdb_index_tool.py --restore-indexes --dir ~/index.js/ --host docdb-2x2x-02-02-19-07-xx.cluster-xxxxxxxx.us-west-2.docdb.amazonaws.com:27017 --tls --tls-ca-file ~/rds-ca-2019-root.pem --username user --password password
2020-02-11 21:51:23,245: Successfully authenticated to database: admin
2020-02-11 21:51:23,245: Successfully connected to instance docdb-2x2x-02-02-19-07-xx.cluster-xxxxxxxx.us-west-2.docdb.amazonaws.com:27017
2020-02-11 21:51:23,264: zips-db.zips: added index: _id
To confirm that you restored the indexes correctly, connect to your Amazon DocumentDB cluster with the mongo shell and list the indexes for a given collection. See the following code:
mongo --ssl --host docdb-2020.cluster-xxxxxxxx.us-west-2.docdb.amazonaws.com:27017 --sslCAFile rds-ca-2019-root.pem --username documentdb --password documentdb
db.zips.getIndexes()
Step 4: Performing full load and replicating data with AWS DMS
AWS DMS is a managed service that helps you migrate databases to AWS services efficiently and securely. AWS DMS enables database migration using two methods: full data load and change data capture (CDC). The online migration approach uses AWS DMS to perform a full data copy and uses CDC to replicate changes to Amazon DocumentDB. For more information about using AWS DMS, see AWS Database Migration Service Step-by-Step Walkthroughs.
To perform the online migration, complete the following steps:
1. Create an AWS DMS replication instance. For instructions, see Working with an AWS DMS Replication Instance. For data migration, this post uses the dms.t2.medium instance type. AWS DMS uses the replication instance to run the task that migrates data from your MongoDB source to the Amazon DocumentDB target cluster. Additionally, AWS DMS offers free replication instances for up to six months for certain instance types and migration targets. For more information, see AWS Database Migration Service: Free DMS.
2. Create the MongoDB source and Amazon DocumentDB target endpoints. For more information, see Working with AWS DMS Endpoints. The following screenshot shows the endpoints for this post for the MongoDB cluster and target Amazon DocumentDB cluster.
3. Create a replication task to migrate the data between the source and target endpoints. Choose the task type Full data load followed by ongoing data replication, and enable Start task on create. Your replication begins immediately after task creation.
The following screenshot shows the status of a database migration task that has completed the full load and is currently performing ongoing replication. If you choose the task mongodbtodocumentbd-online-fullandongoing, you can review more specific details. In the Table statistics section, the task shows the statistics of the full data load, followed by the ongoing replication between the source and destination databases. See the following screenshot.
To verify that the number of documents matches in each, run the command db.collection.count() in your source and target databases. You can also monitor the migration's status as an Amazon CloudWatch metric and create a dashboard to show progress. The following screen shows the rate of incoming CDC changes from the source database.
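One way to run that document-count check from the mongo shell, using the zips-db.zips sample collection shown earlier (connect to the target with the TLS options from the previous steps); the two numbers should match once the full load is complete and CDC has caught up:

```js
// Run this on both the source MongoDB replica set and the target Amazon DocumentDB cluster
db.getSiblingDB("zips-db").zips.count()
```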
Step 5: Changing the application endpoint to an Amazon DocumentDB cluster
After the full load is complete and the CDC process is replicating continuously, you are ready to change your application's database connection string to use your Amazon DocumentDB cluster. For more information, see Understanding Amazon DocumentDB Endpoints and Best Practices for Amazon DocumentDB.
Summary
This post described how to migrate data from MongoDB to Amazon DocumentDB by using the online method. For more information, see Migrate from MongoDB to Amazon DocumentDB using the offline method and Ramping up on Amazon DocumentDB (with MongoDB compatibility). If you have any questions or comments, please leave your thoughts in the comments section.
About the Authors
Vijay Injam is a NoSQL Data Architect at Amazon Web Services.
Jeff Duffy is a Sr NoSQL Specialist Solutions Architect at Amazon Web Services.
Joseph Idziorek is a Principal Product Manager at Amazon Web Services.
https://probdm.com/site/MTkxOTA
fbreschi · 5 years
Processing MongoDB Oplog
http://bit.ly/2tm1ddw
yenigiris044 · 5 years
TECHNOLOGY
With Tempo's state-of-the-art warehousing, software, and operations management systems, keep your operations synchronized and executed on time down to the finest detail. Bringing new and effective solutions to your complex logistics and supply chain management needs, OPLOG's warehouse management system (WMS) lets tasks such as tracking inventory levels and stock locations be controlled centrally, so that no matter how large your business network is, you get a flexible technology, tunable to your needs, that supports the daily operations in your warehouse. Keep up with the dynamic demands of multi-channel markets and get ahead of the competition with an intuitive system that evolves with you, optimizing your resources and increasing your profit margins.
ankaapmo · 6 years
Change Streams With MongoDB - #Ankaa
Change Streams With MongoDB
MongoDB is always one step ahead of other database solutions in providing user-friendly support, with advanced features rolled out to ease operations. The OpLog feature was used extensively by MongoDB connectors to pull out data updates and generate a stream. The OpLog feature banked on... https://ankaa-pmo.com/change-streams-with-mongodb/ #Change_Streams #Database #Integration #Mongodb_36 #Tutorial
t-baba · 6 years
Case Study: AO.com Builds Single Customer View with MongoDB
This article was originally published on MongoDB. Thank you for supporting the partners who make SitePoint possible.
Transforms customer experience, fights fraud, and meets the demands of the GDPR with MongoDB Atlas and Apache Kafka running on AWS.
In a world where standards and speed in online retail are getting ever higher, one retailer, AO.com, has consistently been able to differentiate itself around its core focus: customer experience.
At the beginning of 2017 it was clear there was a big opportunity to use customer data more effectively, to drive:
Continued improvement to the customer experience
Faster identification of fraud
Compliance with the EU’s new GDPR personal privacy regulations
AO responded by starting a new team, giving them the mandate to build a 360-degree single view platform of all its customer data.
To ensure they could move quickly and keep the team focused on business goals, the team chose to use the MongoDB Atlas database service in the cloud. We spoke with Jon Vines, Software Development Team Lead at AO.com, about the experience of building the single customer view application, his development philosophy, and the impact it’s having at AO.
Can you start by telling us about AO.com?
AO.com is one of the UK’s leading online electrical retailers. At AO, what really drives us and makes us special is how obsessed we are with the customer’s experience. We make sure that our employees are empowered to make the call on what’s best for each and every customer, that means no scripts or rules in the contact centre, we just do what we think is right. Our guide has always been to ask whether our mum would be proud of the decisions we make every day and whether we feel we would have treated our own Nan in the same way.
That philosophy extends right through to the core technology decisions we make. For instance, the third of our three core business model pillars is infrastructure. We know it’s incredibly important to have systems that are scalable and extendable while making sure we get all the benefit of operational gearing and pace from our investments and long-term growth.
With quality infrastructure as such a key part of AO’s strategy, what was your team trying to achieve?
We were tasked with pulling together the many sources of customer information within AO, to build a single, holistic view of each of our customers. Our data is spread across many different departments, each using their own technology.
The ultimate goal is to deliver one source of truth for all customer data – to drive a host of new and enhanced applications and business processes. This enables our staff, with the appropriate permissions, to access all the useful data we have at the company, from one single place, and all from one easy-to-consume operational data layer.
Can you tell us a little bit about what apps will consume the single view?
There are three core apps we are serving today:
Call center: For us, it is all about the customer and relentlessly striving to make our customers happy. Evaluating our contact centre systems showed us that we could enhance our customer journey by exposing more data points that were not currently available to our agents. This allows us to provide a better level of customer experience.
Fraud: Anything in the fraud grey area gets passed to our fraud team for the personal touch. As our company grows, we’re constantly looking for ways to be more efficient and effective. This is where the single view comes in. The high-throughput, low-latency capabilities and the aggregation of multiple disparate data sources makes it the perfect vehicle to provide additional decision support and anomaly detection for our fraud team.
GDPR: We need to be able to tell customers what personal data we store about them. Our marketing teams need to be able to see customer’s preferences in relation to communications with us. The single view makes all of this much easier.
This is just the start. There are many more projects that are under development.
So how did you get started with the single view project?
The team was formed in May 2017, and our first task was to identify source data and define the domain we were operating in. We have a lot of historical data assets that need to be blended with real-time data. This meant working with multiple instances of Microsoft SQL Server, and various data repositories and message queues hosted on Amazon Web Services (AWS), including SQS. We had to figure out a way to extract that data cleanly, without impacting source systems.
We spent several months cataloging our data assets, before turning to prototyping and technology selection. We started development in October 2017 and went into production just three months later in January 2018. The key to this development velocity was the underlying technology we selected to power the single view platform.
What are you using to move data from your source systems?
As a business we were already running in AWS, so we first looked at Kinesis, but decided on using Apache Kafka and Confluent Open Source. We can use Kafka’s Connect API to extract data via the Change Data Capture streams on existing data sources, with Kafka streaming and transforming the source data into our single view data model. This allows us to extract data without creating dependencies on other teams who we would otherwise have to rely upon to publish this data, or give us access to their source systems.
Once the data is in Kafka, it opens up a multitude of potential downstream applications. The single customer view is just the first of these. We can also use the oplog in MongoDB to extract data into Kafka, which allows us to decouple our microservices architecture in a very natural way.
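The article doesn't show code for this, but a rough sketch of the oplog-to-Kafka idea could look like the following. It uses the Node.js MongoDB driver's change streams (which read from the oplog under the hood) and the kafkajs client as stand-ins; the connection strings, database, collection, and topic names are all invented, and AO's actual pipeline is built on Kafka Connect.

```js
const { MongoClient } = require('mongodb');
const { Kafka } = require('kafkajs');

async function publishChanges() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const kafka = new Kafka({ clientId: 'single-view', brokers: ['localhost:9092'] });
  const producer = kafka.producer();
  await producer.connect();

  // Each change event on the collection is forwarded to a Kafka topic,
  // so downstream consumers are decoupled from the service that owns the data.
  const changes = mongo.db('customers').collection('profiles').watch();
  for await (const event of changes) {
    await producer.send({
      topic: 'customer-profile-changes',
      messages: [{ value: JSON.stringify(event) }],
    });
  }
}
```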
How about the database layer?
It was clear that legacy relational databases would never give the schema flexibility we needed, so we explored more modern database options.
The post Case Study: AO.com Builds Single Customer View with MongoDB appeared first on SitePoint.
by SitePoint Team via SitePoint https://ift.tt/2PG4omX
Understanding the compute-storage separation architecture of GaussDB(for Mongo)
1. Abstract
GaussDB(for Mongo) is a document database developed in-house by Huawei Cloud that is compatible with the MongoDB 4.0 interface. Compared with the traditional MongoDB community edition, its compute-storage separation architecture built on shared storage has the following advantages:
Add a Secondary node in seconds (versus hours for community-edition MongoDB)
Replication is based on the WAL; Secondary nodes perform no write I/O, which fundamentally solves the community edition's problem of Secondary nodes falling behind the oplog
No I/O interaction between Primary and Secondary, so there is in theory no upper limit on the number of Secondary nodes, supporting read transaction throughput of millions of OPS
LSM-tree compaction
… from 一文读懂GaussDB(for Mongo)的计算存储分离架构 via KKNEWS