Explore tagged Tumblr posts
Text
"Sooo, this barrier anomaly is there because of a parsing problem? That doesn't sound too complicated, even I can fix it."
"Well if you're feeling that confident in handling payloads of this nature, have at it. Don't let me interfere you."
--- 1 hour later. ---
"...Chen? I do not mean to press, but have you done any progress?"
"Well...I think the parsing stuff has a syntax error of some sorts, but the payload is so large I can't find the culprit."
"I see. But may I ask why you're writing the code on paper?"
"Do I look like VSC○de to you? I'm checking line by line!"
#so fun things are happening with my assignment#I thought I knew most of the gist with json stuff#only to get a problem where a json response actually has a function inside the payload and it's bricking itself because of syntax#that json has like fucking 4k lines of code and I have no clue where to begin bc it's highlighting everything#it even has html and js! it has an 'init' section! this payload is a goliath for me!#so here I am breaking snippet by snippet in hopes I might get it#also vscode is not telling me shit the syntax checkers online are highlighting something so vague#another problem is that this json is constructed dynamically so no clue if it's gonna solve all syntax errors
Text
AsyncIO for the working PyGame programmer (part III) - Pitfalls
Table of Contents
Race Conditions
Resource Conflicts
Architecture and Composability
Future-Proofing
Best Practices
Race Conditions
Here is a simple multithreaded python program to demonstrate race conditions. One thread sequentially increments my_numbers["b"], while two more threads increment my_numbers["a"] in parallel. Can you guess what this program will print out?
import threading

my_numbers = dict(a=0, b=0)

def incrementer(key, times):
    for i in range(times):
        my_numbers[key] += 1

thread_1 = threading.Thread(target=incrementer, args=("a", 5_000_000))
thread_2 = threading.Thread(target=incrementer, args=("a", 5_000_000))
thread_3 = threading.Thread(target=incrementer, args=("b", 10_000_000))

thread_1.start()
thread_2.start()
thread_3.start()

thread_1.join()
thread_2.join()
thread_3.join()

print(my_numbers)
Even a simple line of code like my_numbers[key]+=1 is made up of multiple operations in the python interpreter: getting the value of my_numbers, resolving the __getitem__ method, getting the value, incrementing, et cetera. CPython's GIL guarantees that all this will never lead to segmentation faults, premature garbage collection, or corrupted memory. It does not guarantee that everything executes in a sensible order. Too often, both threads read my_numbers[key], increment the value at the same time, and write my_numbers[key]+1 back, so one of the two increments is lost. Even worse, there could be a situation where one thread wants to write my_numbers[key]+1 back, but it gets pre-empted. The other thread runs on, incrementing the value hundreds of times, until it itself gets pre-empted, and the first thread finally writes my_numbers[key]+1 back, even though at this point the number is much less than what the other thread had already computed.
It's not as easy to make the same mistake with asyncio, but if you try hard enough, you can also shoot yourself in the foot. At least, you can spot the bug in this version of the code right away: There is an await in between reading a value and writing back the result.
import asyncio
import random

my_numbers = dict(a=0, b=0)

async def bad_async_incrementer(key, times):
    await asyncio.sleep(random.random())
    for i in range(times):
        x = my_numbers[key]
        await asyncio.sleep(0.01)
        my_numbers[key] = x + 1

coro1 = bad_async_incrementer("a", 50)
coro2 = bad_async_incrementer("a", 50)
coro3 = bad_async_incrementer("b", 100)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(coro1, coro2, coro3))
print(my_numbers)
I had to add random sleep times to overcome the somewhat deterministic nature of the event loop scheduling to make this example work, but in a real-world application, the timing of network input won't be deterministic either, and you can have multiple await statements in one coroutine.
It's probably best not to read and write global state in threads or async tasks. Prefer local or thread-local variables instead. If you really have to modify global state, wrap all your critical sections in locks!
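To make that concrete, here is a minimal sketch (my own, not part of the original example) of the broken async incrementer from above, fixed with an asyncio.Lock so that the read, the await, and the write form one critical section:

import asyncio

my_numbers = dict(a=0, b=0)
lock = asyncio.Lock()

async def safe_async_incrementer(key, times):
    for i in range(times):
        async with lock:
            # No other task can enter this block while we hold the lock,
            # even though we await between the read and the write.
            x = my_numbers[key]
            await asyncio.sleep(0.01)
            my_numbers[key] = x + 1

async def main():
    await asyncio.gather(
        safe_async_incrementer("a", 50),
        safe_async_incrementer("a", 50),
        safe_async_incrementer("b", 100),
    )

asyncio.get_event_loop().run_until_complete(main())
print(my_numbers)  # now reliably {'a': 100, 'b': 100}

The lock turns the interleaving problem into a serialisation problem: slower, but correct.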
Resource Conflicts
I don't know what happens when two threads are trying to read from or write to the same socket in a multi-threaded python program. Maybe the input gets duplicated, maybe the output is interleaved, maybe there is some low-level locking. Even if the underlying OS operations are atomic, I have no clue whether the Python standard library splits the data up and processes it via multiple system calls. Even if you write code on top of well-behaved libraries (which correctly lock a socket while reading or writing), your code could still invoke multiple of these "atomic" function calls in a row from different threads, leading to interleaved input or output.
When two coroutine tasks inside the same event loop await input from the same source at the same time, the result is usually this: the event loop does not know which task to wake up next, and raises an exception.
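Here is a self-contained sketch of that failure mode (the port number is arbitrary, and the exact exception text varies between Python versions):

import asyncio

async def silent_server(reader, writer):
    # A server that never sends anything, so both client reads must wait.
    await asyncio.sleep(10)

async def main():
    server = await asyncio.start_server(silent_server, "127.0.0.1", 8899)
    reader, writer = await asyncio.open_connection("127.0.0.1", 8899)
    tasks = [asyncio.ensure_future(reader.readline()) for _ in range(2)]
    try:
        # Two tasks waiting on the same StreamReader at the same time:
        # the second one fails with something like "RuntimeError:
        # readline() called while another coroutine is already waiting
        # for incoming data".
        await asyncio.gather(*tasks)
    except RuntimeError as exc:
        print(exc)
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    writer.close()
    server.close()

asyncio.get_event_loop().run_until_complete(main())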
If you have multiple tasks that want to send requests or do back-and-forth handshakes over the same connection, they each need to acquire a lock on the connection before they can communicate, and release it afterwards. Otherwise, your scheduler might raise an exception if you are lucky, or as above, the data coming from different threads could be interleaved or switched around! You must avoid resource conflicts by correctly using locks, and incorrect use of locks can cause deadlocks.
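As a rough sketch of that pattern (my own code, with a made-up host and a trivial newline-delimited protocol, not any particular library's API):

import asyncio

class LockedConnection:
    """A StreamReader/StreamWriter pair shared by many tasks."""

    def __init__(self, reader, writer):
        self.reader = reader
        self.writer = writer
        self.lock = asyncio.Lock()

    async def request(self, payload: bytes) -> bytes:
        # Hold the lock for the whole request-response cycle, so another
        # task cannot sneak its own request in between and receive our
        # response instead of its own.
        async with self.lock:
            self.writer.write(payload + b"\n")
            await self.writer.drain()
            return await self.reader.readline()

async def main():
    reader, writer = await asyncio.open_connection("example.org", 7777)
    conn = LockedConnection(reader, writer)
    # Several tasks can now safely share one connection.
    print(await asyncio.gather(conn.request(b"ping"), conn.request(b"status")))

asyncio.get_event_loop().run_until_complete(main())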
This might not look like a common occurrence in client software, but every time you have a bi-directional request-response protocol, like JSON-RPC, or a chat service where multiple users can send messages, the protocol is already more complicated than the simple, deterministic lock-step of HTTP.
Even though asyncio is completely single-threaded, it still provides concurrency. You will have to deal with the familiar problems you know and hate from multi-threaded concurrency: race conditions and deadlocks. The asyncio module does not help you with timing and correctness, it is not a way to sidestep the GIL for CPU-heavy multi-core parallel computation, and it will not make your programs easier to read. It is just a standardised API for concurrent, single-threaded, asynchronous I/O.
Architecture and Composability
Broadly speaking, you can't call asynchronous code, meaning code marked with async def, from regular synchronous code. That is also true in other asynchronous, event-based architectures like node.js, and it is usually shown with a table like this:
                      in sync code    in async code
can call sync code    yes             yes
can call async code   NO              yes
This is technically correct, but at the same time somewhat misleading. While you can call synchronous code from async code, and you cannot use await outside of an async function, in practice you cannot just invoke any old bit of synchronous code from async code running inside an event loop. Any blocking I/O inside a coroutine will cause the whole event loop to block. That won't be a problem if you debug your application with a single client, or a single task in your event loop, but it will come back to haunt you in production. Python doesn't track which functions are pure functions, which functions have side effects, and whether they cause I/O or other system calls.
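If you do have to call blocking synchronous code from a coroutine, the usual escape hatch is to push it onto an executor; here is a minimal sketch (the blocking function is just a stand-in):

import asyncio
import socket

def blocking_lookup(hostname):
    # Stand-in for any blocking call: file I/O, a legacy HTTP client, ...
    return socket.gethostbyname(hostname)

async def handler():
    loop = asyncio.get_event_loop()
    # Calling blocking_lookup directly would stall every other task in
    # the loop; run_in_executor hands it to the default thread pool
    # instead and gives us something we can await.
    address = await loop.run_in_executor(None, blocking_lookup, "example.org")
    return address

loop = asyncio.get_event_loop()
print(loop.run_until_complete(handler()))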
If your "synchronous" code is actually creating threads, you have to be extra careful, as Pythons thread synchronisation mechanisms and data structures in Python are different from those in asyncio. If your async code calls synchronous code that invokes callbacks, you might need to make sure that these callbacks are scheduled as tasks in your event loop again. Probably that means using as callbacks synchronous stubs that create coroutines and submit them to your event loop to be executed as tasks. It gets hairy fast.
Similarly, you can evaluate a coroutine inside synchronous code if you really want to, by creating a new event loop and using loop.run_until_complete(coroutine). But you probably shouldn't. If you are inside async code, you should just use await instead of messing with the loop. The worst-case scenario here is creating a new loop from synchronous code that was itself called from async code, causing the outer event loop to hang until the inner event loop has completed. That way, as long as any task in the inner loop is waiting, no task in the outer loop can do anything, even if there is input available.
As a general rule, your asyncio programs should use one thread and one event loop. If you absolutely have to use multiple threads, you should still use only one loop, run it in its own thread, and use thread-safe methods to submit tasks to it. Don't have multiple loops in multiple threads, and don't call loop.run_until_complete(coroutine) or loop.run_forever() from different threads.
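One way that setup can look, as a sketch using only standard asyncio calls (the coroutine and its argument are placeholders):

import asyncio
import threading

loop = asyncio.new_event_loop()

def run_loop():
    # One dedicated thread owns the one and only event loop.
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=run_loop, daemon=True).start()

async def fetch(n):
    await asyncio.sleep(0.1)
    return n * n

# Any other thread submits work with the thread-safe helper and gets a
# concurrent.futures.Future back that it can block on.
future = asyncio.run_coroutine_threadsafe(fetch(7), loop)
print(future.result())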
If you are using a GUI toolkit with asyncio, you will probably have to spawn one GUI thread, and one "worker" thread with the event loop. You could also try handling UI events inside the event loop, or writing your own GUI mainloop that periodically calls the event loop (like my own approach to using asyncio with pygame at 30/60 FPS).
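As an illustration of the "mainloop that periodically calls the event loop" idea, here is a sketch of the general pattern (not necessarily the exact code from my earlier pygame articles):

import asyncio
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
loop = asyncio.get_event_loop()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # Give the asyncio loop one chance to run whatever is ready,
    # then hand control back to the game loop for this frame.
    loop.run_until_complete(asyncio.sleep(0))
    # ... update game state and draw here ...
    pygame.display.flip()
    clock.tick(60)

pygame.quit()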
You get the most out of asyncio if you run a single-threaded server with only one event loop that handles all concurrency for you, and any long-running computations are either done by a fancy message queue middleware, or with concurrent.futures.ProcessPoolExecutor.
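For the long-running computation case, here is a sketch of handing CPU-heavy work to a process pool from inside a coroutine (the crunch function is just a placeholder):

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # CPU-bound work runs in a separate process, so it neither blocks
    # the event loop nor competes for the GIL.
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_event_loop()
    with ProcessPoolExecutor() as pool:
        results = await asyncio.gather(
            loop.run_in_executor(pool, crunch, 10_000_000),
            loop.run_in_executor(pool, crunch, 20_000_000),
        )
    print(results)

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())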
Further Reading
What Color Is Your Function? - Explanation of what can call what in node.js
I Don't Understand asyncio - Outsider view of the asyncio ecosystem from 2016
Controlling Python Async Creep - Critical view of the asyncio architecture and ecosystem in 2018
Future-Proofing
It gets worse. Application code might do weird things like calling your async library from a thread, or calling your threaded code from a coroutine, but application code might also use something like gevent, twisted or tornado - async frameworks from the Python 2 era. Gevent adds green threads to the Python interpreter, and monkey-patches common i/o operations to allow task switching while waiting on blocking input. Twisted implemented its own event loop, but can now be used on top of a Python 3.6 SelectorEventLoop. The same goes for Tornado.
If a library monkey-patches I/O operations, that can create problems as soon as you use more than one such library. If a library uses its own event loop, or even worse, implements its own event-based scheduling, then two different event-based libraries will block each other.
If you use a non-standard event loop/scheduler, it better be compatible with asyncio coroutines, and you better make sure to schedule all tasks in the same loop. If one library uses the default asyncio loop via get_event_loop(), and another creates a new one with Twisted or Tornado, you have a problem, and the tasks from these two libraries will not be interleaved.
In the future, we might get API improvements for asyncio, but for backward compatibility, everybody has to support the old API. Somebody might write a nice, stable, thread-like interface on top of asyncio, like thredo (https://github.com/dabeaz/thredo), but such an interface will need to have a way to call async code that uses the low-level asyncio module. Otherwise, you will get the same problems again, with the event loop and coroutines inaccessible and hidden behind a swanky interface.
Best Practices
If you are doing request-response-request-response things, wrap every request-response cycle in a lock to make sure another task doesn't accidentally get the response to your request.
If you read multiple times from a socket, for instance if you first read a length field and then read until you have received that many bytes, then wrap the whole thing in a lock (see the sketch after this list).
If you read and write global or shared data structures, then make sure to either use locks, or to not await in between reading and writing.
If you are writing a library, don't hide away the async behind a synchronous API. Just write coroutines, and let the user create the event loop and schedule tasks.
If you provide a synchronous API, at least allow the user to pass in an event loop.
Do not create more than one event loop per program. You can have one per thread, but even that defeats the purpose of the event loop.
If you write your own high-level asyncio wrapper, please make it interoperable with plain asyncio.
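A sketch of the length-prefixed read from the second point above, with the whole multi-step read held under one lock (the four-byte big-endian length prefix is just an example framing):

import asyncio
import struct

read_lock = asyncio.Lock()

async def read_frame(reader: asyncio.StreamReader) -> bytes:
    # Both reads belong to one logical message, so no other task may
    # touch the reader between them.
    async with read_lock:
        header = await reader.readexactly(4)
        (length,) = struct.unpack("!I", header)
        return await reader.readexactly(length)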
Text
Your personal DIY image search
Hi everyone, it’s been a while! I bet you forgot this blog even existed. I happen to be a big supporter of quality over quantity, so while my work on parsing Japanese counters earlier this year was pretty interesting, I already wrote way too many articles about Ichiran/ichi.moe so I decided to keep it to myself. Recently I’ve been working on a little side-project and now that it finally works, I think it deserves a full-fledged blog post.
For a bit of a nostalgia trip, let's go back to the early 00s. Remember when TinEye first appeared? It was amazing. For the first time you could easily find where that one image you once saved from some random phpBB forum is really from. It didn't matter if your image was resized, or slightly edited from the original, it still worked. That shit was magic, my friends. Of course these days nobody is impressed by this stuff. Google Image Search indexes pretty much anything that exists on the Internet and even uses neural networks to identify content of an image.
Back to the present day. I discovered I have an image hoarding problem. Over the years of using the Intertubes, I have accumulated a massive number of images on my hard drive. When I see an image I like my first thought is "do I have this one saved already?" because how could I possibly remember? At this point I need my own personal Google Image Search. And (spoiler alert) now I have one.
First of all, I needed an actual image matching technology. These days the cloud is all the rage, so I definitely wanted to have this thing running in the cloud (as opposed to my local PC) so that I could search my images from anywhere in the world. After a cursory search, my eyes fell on a thing called Pavlov Match which runs from a Docker container, so should be pretty easy to install. I installed docker and docker-compose on my VPS, and then git-cloned Match and ran make dev according to instructions. This will actually run an Elasticsearch instance on the same VPS, and apparently the damn thing eats memory for breakfast, at least with the default settings. I'm using a cheap 2GB RAM Linode, so the memory is actually a very finite resource here, as I will find out later. The default settings will also completely expose your match installation AND elasticsearch to the world. But don't worry, I figured this out so that you don't have to. Let's edit docker-compose.yml from match repository as follows:
version: '2'
services:
  match:
    image: pavlov/match:latest
    ports:
      - 127.0.0.1:8888:8888
    command: ["/wait-for-it.sh", "-t", "60", "elasticsearch:9200", "--", "gunicorn", "-b", "0.0.0.0:8888", "-w", "4", "--preload", "server:app"]
    links:
      - elasticsearch
  elasticsearch:
    image: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
      - bootstrap.mlockall=true
    expose:
      - "9200"
This will make match server only available on local network within the VPS on port 8888, and elasticsearch only available to these two docker containers. It will also restrict elasticsearch RAM consumption to 512mb and --preload flag reduces the amount of memory gunicorn workers consume.
To make match server available from outside I recommend proxying it through nginx or some other proper web server. You can also add authentication/IP whitelist in nginx because the match server has no authentication features whatsoever, so anyone will be able to search/add/delete the data on it.
That was the backend part. No programming required here! But this is a Lisp blog, so the next step is writing a Lisp client that can communicate with this server. The first step is reading the match API documentation. You might notice it's a bit... idiosyncratic. I guess REST is out of fashion these days. Anyway, I started implementing a client using the trusty drakma, but I quickly hit a limitation: match expects all parameters to be sent encoded as form data, but drakma can only encode POST parameters as form data and not, say, DELETE parameters. Not to be foiled by a badly designed API, I tried dexador, and while dex:delete does not encode parameters as form data, dex:request is flexible enough to do so. Each response (a JSON string) is parsed using jsown.
(defun parse-request (&rest args)
  (when *auth*
    (setf args `(,@args :basic-auth ,*auth*)))
  (multiple-value-bind (content return-code)
      (handler-bind ((dex:http-request-failed #'dex:ignore-and-continue))
        (apply 'dex:request args))
    (cond ((<= 400 return-code 499)
           (jsown:new-js
             ("status" "fail")
             ("error" content)
             ("code" return-code)))
          (t (let ((obj (jsown:parse content)))
               (jsown:extend-js obj ("code" return-code)))))))

(defun add-local (file &key path (metadata "{}"))
  "Add local image to Match server"
  (parse-request (api-url "/add")
                 :method :post
                 :content `(("image" . ,(pathname file))
                            ("filepath" . ,(or path file))
                            ("metadata" . ,metadata))))
With this basic client in place, I can add and delete individual images, but it would be incredibly cumbersome to manage thousands of images with it. I had to write some code that would scan specified directories for images, track any changes and then add/update/delete information from the Match server as needed. I already wrote something like this before, so this was pretty easy. Of course SBCL's "sb-posix:stat doesn't work on Unicode filenames" bug has reared its head again, but I already knew the workaround. This time I completely relied on UIOP for recursively walking directories (uiop:subdirectories and uiop:directory-files are your friends). Each image file is represented as a CLOS object and saved into a hash table, which is serialized to a file using CL-STORE. The object has a status attribute which can be :new, :update, :delete, :ok and so on. Based on status, an action needs to be performed, such as uploading an image to the Match server (for :new and :update).
Now, I could just send a bunch of requests one after another, but that would be a waste. Remember, we have 4 gunicorn workers running on our server! This clearly calls for a thread pool. I thought PCALL would be perfect for this, but nope. It uses sb-thread:interrupt-thread which is incredibly unsafe and the result is that you basically can't safely make http requests from thread workers. Debugging this took way too much time. In the end, I implemented a thread pool based on lparallel promises which is kind of an overkill for such a simple use case, but at least it worked.
(setf *cache* (update-cache))
(let ((lparallel:*kernel* (lparallel:make-kernel threads)))
  (unwind-protect
       (loop for value in (alexandria:hash-table-values *cache*)
             collect (worker value) into futures
             finally (map nil 'lparallel:force futures))
    (lparallel:end-kernel)))
(save-cache *cache*)
Note that you must be very careful when doing things that affect global state inside the threads. For example :delete action removes a key from the hash table *cache*. This is not guaranteed to be an atomic operation, so it's necessary to grab a global lock when doing it.
(defvar *cache-lock* (bordeaux-threads:make-lock "match-cache-lock"))

...

(bordeaux-threads:with-lock-held (*cache-lock*)
  (remhash key *cache*))
Printing messages to REPL from inside threads also requires a separate lock and (force-output), otherwise it will look like a complete mess!
(defun format-msg (str &rest args)
  (bordeaux-threads:with-lock-held (*msg-lock*)
    (terpri)
    (apply 'format t str args)
    (force-output)))
Now that the required functionality is implemented, it's time to test it: upload a bunch of stuff... and get back a bunch of errors. It took some sleuthing to discover that the gunicorn workers of my Match server are routinely getting killed by the "OOM killer". Basically, the server runs out of memory and the system, in desperation, kills a process that it doesn't like. Remember, I only have 2GB of memory there!
I figured out that images with very large dimensions are the most problematic in terms of memory usage. If I were to resize these images to some reasonable size, the matching should still work pretty well. In order to execute this plan, I thought I'd use some Lisp-to-ImageMagick interface. There's in fact a pure Lisp solution called OptiCL, but would it really handle any image? Remind me to test that later! Anyway, back to ImageMagick. Neither lisp-magick nor lisp-magick-wand would work with the most recent ImageMagick version (it seems its API has changed a bit). However, the last one I tried, cl-graphicsmagick, which uses a fork of ImageMagick called GraphicsMagick, unexpectedly worked (at least on my Windows laptop; note that you need to install the Microsoft Visual C++ 2008 Redistributable, otherwise the library won't load with CFFI), so I went with that.
Using the very useful temporary-file functionality of UIOP (uiop:with-temporary-file), I resize each oversized image to reasonable dimensions and save it into a temporary file, which is then uploaded to the Match server. I also send the file's original and resized dimensions as metadata. Thankfully this completely eradicated the memory issue. There's a minor problem where GraphicsMagick cannot handle Unicode pathnames on Windows, so in that case I copy the original image into a temporary file with an ASCII-only name.
(defun resize-image (input-path output-path
                     &key (max-width *max-dimension*) (max-height *max-dimension*)
                          (filter :%QuadraticFilter) (blur 1))
  (gm::with-magick-wand (wand)
    (handler-case
        (gm::%MagickReadImage wand input-path)
      ;; graphicsmagick cannot read Unicode filenames on Windows so attempt to load a copy
      (gm::magick-error ()
        (uiop:with-temporary-file (:pathname tmp :prefix "gm" :type (pathname-type input-path))
          (uiop:copy-file input-path tmp)
          (setf wand (gm::%NewMagickWand))
          (gm::%MagickReadImage wand (namestring tmp)))))
    (let ((w (gm::%MagickGetImageWidth wand))
          (h (gm::%MagickGetImageHeight wand))
          (res nil))
      (multiple-value-bind (fw fh)
          (gm::fit-width-height w h max-width max-height)
        (unless (and (= w fw) (= h fh))
          (gm::%MagickResizeImage wand fw fh filter blur)
          (gm::%MagickWriteImage wand output-path)
          (setf res output-path))
        (values res w h fw fh)))))
Later I tested this code on an Ubuntu machine with GraphicsMagick installed from the Apt repository, and SBCL crashed into ldb debugger mode straight away... Welp. The helpful folks of #lisp told me the problem is with the signal handlers established by the GraphicsMagick library; somehow they confuse SBCL. Based on that advice, I eventually succeeded in making it work. Uninstall the apt GraphicsMagick and grab the sources. Find the file called magick.c and replace the line
InitializeMagickSignalHandlers(); /* Signal handlers */
with
// InitializeMagickSignalHandlers(); /* Signal handlers */
(commenting it out). Then do configure --enable-shared (see readme for possible options), make and sudo make install. This will make it work when called from SBCL on Linux.
Anyways, the full code of MATCH-CLIENT can be found at my Github. It's not installable from quicklisp for obvious reasons; in fact it's a complete pain to install, as you might've already guessed, but if you wanna try it, you're welcome. The two main commands are update and match. The first is called to upload all images in your *root-dirs* to the server and then to update them if anything changes. match is used to match any image on the Internet (passed as a URL string) or a local image (passed as a pathname object) against the server. It returns a list of jsown objects (basically alists) that contain a score (up to 100 for an exact match), a path (with a "local tag", which can be different per device) and metadata containing the original and resized dimensions.
((:OBJ ("score" . 96.00956)
       ("filepath" . "[HOME] d:/foo/bar/baz.jpg")
       ("metadata" :OBJ ("rw" . 1218) ("rh" . 2048) ("w" . 3413) ("h" . 5736))))
Anyway, this was a fun (although often frustrating) thing to build and ended up being quite useful! Thanks for reading and see you next time.
Text
Understanding Client-Side GraphQl With Apollo-Client In React Apps
About The Author
Blessing Krofegha is a software engineer based in Lagos, Nigeria, with a burning desire to contribute to making the web awesome for all, by writing and building … More about Blessing …
Ever tried interacting with a GraphQL server in a client-side application and felt like giving up even before getting anywhere? Ever declined an invitation to join a code base that requires working with GraphQL API because you had no idea? Ever felt like the only front-end engineer who hasn’t learned how to consume GraphQL APIs? If you answered yes to any of these questions, then this tutorial is for you. We’ll be taking a closer look at a few basics of GraphQL and Apollo Client, as well as how to work with both of them. By the end, we’ll have built a pet shop app that uses Apollo Client. Then, you can go on to build your next project.
According to State of JavaScript 2019, 38.7% of developers would like to use GraphQL, while 50.8% of developers would like to learn GraphQL.
Being a query language, GraphQL simplifies the workflow of building a client application. It removes the complexity of managing API endpoints in client-side apps because it exposes a single HTTP endpoint to fetch the required data. Hence, it eliminates overfetching and underfetching of data, as in the case of REST.
But GraphQL is just a query language. In order to use it easily, we need a platform that does the heavy lifting for us. One such platform is Apollo.
The Apollo platform is an implementation of GraphQL that transfers data between the cloud (the server) to the UI of your app. When you use Apollo Client, all of the logic for retrieving data, tracking, loading, and updating the UI is encapsulated by the useQuery hook (as in the case of React). Hence, data fetching is declarative. It also has zero-configuration caching. Just by setting up Apollo Client in your app, you get an intelligent cache out of the box, with no additional configuration required.
Apollo Client is also interoperable with other frameworks, such as Angular, Vue.js, and React.
Note: This tutorial will benefit those who have worked with RESTful or other forms of APIs in the past on the client-side and want to see whether GraphQL is worth taking a shot at. This means you should have worked with an API before; only then will you be able to understand how beneficial GraphQL could be to you. While we will be covering a few basics of GraphQL and Apollo Client, a good knowledge of JavaScript and React Hooks will come in handy.
GraphQL Basics
This article isn’t a complete introduction to GraphQL, but we will define a few conventions before continuing.
What Is GraphQL?
GraphQL is a specification that describes a declarative query language that your clients can use to ask an API for the exact data they want. This is achieved by creating a strong type schema for your API, with ultimate flexibility. It also ensures that the API resolves data and that client queries are validated against a schema. This definition means that GraphQL contains some specifications that make it a declarative query language, with an API that is statically typed (built around Typescript) and making it possible for the client to leverage those type systems to ask the API for the exact data it wants.
So, if we created some types with some fields in them, then, from the client-side, we could say, “Give us this data with these exact fields”. Then the API will respond with that exact shape, just as if we were using a type system in a strongly typed language. You can learn more in my Typescript article.
Let’s look at some conventions of GraphQl that will help us as we continue.
The Basics
Operations In GraphQL, every action performed is called an operation. There are a few operations, namely:
Query This operation is concerned with fetching data from the server. You could also call it a read-only fetch.
Mutation This operation involves creating, updating, and deleting data from a server. It is popularly called a CUD (create, update, and delete) operation.
Subscriptions This operation in GraphQL involves sending data from a server to its clients when specific events take place. They are usually implemented with WebSockets.
In this article, we will be dealing only with query and mutation operations.
Operation names There are unique names for your client-side query and mutation operations.
Variables and arguments Operations can define arguments, very much like a function in most programming languages. Those variables can then be passed to query or mutation calls inside the operation as arguments. Variables are expected to be given at runtime during the execution of an operation from your client.
Aliasing This is a convention in client-side GraphQL that involves renaming verbose or vague field names with simple and readable field names for the UI. Aliasing is necessary in use cases where you don’t want to have conflicting field names.
GraphQL basic conventions. (Large preview)
What Is Client-Side GraphQL?
When a front-end engineer builds UI components using any framework, like Vue.js or (in our case) React, those components are modeled and designed from a certain pattern on the client to suit the data that will be fetched from the server.
One of the most common problems with RESTful APIs is overfetching and underfetching. This happens because the only way for a client to download data is by hitting endpoints that return fixed data structures. Overfetching in this context means that a client downloads more information than is required by the app.
In GraphQL, on the other hand, you’d simply send a single query to the GraphQL server that includes the required data. The server would then respond with a JSON object of the exact data you’ve requested — hence, no overfetching. Sebastian Eschweiler explains the differences between RESTful APIs and GraphQL.
Client-side GraphQL is a client-side infrastructure that interfaces with data from a GraphQL server to perform the following functions:
It manages data by sending queries and mutating data without you having to construct HTTP requests all by yourself. You can spend less time plumbing data and more time building the actual application.
It manages the complexity of a cache for you. So, you can store and retrieve the data fetched from the server, without any third-party interference, and easily avoid refetching duplicate resources. Thus, it identifies when two resources are the same, which is great for a complex app.
It keeps your UI consistent with Optimistic UI, a convention that simulates the results of a mutation (i.e. the created data) and updates the UI even before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.
For further information about client-side GraphQL, spare an hour with the cocreator of GraphQL and other cool folks on GraphQL Radio.
What Is Apollo Client?
Apollo Client is an interoperable, ultra-flexible, community-driven GraphQL client for JavaScript and native platforms. Its impressive features include a robust state-management tool (Apollo Link), a zero-config caching system, a declarative approach to fetching data, easy-to-implement pagination, and the Optimistic UI for your client-side application.
Apollo Client stores not only the state from the data fetched from the server, but also the state that it has created locally on your client; hence, it manages state for both API data and local data.
It’s also important to note that you can use Apollo Client alongside other state-management tools, like Redux, without conflict. Plus, it’s possible to migrate your state management from, say, Redux to Apollo Client (which is beyond the scope of this article). Ultimately, the main purpose of Apollo Client is to enable engineers to query data in an API seamlessly.
Features of Apollo Client
Apollo Client has won over so many engineers and companies because of its extremely helpful features that make building modern robust applications a breeze. The following features come baked in:
Caching Apollo Client supports caching on the fly.
Optimistic UI Apollo Client has cool support for the Optimistic UI. It involves temporarily displaying the final state of an operation (mutation) while the operation is in progress. Once the operation is complete, the real data replaces the optimistic data.
Pagination Apollo Client has built-in functionality that makes it quite easy to implement pagination in your application. It takes care of most of the technical headaches of fetching a list of data, either in patches or at once, using the fetchMore function, which comes with the useQuery hook.
In this article, we will look at a selection of these features.
Enough of the theory. Tighten your seat belt and grab a cup of coffee to go with your pancakes, as we get our hands dirty.
Building Our Web App
This project is inspired by Scott Moss.
We will be building a simple pet shop web app, whose features include:
fetching our pets from the server-side;
creating a pet (which involves creating the name, type of pet, and image);
using the Optimistic UI;
using pagination to segment our data.
To begin, clone the repository, ensuring that the starter branch is what you’ve cloned.
Getting Started
Install the Apollo Client Developer Tools extension for Chrome.
Using the command-line interface (CLI), navigate to the directory of the cloned repository, and run the command to get all dependencies: npm install.
Run the command npm run app to start the app.
While still in the root folder, run the command npm run server. This will start our back-end server for us, which we’ll use as we proceed.
The app should open up in a configured port. Mine is http://localhost:1234/; yours is probably something else.
If everything worked well, your app should look like this:
Cloned starter branch UI. (Large preview)
You’ll notice that we’ve got no pets to display. That’s because we haven’t created such functionality yet.
If you’ve installed Apollo Client Developer Tools correctly, open up the developer tools and click on the tray icon. You’ll see “Apollo” and something like this:
Apollo Client Developer Tools. (Large preview)
Like the Redux and React developer tools, we will be using Apollo Client Developer Tools to write and test our queries and mutations. The extension comes with the GraphQL Playground.
Fetching Pets
Let’s add the functionality that fetches pets. Move over to client/src/client.js. We’ll be writing Apollo Client, linking it to an API, exporting it as a default client, and writing a new query.
Copy the following code and paste it in client.js:
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'

const link = new HttpLink({ uri: 'https://localhost:4000/' })
const cache = new InMemoryCache()
const client = new ApolloClient({ link, cache })

export default client
Here’s an explanation of what is happening above:
ApolloClient This will be the function that wraps our app and, thus, interfaces with the HTTP, caches the data, and updates the UI.
InMemoryCache This is the normalized data store in Apollo Client that helps with manipulating the cache in our application.
HttpLink This is a standard network interface for modifying the control flow of GraphQL requests and fetching GraphQL results. It acts as middleware, fetching results from the GraphQL server each time the link is fired. Plus, it’s a good substitute for other options, like Axios and window.fetch.
We declare a link variable that is assigned to an instance of HttpLink. It takes a uri property and a value to our server, which is https://localhost:4000/.
Next is a cache variable that holds the new instance of InMemoryCache.
The client variable also takes an instance of ApolloClient and wraps the link and cache.
Lastly, we export the client so that we can use it across the application.
Before we get to see this in action, we’ve got to make sure that our entire app is exposed to Apollo and that our app can receive data fetched from the server and that it can mutate that data.
To achieve this, let’s head over to client/src/index.js:
import React from 'react'
import ReactDOM from 'react-dom'
import { BrowserRouter } from 'react-router-dom'
import { ApolloProvider } from '@apollo/react-hooks'
import App from './components/App'
import client from './client'
import './index.css'

const Root = () => (
  <BrowserRouter>
    <ApolloProvider client={client}>
      <App />
    </ApolloProvider>
  </BrowserRouter>
);

ReactDOM.render(<Root />, document.getElementById('app'))

if (module.hot) {
  module.hot.accept()
}
As you’ll notice in the highlighted code, we’ve wrapped the App component in ApolloProvider and passed the client to it as a prop. ApolloProvider is similar to React’s Context.Provider. It wraps your React app and places the client in context, which allows you to access it from anywhere in your component tree.
To fetch our pets from the server, we need to write queries that request the exact fields that we want. Head over to client/src/pages/Pets.js, and copy and paste the following code into it:
import React, {useState} from 'react'
import gql from 'graphql-tag'
import { useQuery, useMutation } from '@apollo/react-hooks'
import PetsList from '../components/PetsList'
import NewPetModal from '../components/NewPetModal'
import Loader from '../components/Loader'

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

export default function Pets () {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);

  if (loading) return <Loader />;
  if (error) return <p>An error occured!</p>;

  const onSubmit = input => {
    setModal(false)
  }

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}
With a few bits of code, we are able to fetch the pets from the server.
What Is gql?
It’s important to note that operations in GraphQL are generally JSON objects written with graphql-tag and with backticks.
gql tags are JavaScript template literal tags that parse GraphQL query strings into the GraphQL AST (abstract syntax tree).
Query operations In order to fetch our pets from the server, we need to perform a query operation.
Because we’re making a query operation, we needed to specify the type of operation before naming it.
The name of our query is GET_PETS. It’s a naming convention of GraphQL to use camelCase for field names.
The name of our fields is pets. Hence, we specify the exact fields that we need from the server (id, name, type, img).
useQuery is a React hook that is the basis for executing queries in an Apollo application. To perform a query operation in our React component, we call the useQuery hook, which was initially imported from @apollo/react-hooks. Next, we pass it a GraphQL query string, which is GET_PETS in our case.
When our component renders, useQuery returns an object response from Apollo Client that contains loading, error, and data properties. Thus, they are destructured, so that we can use them to render the UI.
useQuery is awesome. We don’t have to include async-await. It’s already taken care of in the background. Pretty cool, isn’t it?
loading This property helps us handle the loading state of the application. In our case, we return a Loader component while our application loads. By default, loading is false.
error Just in case, we use this property to handle any error that might occur.
data This contains our actual data from the server.
Lastly, in our PetsList component, we pass the pets props, with data.pets as an object value.
At this point, we have successfully queried our server.
To start our application, let’s run the following command:
Start the client app. Run the command npm run app in your CLI.
Start the server. Run the command npm run server in another CLI.
VScode CLI partitioned to start both the client and the server. (Large preview)
If all went well, you should see this:
Pets queried from the server.
Mutating Data
Mutating data or creating data in Apollo Client is almost the same as querying data, with very slight changes.
Still in client/src/pages/Pets.js, let’s copy and paste the highlighted code:
....

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

const NEW_PETS = gql`
  mutation CreateAPet($newPet: NewPetInput!) {
    addPet(input: $newPet) {
      id
      name
      type
      img
    }
  }
`;

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS);

  const onSubmit = input => {
    setModal(false)
    createPet({ variables: { newPet: input } });
  }

  if (loading || newPet.loading) return <Loader />;
  if (error || newPet.error) return <p>An error occured</p>;

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}

export default Pets
To create a mutation, we would take the following steps.
1. mutation
To create, update, or delete, we need to perform the mutation operation. The mutation operation is named CreateAPet and takes one argument, the $newPet variable, with a type of NewPetInput. The ! means that the variable is required (non-nullable); thus, GraphQL won’t execute the operation unless we pass a newPet variable whose type is NewPetInput.
2. addPet
The addPet function, which is inside the mutation operation, takes an argument of input and is set to our $newPet variable. The field sets specified in our addPet function must be equal to the field sets in our query. The field sets in our operation are:
id
name
type
img
3. useMutation
The useMutation React hook is the primary API for executing mutations in an Apollo application. When we need to mutate data, we call useMutation in a React component and pass it a GraphQL string (in our case, NEW_PETS).
When our component renders useMutation, it returns a tuple (that is, an ordered set of data constituting a record) in an array that includes:
a mutate function that we can call at any time to execute the mutation;
an object with fields that represent the current status of the mutation’s execution.
The useMutation hook is passed a GraphQL mutation string (which is NEW_PETS in our case). We destructured the tuple into the function (createPet) that will mutate the data and the object field (newPet).
4. createPet
In our onSubmit function, shortly after the setModal state, we defined our createPet. This function takes a variable with an object property of a value set to { newPet: input }. The input represents the various input fields in our form (such as name, type, etc.).
With that done, the outcome should look like this:
Mutation without instant update.
If you observe the GIF closely, you’ll notice that our created pet doesn’t show up instantly, only when the page is refreshed. However, it has been updated on the server.
The big question is, why doesn’t our pet update instantly? Let’s find out in the next section.
Caching In Apollo Client
The reason our app doesn’t update automatically is that our newly created data doesn’t match the cache data in Apollo Client. So, there is a conflict as to what exactly it needs to be updated from the cache.
Simply put, if we perform a mutation that updates or deletes multiple entries (a node), then we are responsible for updating any queries referencing that node, so that it modifies our cached data to match the modifications that a mutation makes to our back-end data.
Keeping Cache In Sync
There are a few ways to keep our cache in sync each time we perform a mutation operation.
The first is by refetching matching queries after a mutation, using the refetchQueries object property (the simplest way).
Note: If we were to use this method, it would take an object property in our createPet function called refetchQueries, and it would contain an array of objects with a value of the query: refetchQueries: [{ query: GET_PETS }].
Because our focus in this section isn’t just to update our created pets in the UI, but to manipulate the cache, we won’t be using this method.
The second approach is to use the update function. In Apollo Client, there’s an update helper function that helps modify the cache data, so that it syncs with the modifications that a mutation makes to our back-end data. Using this function, we can read and write to the cache.
Updating The Cache
Copy the following highlighted code, and paste it in client/src/pages/Pets.js:
......

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      const data = cache.readQuery({ query: GET_PETS });
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

.....
The update function receives two arguments:
The first argument is the cache from Apollo Client.
The second is the exact mutation response from the server. We destructure the data property and set it to our mutation (addPet).
Next, to update the function, we need to check for what query needs to be updated (in our case, the GET_PETS query) and read the cache.
Secondly, we need to write to the query that was read, so that it knows we’re about to update it. We do so by passing an object that contains a query property, with the value set to our query operation (GET_PETS), and a data property whose value is an object with a pets array containing the addPet result followed by a copy of the existing pets data.
If you followed these steps carefully, you should see your pets update automatically as you create them. Let’s take a look at the changes:
Pets updates instantly.
Optimistic UI
A lot of people are big fans of loaders and spinners. There’s nothing wrong with using a loader; there are perfect use cases where a loader is the best option. I’ve written about loaders versus spinners and their best use cases.
Loaders and spinners indeed play an important role in UI and UX design, but the arrival of Optimistic UI has stolen the spotlight.
What Is Optimistic UI?
Optimistic UI is a convention that simulates the results of a mutation (created data) and updates the UI before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.
In the end, an optimistic UI is nothing more than a way to manage perceived performance and avoid loading states.
Apollo Client has a very interesting way of integrating the Optimistic UI. It gives us a simple hook that allows us to write to the local cache after mutation. Let’s see how it works!
Step 1
Head over to client/src/client.js, and add only the highlighted code.
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'
import { setContext } from 'apollo-link-context'
import { ApolloLink } from 'apollo-link'

const http = new HttpLink({ uri: "http://localhost:4000/" });

const delay = setContext(
  request =>
    new Promise((success, fail) => {
      setTimeout(() => {
        success()
      }, 800)
    })
)

const link = ApolloLink.from([
  delay,
  http
])

const cache = new InMemoryCache()
const client = new ApolloClient({ link, cache })

export default client
The first step involves the following:
We import setContext from apollo-link-context. The setContext function takes a callback that returns a promise which resolves after a setTimeout of 800ms, in order to create a delay when a mutation operation is performed.
The ApolloLink.from method composes the delay link with the HTTP link, ensuring that all network activity that goes over the link (our API) is delayed.
Step 2
The next step is using the Optimistic UI hook. Slide back to client/src/pages/Pets.js, and add only the highlighted code below.
.....

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      const data = cache.readQuery({ query: GET_PETS });
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

  const onSubmit = input => {
    setModal(false)
    createPet({
      variables: { newPet: input },
      optimisticResponse: {
        __typename: 'Mutation',
        addPet: {
          __typename: 'Pet',
          id: Math.floor(Math.random() * 10000 + ''),
          name: input.name,
          type: input.type,
          img: 'https://via.placeholder.com/200'
        }
      }
    });
  }

.....
The optimisticResponse object is used if we want the UI to update immediately when we create a pet, instead of waiting for the server response.
The code snippets above include the following:
__typename is injected by Apollo into the query to fetch the type of the queried entities. Those types are used by Apollo Client to build the id property (which is a symbol) for caching purposes in apollo-cache. So, __typename is a valid property of the query response.
The mutation is set as the __typename of optimisticResponse.
Just as earlier defined, our mutation’s name is addPet, and the __typename is Pet.
Next are the fields of our mutation that we want the optimistic response to update:
id Because we don’t know what the ID from the server will be, we made one up using Math.floor.
name This value is set to input.name.
type The type’s value is input.type.
img Now, because our server generates images for us, we used a placeholder to mimic our image from the server.
This was indeed a long ride. If you got to the end, don’t hesitate to take a break from your chair with your cup of coffee.
Let’s take a look at our outcome. The supporting repository for this project is on GitHub. Clone and experiment with it.
Final result of our app.
Conclusion
The amazing features of Apollo Client, such as the Optimistic UI and pagination, make building client-side apps a reality.
While Apollo Client works very well with other frameworks, such as Vue.js and Angular, React developers have Apollo Client Hooks, and so they can’t help but enjoy building a great app.
In this article, we’ve only scratched the surface. Mastering Apollo Client demands constant practice. So, go ahead and clone the repository, add pagination, and play around with the other features it offers.
Please do share your feedback and experience in the comments section below. We can also discuss your progress on Twitter. Cheers!
0 notes
Text
Understanding Client-Side GraphQl With Apollo-Client In React Apps
About The Author
Blessing Krofegha is a Software Engineer Based in Lagos Nigeria, with a burning desire to contribute to making the web awesome for all, by writing and building … More about Blessing …
Ever tried interacting with a GraphQL server in a client-side application and felt like giving up even before getting anywhere? Ever declined an invitation to join a code base that requires working with GraphQL API because you had no idea? Ever felt like the only front-end engineer who hasn’t learned how to consume GraphQL APIs? If you answered yes to any of these questions, then this tutorial is for you. We’ll be taking a closer look at a few basics of GraphQL and Apollo Client, as well as how to work with both of them. By the end, we’ll have built a pet shop app that uses Apollo Client. Then, you can go on to build your next project.
According to State of JavaScript 2019, 38.7% of developers would like to use GraphQL, while 50.8% of developers would like to learn GraphQL.
Being a query language, GraphQL simplifies the workflow of building a client application. It removes the complexity of managing API endpoints in client-side apps because it exposes a single HTTP endpoint to fetch the required data. Hence, it eliminates overfetching and underfetching of data, as in the case of REST.
But GraphQL is just a query language. In order to use it easily, we need a platform that does the heavy lifting for us. One such platform is Apollo.
The Apollo platform is an implementation of GraphQL that transfers data between the cloud (the server) to the UI of your app. When you use Apollo Client, all of the logic for retrieving data, tracking, loading, and updating the UI is encapsulated by the useQuery hook (as in the case of React). Hence, data fetching is declarative. It also has zero-configuration caching. Just by setting up Apollo Client in your app, you get an intelligent cache out of the box, with no additional configuration required.
Apollo Client is also interoperable with other frameworks, such as Angular, Vue.js, and React.
Note: This tutorial will benefit those who have worked with RESTful or other forms of APIs in the past on the client-side and want to see whether GraphQL is worth taking a shot at. This means you should have worked with an API before; only then will you be able to understand how beneficial GraphQL could be to you. While we will be covering a few basics of GraphQL and Apollo Client, a good knowledge of JavaScript and React Hooks will come in handy.
GraphQL Basics
This article isn’t a complete introduction to GraphQL, but we will define a few conventions before continuing.
What Is GraphQL?
GraphQL is a specification that describes a declarative query language that your clients can use to ask an API for the exact data they want. This is achieved by creating a strong type schema for your API, with ultimate flexibility. It also ensures that the API resolves data and that client queries are validated against a schema. This definition means that GraphQL contains some specifications that make it a declarative query language, with an API that is statically typed (built around Typescript) and making it possible for the client to leverage those type systems to ask the API for the exact data it wants.
So, if we created some types with some fields in them, then, from the client-side, we could say, “Give us this data with these exact fields”. Then the API will respond with that exact shape, just as if we were using a type system in a strongly typed language. You can learn more in my Typescript article.
Let’s look at some conventions of GraphQl that will help us as we continue.
The Basics
Operations In GraphQL, every action performed is called an operation. There are a few operations, namely:
Query This operation is concerned with fetching data from the server. You could also call it a read-only fetch.
Mutation This operation involves creating, updating, and deleting data from a server. It is popularly called a CUD (create, update, and delete) operation.
Subscriptions This operation in GraphQL involves sending data from a server to its clients when specific events take place. They are usually implemented with WebSockets.
In this article, we will be dealing only with query and mutation operations.
Operation names There are unique names for your client-side query and mutation operations.
Variables and arguments Operations can define arguments, very much like a function in most programming languages. Those variables can then be passed to query or mutation calls inside the operation as arguments. Variables are expected to be given at runtime during the execution of an operation from your client.
Aliasing This is a convention in client-side GraphQL that involves renaming verbose or vague field names with simple and readable field names for the UI. Aliasing is necessary in use cases where you don’t want to have conflicting field names.
GraphQL basic conventions.
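Here is a small, hedged sketch pulling these conventions together: an operation name, a variable passed as an argument, and an alias. The pet(id:) field and the owner field are hypothetical and not part of the pet shop schema used later in this tutorial:

import gql from 'graphql-tag'

// Operation type: query. Operation name: getPetById.
// $id is a variable, declared with its type and passed as an argument.
// petOwner is an alias that renames the (hypothetical) owner field for the UI.
const GET_PET_BY_ID = gql`
  query getPetById($id: ID!) {
    pet(id: $id) {
      id
      name
      petOwner: owner
    }
  }
`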
What Is Client-Side GraphQL?
When a front-end engineer builds UI components using any framework, like Vue.js or (in our case) React, those components are modeled and designed from a certain pattern on the client to suit the data that will be fetched from the server.
One of the most common problems with RESTful APIs is overfetching and underfetching. This happens because the only way for a client to download data is by hitting endpoints that return fixed data structures. Overfetching in this context means that a client downloads more information than the app actually needs; underfetching means an endpoint doesn’t return enough, forcing the client to call additional endpoints.
In GraphQL, on the other hand, you’d simply send a single query to the GraphQL server that includes the required data. The server would then respond with a JSON object of the exact data you’ve requested — hence, no overfetching. Sebastian Eschweiler explains the differences between RESTful APIs and GraphQL.
Client-side GraphQL is a client-side infrastructure that interfaces with data from a GraphQL server to perform the following functions:
It manages data by sending queries and mutating data without you having to construct HTTP requests all by yourself. You can spend less time plumbing data and more time building the actual application.
It manages the complexity of a cache for you. So, you can store and retrieve the data fetched from the server, without any third-party interference, and easily avoid refetching duplicate resources. It identifies when two resources are the same, which is great for a complex app (see the sketch after this list).
It keeps your UI consistent with Optimistic UI, a convention that simulates the results of a mutation (i.e. the created data) and updates the UI even before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.
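On the point about identifying identical resources: by default, Apollo Client’s in-memory cache normalizes objects under a key built from __typename and id. The snippet below is only a sketch of how you could customize that key with dataIdFromObject; the default behaves in roughly the same way, so you rarely need to configure this yourself.

import { InMemoryCache } from 'apollo-cache-inmemory'

// By default the cache key is roughly `${__typename}:${id}`.
// dataIdFromObject lets us override how that key is derived.
const cache = new InMemoryCache({
  dataIdFromObject: object =>
    object.id ? `${object.__typename}:${object.id}` : null
})

Either way, a freshly fetched pet and an updated pet with the same id map to the same cache entry, which is how Apollo avoids refetching duplicates.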
For further information about client-side GraphQL, spare an hour with the cocreator of GraphQL and other cool folks on GraphQL Radio.
What Is Apollo Client?
Apollo Client is an interoperable, ultra-flexible, community-driven GraphQL client for JavaScript and native platforms. Its impressive features include a composable networking layer (Apollo Link), a zero-config caching system, a declarative approach to fetching data, easy-to-implement pagination, and the Optimistic UI for your client-side application.
Apollo Client stores not only the state from the data fetched from the server, but also the state that it has created locally on your client; hence, it manages state for both API data and local data.
It’s also important to note that you can use Apollo Client alongside other state-management tools, like Redux, without conflict. Plus, it’s possible to migrate your state management from, say, Redux to Apollo Client (which is beyond the scope of this article). Ultimately, the main purpose of Apollo Client is to enable engineers to query data in an API seamlessly.
Features of Apollo Client
Apollo Client has won over so many engineers and companies because of its extremely helpful features that make building modern robust applications a breeze. The following features come baked in:
Caching Apollo Client supports caching on the fly.
Optimistic UI Apollo Client has cool support for the Optimistic UI. It involves temporarily displaying the final state of an operation (mutation) while the operation is in progress. Once the operation is complete, the real data replaces the optimistic data.
Pagination Apollo Client has built-in functionality that makes it quite easy to implement pagination in your application. It takes care of most of the technical headaches of fetching a list of data, either in patches or at once, using the fetchMore function, which comes with the useQuery hook (a sketch follows right after this list).
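As a hedged sketch of what pagination with fetchMore can look like, assuming a pets field that accepts offset and limit arguments (the tutorial’s server may not actually expose these):

import React from 'react'
import gql from 'graphql-tag'
import { useQuery } from '@apollo/react-hooks'

const GET_PETS_PAGE = gql`
  query getPetsPage($offset: Int!, $limit: Int!) {
    pets(offset: $offset, limit: $limit) {
      id
      name
    }
  }
`

function PaginatedPets () {
  const { loading, data, fetchMore } = useQuery(GET_PETS_PAGE, {
    variables: { offset: 0, limit: 10 }
  })

  if (loading) return <p>Loading...</p>

  const loadMore = () =>
    fetchMore({
      variables: { offset: data.pets.length },
      updateQuery: (prev, { fetchMoreResult }) => {
        if (!fetchMoreResult) return prev
        // Append the new page to the pets we already have.
        return { ...prev, pets: [...prev.pets, ...fetchMoreResult.pets] }
      }
    })

  return (
    <div>
      {data.pets.map(pet => <p key={pet.id}>{pet.name}</p>)}
      <button onClick={loadMore}>Load more</button>
    </div>
  )
}

export default PaginatedPets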
In this article, we will look at a selection of these features.
Enough of the theory. Tighten your seat belt and grab a cup of coffee to go with your pancakes, as we get our hands dirty.
Building Our Web App
This project is inspired by Scott Moss.
We will be building a simple pet shop web app, whose features include:
fetching our pets from the server-side;
creating a pet (which involves creating the name, type of pet, and image);
using the Optimistic UI;
using pagination to segment our data.
To begin, clone the repository, ensuring that the starter branch is what you’ve cloned.
Getting Started
Install the Apollo Client Developer Tools extension for Chrome.
Using the command-line interface (CLI), navigate to the directory of the cloned repository, and run the command to get all dependencies: npm install.
Run the command npm run app to start the app.
While still in the root folder, run the command npm run server. This will start our back-end server for us, which we’ll use as we proceed.
The app should open up on a preconfigured port. Mine is http://localhost:1234/; yours is probably something else.
If everything worked well, your app should look like this:
Cloned starter branch UI.
You’ll notice that we’ve got no pets to display. That’s because we haven’t created such functionality yet.
If you’ve installed Apollo Client Developer Tools correctly, open up the developer tools and click on the tray icon. You’ll see “Apollo” and something like this:
Apollo Client Developer Tools.
Like the Redux and React developer tools, we will be using Apollo Client Developer Tools to write and test our queries and mutations. The extension comes with the GraphQL Playground.
Fetching Pets
Let’s add the functionality that fetches pets. Move over to client/src/client.js. We’ll be writing Apollo Client, linking it to an API, exporting it as a default client, and writing a new query.
Copy the following code and paste it in client.js:
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'

// Point the HTTP link at the local GraphQL server started by `npm run server`.
const link = new HttpLink({ uri: 'http://localhost:4000/' })

// Normalized in-memory cache used by Apollo Client.
const cache = new InMemoryCache()

const client = new ApolloClient({ link, cache })

export default client
Here’s an explanation of what is happening above:
ApolloClient This will be the function that wraps our app and, thus, interfaces with the HTTP, caches the data, and updates the UI.
InMemoryCache This is the normalized data store in Apollo Client that helps with manipulating the cache in our application.
HttpLink This is a standard network interface for modifying the control flow of GraphQL requests and fetching GraphQL results. It acts as middleware, fetching results from the GraphQL server each time the link is fired. Plus, it’s a good substitute for other options, like Axios and window.fetch.
We declare a link variable that is assigned to an instance of HttpLink. It takes a uri property whose value points to our server, which is http://localhost:4000/.
Next is a cache variable that holds the new instance of InMemoryCache.
The client variable also takes an instance of ApolloClient and wraps the link and cache.
Lastly, we export the client so that we can use it across the application.
Before we get to see this in action, we’ve got to make sure that our entire app is exposed to Apollo and that our app can receive data fetched from the server and that it can mutate that data.
To achieve this, let’s head over to client/src/index.js:
import React from 'react' import ReactDOM from 'react-dom' import { BrowserRouter } from 'react-router-dom' import { ApolloProvider } from '@apollo/react-hooks' import App from './components/App' import client from './client' import './index.css' const Root = () => ( <BrowserRouter> <ApolloProvider client={client}> <App /> </ApolloProvider> </BrowserRouter> ); ReactDOM.render(<Root />, document.getElementById('app')) if (module.hot) { module.hot.accept() }
As you’ll notice in the highlighted code, we’ve wrapped the App component in ApolloProvider and passed our client instance to its client prop. ApolloProvider is similar to React’s Context.Provider: it wraps your React app and places the client in context, which allows you to access it from anywhere in your component tree.
To fetch our pets from the server, we need to write queries that request the exact fields that we want. Head over to client/src/pages/Pets.js, and copy and paste the following code into it:
import React, { useState } from 'react'
import gql from 'graphql-tag'
import { useQuery, useMutation } from '@apollo/react-hooks'
import PetsList from '../components/PetsList'
import NewPetModal from '../components/NewPetModal'
import Loader from '../components/Loader'

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

export default function Pets () {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);

  if (loading) return <Loader />;
  if (error) return <p>An error occurred!</p>;

  const onSubmit = input => {
    setModal(false)
  }

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}
With a few bits of code, we are able to fetch the pets from the server.
What Is gql?
It’s important to note that GraphQL operations are not JSON objects; they are strings written in the GraphQL query language, wrapped with the graphql-tag (gql) template literal tag using backticks.
gql tags are JavaScript template literal tags that parse GraphQL query strings into the GraphQL AST (abstract syntax tree).
Query operations In order to fetch our pets from the server, we need to perform a query operation.
Because we’re making a query operation, we need to specify the operation type (query) before naming it.
Our query operation is named getPets and assigned to the constant GET_PETS. It’s a GraphQL naming convention to use camelCase for field names.
The field we request is pets, and inside it we specify the exact subfields we need from the server (id, name, type, img).
useQuery is a React hook that is the basis for executing queries in an Apollo application. To perform a query operation in our React component, we call the useQuery hook, which was initially imported from @apollo/react-hooks. Next, we pass it a GraphQL query string, which is GET_PETS in our case.
When our component renders, useQuery returns an object response from Apollo Client that contains loading, error, and data properties. Thus, they are destructured, so that we can use them to render the UI.
useQuery is awesome. We don’t have to write async/await ourselves; it’s already taken care of in the background. Pretty cool, isn’t it?
loading This property helps us handle the loading state of the application. In our case, we return a Loader component while our application loads. loading is true while the request is in flight and becomes false once the response (or an error) arrives.
error Just in case, we use this property to handle any error that might occur.
data This contains our actual data from the server.
Lastly, we pass data.pets to our PetsList component as its pets prop.
At this point, we have successfully queried our server.
To start our application, let’s run the following commands:
Start the client app. Run the command npm run app in your CLI.
Start the server. Run the command npm run server in another CLI.
VS Code CLI partitioned to start both the client and the server.
If all went well, you should see this:
Pets queried from the server.
Mutating Data
Mutating data or creating data in Apollo Client is almost the same as querying data, with very slight changes.
Still in client/src/pages/Pets.js, let’s copy and paste the highlighted code:
....

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

const NEW_PETS = gql`
  mutation CreateAPet($newPet: NewPetInput!) {
    addPet(input: $newPet) {
      id
      name
      type
      img
    }
  }
`;

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS);

  const onSubmit = input => {
    setModal(false)
    createPet({ variables: { newPet: input } });
  }

  if (loading || newPet.loading) return <Loader />;
  if (error || newPet.error) return <p>An error occurred</p>;

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}

export default Pets
To create a mutation, we would take the following steps.
1. mutation
To create, update, or delete, we need to perform a mutation operation. Our mutation operation is named CreateAPet and takes one argument: a $newPet variable of type NewPetInput. The ! means the argument is required (non-nullable); GraphQL won’t execute the operation unless we pass a newPet variable whose type is NewPetInput.
2. addPet
The addPet field inside the mutation operation takes an input argument, which we set to our $newPet variable. The selection set we request on addPet should match the field set in our query, so that the response can be used to update the cached list. The fields in our operation are:
id
name
type
img
3. useMutation
The useMutation React hook is the primary API for executing mutations in an Apollo application. When we need to mutate data, we call useMutation in a React component and pass it a GraphQL string (in our case, NEW_PETS).
When our component renders, useMutation returns a tuple (an array whose elements have a fixed order) that includes:
a mutate function that we can call at any time to execute the mutation;
an object with fields that represent the current status of the mutation’s execution.
The useMutation hook is passed a GraphQL mutation string (which is NEW_PETS in our case). We destructure the tuple into the mutate function (createPet) and the result object (newPet).
4. createPet
In our onSubmit function, right after setting the modal state, we call createPet. The call is given a variables object whose value is set to { newPet: input }, where input represents the various input fields in our form (such as name, type, and so on).
With that done, the outcome should look like this:
Mutation without instant update.
If you observe the GIF closely, you’ll notice that our created pet doesn’t show up instantly, only when the page is refreshed. However, it has been updated on the server.
The big question is, why doesn’t our pet update instantly? Let’s find out in the next section.
Caching In Apollo Client
The reason our app doesn’t update automatically is that our newly created data doesn’t match any entry in Apollo Client’s cache, so Apollo Client doesn’t know which cached query it should update with the new pet.
Simply put, if we perform a mutation that creates, updates, or deletes a node, we are responsible for updating any queries referencing that node, so that our cached data matches the modifications the mutation makes to our back-end data.
Keeping Cache In Sync
There are a few ways to keep our cache in sync each time we perform a mutation operation.
The first is by refetching matching queries after a mutation, using the refetchQueries object property (the simplest way).
Note: If we were to use this method, it would take an object property in our createPet function called refetchQueries, and it would contain an array of objects with a value of the query: refetchQueries: [{ query: GET_PETS }].
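For completeness, a minimal sketch of that simpler approach (which we won’t use here) would look like this:

createPet({
  variables: { newPet: input },
  // Refetch the pets list once the mutation completes.
  refetchQueries: [{ query: GET_PETS }]
});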
Because our focus in this section isn’t just to update our created pets in the UI, but to manipulate the cache, we won’t be using this method.
The second approach is to use the update function. In Apollo Client, there’s an update helper function that helps modify the cache data, so that it syncs with the modifications that a mutation makes to our back-end data. Using this function, we can read and write to the cache.
Updating The Cache
Copy the following highlighted code, and paste it in client/src/pages/Pets.js:
......

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      // Read the current pets list from the cache...
      const data = cache.readQuery({ query: GET_PETS });
      // ...and write it back with the newly created pet at the front.
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

.....
The update function receives two arguments:
The first argument is the cache from Apollo Client.
The second is the exact mutation response from the server. We destructure its data property to get the result of our mutation (addPet).
Next, to update the cache, we identify which query needs to be updated (in our case, the GET_PETS query) and read its current result from the cache with readQuery.
Then we write back to that same query with writeQuery, passing an object that contains the query (GET_PETS) and a data property whose pets array holds the addPet result followed by a copy of the existing pets.
If you followed these steps carefully, you should see your pets update automatically as you create them. Let’s take a look at the changes:
Pets update instantly.
Optimistic UI
A lot of people are big fans of loaders and spinners. There’s nothing wrong with using a loader; there are perfect use cases where a loader is the best option. I’ve written about loaders versus spinners and their best use cases.
Loaders and spinners indeed play an important role in UI and UX design, but the arrival of Optimistic UI has stolen the spotlight.
What Is Optimistic UI?
Optimistic UI is a convention that simulates the results of a mutation (created data) and updates the UI before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.
In the end, an optimistic UI is nothing more than a way to manage perceived performance and avoid loading states.
Apollo Client has a very interesting way of integrating the Optimistic UI. It gives us a simple option that lets us write an expected result to the local cache before the mutation response arrives. Let’s see how it works!
Step 1
Head over to client/src/client.js, and add only the highlighted code.
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'
import { setContext } from 'apollo-link-context'
import { ApolloLink } from 'apollo-link'

const http = new HttpLink({ uri: "http://localhost:4000/" });

// Artificial 800ms delay on every operation, so the optimistic update is visible.
const delay = setContext(
  request =>
    new Promise((success, fail) => {
      setTimeout(() => {
        success()
      }, 800)
    })
)

const link = ApolloLink.from([
  delay,
  http
])

const cache = new InMemoryCache()

const client = new ApolloClient({ link, cache })

export default client
The first step involves the following:
We import setContext from apollo-link-context. The setContext function takes a callback and returns a link; the callback returns a promise that resolves only after an 800ms setTimeout, which adds an artificial delay before each operation is sent.
The ApolloLink.from method composes the delay link with the HTTP link, so every request to our API is delayed before it goes over HTTP.
Step 2
The next step is using the Optimistic UI hook. Slide back to client/src/pages/Pets.js, and add only the highlighted code below.
.....

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      const data = cache.readQuery({ query: GET_PETS });
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

  const onSubmit = input => {
    setModal(false)
    createPet({
      variables: { newPet: input },
      optimisticResponse: {
        __typename: 'Mutation',
        addPet: {
          __typename: 'Pet',
          // Temporary client-side ID; the real one arrives with the server response.
          id: Math.floor(Math.random() * 10000) + '',
          name: input.name,
          type: input.type,
          img: 'https://via.placeholder.com/200'
        }
      }
    });
  }

.....
The optimisticResponse object is used if we want the UI to update immediately when we create a pet, instead of waiting for the server response.
The code snippets above include the following:
__typename is injected by Apollo into every query so that it knows the type of each queried entity. Apollo Client combines __typename with the id field to build the cache key used by apollo-cache-inmemory. So, __typename is a valid property of the query response.
The __typename of optimisticResponse is set to Mutation.
Just as earlier defined, our mutation’s name is addPet, and the __typename is Pet.
Next are the fields of our mutation that we want the optimistic response to update:
id Because we don’t know what the ID from the server will be, we made one up using Math.floor.
name This value is set to input.name.
type The type’s value is input.type.
img Now, because our server generates images for us, we used a placeholder to mimic our image from the server.
This was indeed a long ride. If you got to the end, don’t hesitate to take a break from your chair with your cup of coffee.
Let’s take a look at our outcome. The supporting repository for this project is on GitHub. Clone and experiment with it.
Final result of our app.
Conclusion
The amazing features of Apollo Client, such as the Optimistic UI and pagination, make building client-side apps far simpler.
While Apollo Client works very well with other frameworks, such as Vue.js and Angular, React developers get first-class support through its hooks, which makes building a great app that much more enjoyable.
In this article, we’ve only scratched the surface. Mastering Apollo Client demands constant practice. So, go ahead and clone the repository, add pagination, and play around with the other features it offers.
Please do share your feedback and experience in the comments section below. We can also discuss your progress on Twitter. Cheers!
source http://www.scpie.org/understanding-client-side-graphql-with-apollo-client-in-react-apps/
0 notes
Text
Inside Jidometa: Caching JSON responses
[Read part 1 of Inside Jidometa]
We recently implemented a feature which greatly improves the browser experience in Jidoteki Meta (Jidometa).
In the past...
The Jidoteki Meta UI is a simple JavaScript layer on top of a PicoLisp REST API. The API returns JSON responses for every successful request, but occasionally we would see rather slow navigation between sections of the UI.
Certain responses tend to take more time as the size of the data increases. There is no way around this, since many tables and entries need to be queried on every request. Or so we thought...
The present...
To “solve” this problem, we decided caching responses for common GET requests would be a good approach.
Our idea was to only cache the most queried items (the list of all builds, and the details for each build). By caching this data, it makes it possible to quickly navigate between different builds and the full list of builds, without making any calls to the API.
Caching gotchas
Before implementing any caching solution, it’s important to fully understand the gotchas that accompany it. In most cases, the most important requirement is to never serve stale data. Invalidating the cache requires us to ensure that any cached elements affected by a change are immediately evicted.
Another gotcha, is avoiding the temptation to add yet another layer of complexity to the stack. We always aim to do things as simply as possible with the least amount of software and overhead.
We needed the ability to easily disable caching or wipe the cache if needed, and we needed to ensure the browser’s own caching features didn’t interfere with ours.
Caching as simply as possible
Since our Jidometa virtual appliance (as well as our customer appliances) run entirely in memory, it was easy for us to assign a temporary directory for storing cached responses.
We store responses in /tmp/jidometa/cache/ - which is not actually “on disk”, but rather an in-memory filesystem (tmpfs). Those responses are regenerated on every GET request to the API. Fortunately there’s no overhead for that.
Each cached response is a simple .json file which is absolutely identical to what would have been served by the API. The files are named according to the build ID they represent, so there’s only ever one cached response per build.
It’s hard to get any simpler than that.
Serving the cached responses
We use Nginx as a frontend for static files and the API. Below are examples of the two Nginx directives we added for serving the above cached files:
location /cache/ {
    add_header Cache-Control no-cache;
    alias /tmp/jidometa/cache/;
    try_files $uri @nocache;
}

location @nocache {
    rewrite /cache/builds/(.*)-details.json /builds/$1/details break;
    rewrite /cache/builds.json /builds break;
    proxy_pass http://jidometa;
}
As an example, a request will first try to serve the /cache/builds.json from /tmp/jidometa/cache/builds.json - if the file doesn’t exist, it tries the @nocache directive. That directive will then send the request to /builds on the API, which will return a fresh response (and generate a new cached builds.json).
We also send the Cache-Control: no-cache HTTP header to ensure browsers don’t cache the cached response... but here’s the thing, browsers WILL cache the response regardless, except the no-cache directive will force the browser to use the Last-Modified or ETag headers in future requests, to revalidate whether the cached entry was changed or not. Well, assuming browsers handle those headers according to the RFC specs ;)
If the cached file hasn’t changed, the browser won’t fetch the entire file again.
Invalidating the cache
Builds are linked to their parent build (if any), as well as their child builds (if any).
When a certain build is modified (ex: the build’s status is changed from unreleased -> released), its cached response needs to be invalidated. We decided the simplest and easiest way to invalidate a cached entry is to delete the build’s cached response, and all the other responses linked to it (its parent and children).
Here’s the code we use for invalidation (PicoLisp):
(de remove-cached-file (Filename)
   (call 'rm "-f" (pack "/tmp/jidometa/cache/" Filename)) )

(de remove-cached-build (Build)
   (remove-cached-file (pack "builds/" Build "-details.json")) )

(de remove-all-cached-builds (Builds)
   (mapcar
      '((N) (remove-cached-build (; N 1)) )
      Builds ) )
A call to (remove-all-cached-builds) with the list of Builds as the only argument will handle removing each cached response individually. What’s nice is we can also use (remove-cached-file "builds.json") to remove the entire builds list from the cache.
Of course, we wrapped all those calls in a simple (invalidate-cache) function which can be called anywhere in our code; anywhere that updates an existing build, or inserts a new one.
If a build has one parent and three child builds, then a total of six cached responses will be removed from /tmp/jidometa/cache/, including the builds.json (list of all builds). This is quite an aggressive cache invalidation strategy, but it ensures there are never any stale responses, and a new cached response will be regenerated as soon as a request is made for the “missing” file. Easy.
Issues with caching
We have yet to encounter any issues with cached responses, but if they do occur, it’s quite simple to identify exactly which response is problematic, as well as delete it manually from the cache directory. It’s also simple to completely wipe the cache from the system and start over from scratch. This happens after reboot anyways.
The main issue with a fresh cache is the first page load will be slightly slower than the others. It might take a few clicks to “warm” the cache with fresh responses, but considering the use-case for Jidometa (on-prem virtual appliance), we can live with that. Had there been a requirement to support hundreds of concurrent users, we would have considered an in-memory cache with persistence, such as Redis... maybe.
In conclusion
I would go into further detail, but really there’s nothing more to it. Serving cached JSON responses the way we do is somewhat of a “poor man’s solution”, but the lack of overhead and the simplicity of the implementation are a net win for everyone.
Stay tuned for part 3, where we’ll provide more details regarding the Jidometa internals.
0 notes