#graphql apollo server tutorial
GraphQL Client Side & Server-Side Components Explained with Examples for API Developers
Full video link: https://youtube.com/shorts/nezkbeJlAIk. A new video on GraphQL mutation, published on the CodeOneDigest YouTube channel.
Let’s understand the GraphQL components and the way they communicate with each other. An application’s components can be categorized into server-side and client-side components. Server-side components: the GraphQL server forms the core component on the server side and parses the queries coming from GraphQL client applications. Apollo Server is the most commonly used implementation of…

Node.js and GraphQL: An Introduction to Building APIs
Introduction GraphQL is a query language for APIs developed by Facebook in 2012 and released as an open-source project in 2015. It provides a more efficient, powerful, and flexible alternative to REST. With GraphQL, clients can request exactly the data they need, making it easier to evolve APIs over time. Node.js, with its asynchronous nature and powerful libraries, is an excellent choice for…
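The "exactly the data they need" idea can be sketched without any GraphQL library at all. Here is a minimal, library-free sketch; all names in it are illustrative, not part of any real API:

```javascript
// A resolver produces a full record; the server projects it down to
// the requested selection set, so the client gets only what it asked for.
const user = { id: 1, name: "Ada", email: "ada@example.com" };

// Project a record down to the fields the client selected.
function select(record, fields) {
  return Object.fromEntries(fields.map((field) => [field, record[field]]));
}

// A query selecting only { id name } gets back only id and name.
const result = select(user, ["id", "name"]);
console.log(result);
```

This is the core contrast with REST, where an endpoint returns a fixed payload whether or not the client needs every field.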
Mastering GraphQL Subscriptions: A Guide to Streaming Data
1. Introduction GraphQL subscriptions enable real-time data updates, crucial for applications like live feeds or collaborative tools. This tutorial guides you through implementing GraphQL subscriptions, from setup to optimization. What You Will Learn Implement subscriptions using Apollo Server and Client. Handle real-time updates efficiently. Best practices for performance and…
Node.js REST API

REST stands for REpresentational State Transfer. It is a web-standards-based architecture that uses the HTTP protocol, and it revolves around resources: every component is a resource, and each resource is accessed through a common interface using standard HTTP methods. A RESTful API is an Application Programming Interface (API) that uses HTTP verbs like GET, PUT, POST, and DELETE to operate on data. You can use a standard RESTful API or GraphQL to query your content; based on Apollo, our GraphQL plugin offers full compatibility with the whole of GraphQL.

Node.js is a server-side platform built on Google Chrome's JavaScript engine. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices, and it is used for building fast and scalable network applications. If you want to create routes in a simple and easy manner without understanding much about Node.js internals, you can go with Express, a Node.js web application framework; MySQL and MongoDB are databases used to manage the data.

In this course, you will learn to set up Node.js, learn about its important modules, events (the event loop and EventEmitter), and streams and stream pipes. After Node, you will learn about Express and create a REST API using Express with both MySQL and MongoDB, then test and verify the API using Postman, an API testing tool with wide support for all possible HTTP methods. You will also create and configure middleware to add things like logging, authentication and authorization, and set up an API Gateway for the REST API, integrating the gateway to route requests and keep track of everything. At last, you will learn to build and deploy a serverless application on AWS Lambda.

To scaffold the project:

STEP 1: Create a separate folder and, with the help of a terminal or command prompt, navigate to this folder.

STEP 2: Create package.json by typing the following command in the terminal: npm init -y (checking your installation will show the installed version of Node.js on your system, as shown in the screenshot below).

In this tutorial, we are going to create a pretty common (and very practical) secure REST API for a resource called users.
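The resource-plus-HTTP-verbs idea above can be sketched without any framework. This is a minimal sketch with illustrative routes and data; Express performs this same dispatch for you with app.get() and app.post():

```javascript
// In-memory "users" resource.
const users = [{ id: 1, name: "Ada" }];

// Map (HTTP verb, path) pairs onto handlers.
const routes = {
  "GET /users": () => users,
  "POST /users": (body) => {
    const user = { id: users.length + 1, ...body };
    users.push(user);
    return user;
  },
};

// Dispatch a request to its handler, or return a 404 shape.
function handle(method, path, body) {
  const handler = routes[`${method} ${path}`];
  return handler ? handler(body) : { status: 404 };
}

const created = handle("POST", "/users", { name: "Grace" });
const listed = handle("GET", "/users");
```

The verb carries the intent (read vs. create), while the path names the resource; that separation is what makes a REST API predictable.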


💮Happy Friday💮 Today I worked with a simple gauge chart using d3. I had never used d3 before, so I watched some tutorials and went through many CodePens to understand how it works. It was a simple chart, so it didn't take much time. I also worked with intro.js to implement a small introduction on a client's website. It went pretty well. This weekend I'm going to spend my time on GraphQL with Node.js and Apollo Server. I'm going to use that in an upcoming project but I don't know anything about it yet. I checked out the docs and it looks pretty cool. What did you work on today? What are your plans for this weekend? Let me know in the comments! Follow @codingwitharman for daily posts✔️ Happy Coding 👨💻 👩💻 https://www.instagram.com/p/CB6C81UAAFG/?igshid=t3gu56j58jab
Roll Your Own Comments With Gatsby and FaunaDB
If you haven’t used Gatsby before have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before you’re in for a treat. If you’re looking to make your static sites full blown Jamstack applications this is the back end solution for you!
This tutorial will only focus on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with inputs fields that allow users to comment on your posts and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there's a very generous free tier, perfect for those small projects that need a back end; there's native support for GraphQL queries; and it has some really powerful indexing features!
…and to quote the creators:
No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you
You can see the finished comments app here.
Get Started
To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments
or:
git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git
Then install all the dependencies:
npm install
Also cd in to functions/apollo-graphql and install the dependencies for the Netlify function:
npm install
This is a separate package and has its own dependencies, you’ll be using this later.
We also need to install the Netlify CLI as you’ll also use this later:
npm install netlify-cli -g
Now let's add three new files that aren't part of the repo.
At the root of your project create a .env, a .env.development and a .env.production file.
Add the following to .env:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
Add the following to .env.development:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =
Add the following to .env.production:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =
You'll come back to these later, but in case you're wondering:
GATSBY_FAUNA_DB is the FaunaDB secret key for your database
GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you
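As a hedged sketch of how a flag like GATSBY_SHOW_SIGN_UP gates the UI: Gatsby exposes GATSBY_-prefixed variables on process.env, and environment values are always strings, so the comparison must be explicit:

```javascript
// Simulate what .env.production would provide at build time.
process.env.GATSBY_SHOW_SIGN_UP = "false";

// Env values are strings, so compare against "true" explicitly.
const showSignUp = process.env.GATSBY_SHOW_SIGN_UP === "true";
console.log(showSignUp); // false, so the Sign up button would be hidden
```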
If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.
FaunaDB
So let's get cracking! But before we write any operations, head over to https://fauna.com/ and sign up.
Database and Collection
Create a new database by clicking NEW DATABASE
Name the database: I’ve called the demo database fauna-gatsby-comments
Create a new Collection by clicking NEW COLLECTION
Name the collection: I’ve called the demo collection demo-blog-comments

Server Key
Now you'll need to set up a server key. Go to SECURITY.
Create a new key by clicking NEW KEY
Select the database you want the key to apply to, fauna-gatsby-comments for example
Set the Role as Admin
Name the server key: I’ve called the demo key demo-blog-server-key

Environment Variables Pt. 1
Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.
You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.
Adding these values to .env is just so you can test your development FaunaDB operations, which you'll do next.
Let’s start by creating a comment so head back to boop.js:
// boop.js
...
// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"

  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},
...
The breakdown of this function is as follows:
q is the instance of faunadb.query
Create is the FaunaDB method to create an entry within a collection
Collection is the area in the database where the data is stored; Create takes the collection as its first argument and a data object as the second.
The second argument is the shape of the data you need to drive the application's comment system.
For now you're going to hard-code slug, name and comment, but in the final app these values are captured by the input form on the posts page and passed in via args.
The breakdown for the shape is as follows:
isApproved is the status of the comment and by default it’s false until we approve it in the Admin page
slug is the path to the post where the comment was written
date is the time stamp the comment was written
name is the name the user entered in the comments form
comment is the comment the user entered in the comments form
When you (or a user) create a comment, you're not really interested in dealing with the response, because as far as the user is concerned all they'll see is either a success or an error message.
After a user has posted a comment it will go into your Admin queue until you approve it, but if you did want to return something, you could surface it in the UI by returning a value from the createComment function.
Create a comment
If you've hard-coded a slug, name and comment, you can now run the following in your CLI:
node boop createComment
If everything worked correctly you should see a log in your terminal of the new comment.
{ "ref": { "@ref": { "id": "263413122555970050", "collection": { "@ref": { "id": "demo-blog-comments", "collection": { "@ref": { "id": "collections" } } } } } }, "ts": 1587469179600000, "data": { "isApproved": false, "slug": "/posts/some-post", "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)", "name": "some name", "comment": "some comment" } } { commentId: '263413122555970050' }
If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.
node boop createComment
Do this a few times so you end up with at least three new comments stored in the database, you’ll use these in a moment.
Delete comment by id
Now that you can create comments you’ll also need to be able to delete a comment.
By adding the commentId of one of the comments you created above, you can delete it from the database. The commentId is the id in the ref.@ref object.
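Here is a small sketch of pulling that id out of the serialized response shown earlier. Note that in boop.js the driver object exposes results.ref.id directly; this sketch works on the logged JSON shape instead:

```javascript
// A trimmed-down version of the logged FaunaDB response from above.
const logged = {
  ref: { "@ref": { id: "263413122555970050" } },
};

// The commentId lives at ref["@ref"].id in the serialized form.
const commentId = logged.ref["@ref"].id;
console.log(commentId);
```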
Again you’re not really concerned with the return value here but if you wanted to surface this in the UI you could do so by returning something from the deleteCommentById function.
// boop.js
...
// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";

  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Delete is the FaunaDB delete method to delete entries from a collection
Ref is the unique FaunaDB ref used to identify the entry
Collection is the area in the database where the data is stored
If you’ve hard coded a commentId you can now run the following in your CLI:
node boop deleteCommentById
If you head back over to COLLECTIONS in FaunaDB you should see that the entry no longer exists in the collection.
Indexes
Next you’re going to create an INDEX in FaunaDB.
An INDEX allows you to query the database with a specific term and define a specific data shape to return.
When working with GraphQL and/or TypeScript this is really powerful, because you can use FaunaDB indexes to return only the data you need, in a predictable shape. This makes typing responses in GraphQL and/or TypeScript a dream… I've worked on a number of applications that just return a massive object of useless values, which will inevitably cause bugs in your app. Blurg!
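Conceptually (this is only an illustration, not FaunaDB's real storage format), an index projects each document onto the tuple of "values" you configure, which is why queries against it return a predictable array shape:

```javascript
// Two illustrative documents as they might sit in a collection.
const documents = [
  { ref: "1", data: { isApproved: false, slug: "/posts/a", date: "d1", name: "Ada", comment: "hi" } },
  { ref: "2", data: { isApproved: true, slug: "/posts/b", date: "d2", name: "Lin", comment: "yo" } },
];

// The "values" you configure on the index...
const values = ["isApproved", "slug", "date", "name", "comment"];

// ...define the tuple each entry is projected onto: [ref, ...values].
const index = documents.map((doc) => [doc.ref, ...values.map((v) => doc.data[v])]);
console.log(index[0]);
```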

Go to INDEXES and click NEW INDEX
Name the index: I’ve called this one get-all-comments
Set the source collection to the name of the collection you set up earlier
As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.
You can do this by adding "values", but be careful to enter the values exactly as they appear below, because (on the FaunaDB free tier) you can't amend these after you've created them; if there's a mistake you'll have to delete the index and start again… bummer!
The values you need to add are as follows:
ref
data.isApproved
data.slug
data.date
data.name
data.comment
After you’ve added all the values you can click SAVE.
Get all comments
// boop.js
...
// GET ALL COMMENTS
getAllComments: async () => {
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-all-comments")))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Paginate paginates the responses
Match returns matched results
Index is the name of the Index you just created
The shape of the returned result here is an array of the same shape you defined in the Index “values”
If you run the following you should see the list of all the comments you created earlier:
node boop getAllComments
Get comments by slug
You’re going to take a similar approach as above but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that when you get-comments-by-slug you’ll need to tell FaunaDB about this specific term and you can do this by adding data.slug to the Terms field.

Go to INDEX and click NEW INDEX
Name the index, I’ve called this one get-comments-by-slug
Set the source collection to the name of the collection you set up earlier
Add data.slug in the terms field
The values you need to add are as follows:
ref
data.isApproved
data.slug
data.date
data.name
data.comment
After you’ve added all the values you can click SAVE.
// boop.js
...
// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";

  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Paginate paginates the responses
Match returns matched results
Index is the name of the Index you just created
The shape of the returned result here is an array of the same shape you defined in the Index "values". You can create this shape in the same way you did above, but be sure to also add a value for the terms. Again, be careful to enter these with care.
If you run the following you should see the list of all the comments you created earlier but for a specific slug:
node boop getCommentsBySlug
Approve comment by id
When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.
You’ll now need to create a function to do this but you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:
// boop.js
...
// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'

  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    isApproved: results.isApproved,
  };
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Update is the FaunaDB method to update an entry
Ref is the unique FaunaDB ref used to identify the entry
Collection is the area in the database where the data is stored
If you’ve hard coded a commentId you can now run the following in your CLI:
node boop approveCommentById
If you run getCommentsBySlug again, you should now see that the isApproved status of the entry whose commentId you hard-coded has changed to true.
node boop getCommentsBySlug
These are all the operations required to manage the data from the app.
In the repo, if you have a look at apollo-graphql.js (found in functions/apollo-graphql) you'll see all of the above operations. As mentioned before, the hard-coded values are replaced by args: these are the values passed in from various parts of the app.
Netlify
Assuming you’ve completed the Netlify sign up process or already have an account with Netlify you can now push the demo app to your GitHub account.
To do this you'll need to have initialized git locally, added a remote, and pushed the demo repo upstream before proceeding.
You should now be able to link the repo up to Netlify’s Continuous Deployment.
If you click the "New site from Git" button on the Netlify dashboard you can authorize access to your GitHub account and select the gatsby-fauna-comments repo to enable Netlify's Continuous Deployment. You'll need to have deployed at least once so that you have a public URL for your app.
The URL will look something like this https://ecstatic-lewin-b1bd17.netlify.app but feel free to rename it and make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2
In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You'll also need to add the same to Netlify's Environment variables.
Navigate to Settings from the Netlify navigation
Click on Build and deploy
Either select Environment or scroll down until you see Environment variables
Click on Edit variables
Proceed to add the following:
GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name
While you’re here you’ll also need to amend the Sensitive variable policy, select Deploy without restrictions
Netlify Identity Widget
I mentioned before that when a comment is created the isApproved value is set to false; this prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become the admin you'll need to create an identity.
You can achieve this by using the Netlify Identity Widget.
If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won't see any users in here just yet, so let's use the app to connect the dots. But before you do that, make sure you click Enable Identity.
Before you continue, I just want to point out that you'll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you'll be using some "special" Netlify methods in the app, and starting the server using netlify dev is required to spin up the various processes you'll be using.
Spin up the app using netlify dev
Navigate to http://localhost:8888/admin/
Click the Sign Up button in the header
You will also need to point the Netlify Identity widget at your newly deployed app URL. This is the URL I mentioned you should make a note of earlier; if you've not renamed your app it'll look something like https://ecstatic-lewin-b1bd17.netlify.app/. There will be a prompt in the pop-up window to Set site's URL.
You can now complete the necessary sign up steps.
After sign up you'll get an email asking you to confirm your identity; once that's completed, refresh the Identity page in Netlify and you should see yourself as a user.
It’s now login time, but before you do this find Identity.js in src/components and temporarily un-comment the console.log() on line 14. This will log the Netlify Identity user object to the console.
Restart your local server
Spin up the app again using netlify dev
Click the Login button in the header
If this all works you should be able to see a console log for netlifyIdentity.currentUser: find the id key and copy the value.
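For illustration only (the real netlifyIdentity.currentUser object has more fields, and the id shown here is hypothetical), the value you're after looks like this:

```javascript
// Illustrative shape only; the id below is a made-up placeholder.
const currentUser = {
  id: "1234-abcd",
  email: "you@example.com",
};

// This is the value to copy into GATSBY_ADMIN_ID.
const adminId = currentUser.id;
console.log(adminId);
```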
Set this as the value for GATSBY_ADMIN_ID = in both .env.production and .env.development
You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.
GATSBY_ADMIN_ID = your Netlify Identity user id
…and finally
Restart your local server
Spin up the app again using netlify dev
Now you should be able to login as “Admin”… hooray!
Navigate to http://localhost:8888/admin/ and Login.
It's important to note here that you'll be using localhost:8888 for development now, and NOT localhost:8000, which is more common with Gatsby development.
Before you test this in the deployed environment make sure you go back to Netlify’s Environment variables and add your Netlify Identity user id to the Environment variables!
Navigate to Settings from the Netlify navigation
Click on Build and deploy
Either select Environment or scroll down until you see Environment variables
Click on Edit variables
Proceed to add the following:
GATSBY_ADMIN_ID = your Netlify Identity user id
If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.
Naturally only approved comments will be displayed on any given post and deleted ones are gone forever.
If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.
By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.
Visit Paulie’s Blog at: www.paulie.dev
The post Roll Your Own Comments With Gatsby and FaunaDB appeared first on CSS-Tricks.
source https://css-tricks.com/roll-your-own-comments-with-gatsby-and-faunadb/
from WordPress https://ift.tt/3bQyJZK via IFTTT

How to Build a Web App with GraphQL and React
In this tutorial, we'll learn to build a web application with React and GraphQL. We'll consume an API available from graphql-pokemon and serve it from this link, which allows you to get information about Pokémon.
GraphQL is a query language for APIs and a runtime for fulfilling those queries created by Facebook. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
In this tutorial, we'll only learn the front end of a GraphQL application that makes use of Apollo for fetching data from a ready GraphQL API hosted on the web.
Let's get started with the prerequisites!
Prerequisites
There are a few prerequisites for this tutorial:
recent versions of Node.js and npm installed on your system
knowledge of JavaScript/ES6
familiarity with React
If you don't have Node and npm installed on your development machine, you can simply download the binaries for your system from the official website. You can also use NVM, a POSIX-compliant bash script to manage multiple active Node.js versions.
Installing create-react-app
Let's install the create-react-app tool that allows you to quickly initialize and work with React projects.
Open a new terminal and run the following command:
npm install -g create-react-app
Note: You may need to use sudo before your command in Linux and macOS or use a command prompt with administrator rights if you get EACCESS errors when installing the package globally on your machine. You can also simply fix your npm permissions.
At the time of writing, this installs create-react-app v3.1.1.
Creating a React Project
Now we're ready to create our React project.
Go back to your terminal and run the following command:
create-react-app react-pokemon
Next, navigate into your project's folder and start the local development server:
cd react-pokemon
npm start
Go to http://localhost:3000 in your web browser to see your app up and running.
This is a screenshot of the app at this point:
Installing Apollo Client
Apollo Client is a complete data management solution that's commonly used with React, but can be used with any other library or framework.
Apollo provides intelligent caching that enables it to be a single source of truth for the local and remote data in your application.
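Here is a miniature, illustrative model of the normalized-cache idea behind that claim: entities are stored once under a stable typename-plus-id key, so every query reads from the same source of truth. Names and data are assumptions, not Apollo's real internals:

```javascript
// A toy normalized cache keyed by "Typename:id".
const cache = new Map();

// Writing stores each entity exactly once under its stable key.
function writeEntity(entity) {
  cache.set(`${entity.__typename}:${entity.id}`, entity);
}

// Reading looks an entity up by the same key.
function readEntity(typename, id) {
  return cache.get(`${typename}:${id}`);
}

writeEntity({ __typename: "Pokemon", id: "1", name: "Bulbasaur" });
const cached = readEntity("Pokemon", "1");
console.log(cached.name);
```

Because every part of the UI reads the same entry, an update written once is reflected everywhere; that is what makes the cache a single source of truth.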
You'll need to install the following packages in your React project to work with Apollo:
graphql: the JavaScript reference implementation for GraphQL
apollo-client: a fully-featured caching GraphQL client with integrations for React, Angular, and more
apollo-cache-inmemory: the recommended cache implementation for Apollo Client 2.0
apollo-link-http: the most common Apollo Link, a system of modular components for GraphQL networking
react-apollo: this package allows you to fetch data from your GraphQL server and use it in building complex and reactive UIs using the React framework
graphql-tag: this package provides helpful utilities for parsing GraphQL queries such as gql tag.
Open a new terminal and navigate to your project's folder, then run the following commands:
npm install graphql --save
npm install apollo-client --save
npm install apollo-cache-inmemory --save
npm install apollo-link-http --save
npm install react-apollo --save
npm install graphql-tag --save
Now that we've installed the necessary packages, we need to create an instance of ApolloClient.
Open the src/index.js file and add the following code:
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'https://graphql-pokemon.now.sh/'
})

const client = new ApolloClient({
  cache,
  link
})
We first create an instance of InMemoryCache, then an instance of HttpLink and we pass in our GraphQL API URI. Next, we create an instance of ApolloClient and we provide the cache and link instances.
Connect the Apollo Client to React Components
After creating the instance of ApolloClient, we need to connect it to our React component(s).
We'll use the new Apollo hooks, which allows us to easily bind GraphQL operations to our UI.
We can connect Apollo Client to our React app by simply wrapping the root App component with the ApolloProvider component — which is exported from the @apollo/react-hooks package — and passing the client instance via the client prop.
The ApolloProvider component is similar to React's Context provider. It wraps your React app and places the client in the context, which enables you to access it from anywhere in your app.
Now let's import the ApolloProvider component in our src/index.js file and wrap the App component as follows:
The post How to Build a Web App with GraphQL and React appeared first on SitePoint.
by Ahmed Bouchefra via SitePoint https://ift.tt/325AIV2
"In this tutorial, you will learn how to build a GraphQL server with Nodejs and Express using Apollo Server library 2.0". Reblog with caption 🙃
Nodejs GraphQL API Project Tutorial for Microservice Developers. Full video link: https://youtu.be/DXPxXJ7Qy7g
Introduction to Apollo Client With React for GraphQL

GraphQL has been getting popular recently and is increasingly used in place of REST APIs. In this tutorial, we will use Apollo Client to communicate with GitHub's GraphQL API. We will integrate Apollo Client with React, but you can use it with several other client platforms as well.
This tutorial does not cover how to start a React project, but you can use create-react-app to get started.
Once we have the React app ready to go, the next thing is to install the required modules.
Installing Modules
The following line installs all the required modules.
npm install apollo-client-preset react-apollo graphql-tag graphql --save
Now we can provide our component with a client.
Providing a Client to a Component
You can provide a client anywhere in your React component hierarchy. However, it is always good practice to provide the client to the component that wraps your whole application.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import { ApolloProvider } from 'react-apollo';
import { ApolloClient } from 'apollo-client';
import { HttpLink } from 'apollo-link-http';
import { InMemoryCache } from 'apollo-cache-inmemory';

const token = "YOUR_TOKEN";

const httpLink = {
  uri: 'https://api.github.com/graphql',
  headers: {
    authorization: `Bearer ${token}`
  }
};

const client = new ApolloClient({
  link: new HttpLink(httpLink),
  cache: new InMemoryCache()
});

ReactDOM.render(
  <ApolloProvider client={client}><App/></ApolloProvider>,
  document.getElementById('root')
);
Above you can see that we defined the uri for GitHub and also used a specific token for headers. You should be using your own token generated from GitHub. So don't forget to replace it with YOUR_TOKEN.
For this example, we defined the API token on the client side. However, you should not reveal your API token publicly. So it is always good to keep it on the server abstracted from the client side.
Notice that we have wrapped the <App/> component with ApolloProvider and used the client variable we created for the client prop.
GraphiQL Application
Before diving into the queries, I want to point out that there is a very handy tool called GraphiQL for testing your GraphQL queries. Before proceeding, make sure that you have downloaded it.
Once you open GraphiQL, you need to set the GraphQL Endpoint and HTTP Headers.
GraphQL Endpoint: https://api.github.com/graphql
Header Name: Authorization
Header Value: Bearer YOUR_TOKEN
Of course, you need to replace YOUR_TOKEN with your own token. Do not forget to include the Bearer in front of your token when defining the Header Value.
If you do not want to download an application, you can also use the online GraphQL API Explorer for GitHub.
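If you prefer to verify your token from code instead, the same request shape can be built in Node. This is just a sketch; buildGitHubRequest is an assumed helper name, not part of the tutorial's code:

```javascript
// Build the fetch/request options GitHub's GraphQL endpoint expects.
// Every request is a POST with a JSON body containing the query string.
const buildGitHubRequest = (token, query) => ({
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Note the "Bearer " prefix, exactly as in the GraphiQL header above
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({ query }),
});

// Usage (Node 18+ has a global fetch):
// fetch('https://api.github.com/graphql', buildGitHubRequest(token, '{ viewer { login } }'))
//   .then(res => res.json())
//   .then(console.log);
```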
GraphQL Queries
Unlike a REST API with several endpoints, a GraphQL API has only one endpoint, and you only fetch what is defined by your query.
The documentation of GitHub's GraphQL API gives you more insight.
Also, the best part of the GraphiQL application is that it gives you access to documentation for queries right inside the application. You can see the sidebar on the right named Docs.
Let's start with the simplest query:
query {
  viewer {
    login
  }
}
This query returns you the login information of the viewer. In this case, the viewer is you since you used your own API token.
In this tutorial, I will not give detailed information on queries. You can always refer to the documentation and try queries on GraphQL tools to see if you are getting the correct data.
Let's use the following query for the rest of the tutorial.
query($name: String!) {
  search(query: $name, last: 10, type: REPOSITORY) {
    edges {
      node {
        ... on Repository {
          id
          name
          description
          url
        }
      }
    }
  }
}
This query searches for the last 10 repositories matching the specific input string, which we will define in our application.
It returns the id, name, description, and url for each result.
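For reference, the server answers in the same nested shape as the query; the values below are illustrative, not real repository data:

```json
{
  "data": {
    "search": {
      "edges": [
        {
          "node": {
            "id": "MDEwOlJlcG9zaXRvcnkx",
            "name": "example-repo",
            "description": "An example repository",
            "url": "https://github.com/someuser/example-repo"
          }
        }
      ]
    }
  }
}
```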
Using the GraphQL Query in a React Component
We need to import the two modules below into our React component to be able to define the query within the component and then pass the results to the component as props.
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';
Here we assigned our query to a constant variable, but we haven't defined the name parameter yet.
const repoQuery = gql`
  query($name: String!) {
    search(query: $name, last: 10, type: REPOSITORY) {
      edges {
        node {
          ... on Repository {
            id
            name
            description
            url
          }
        }
      }
    }
  }
`
Now we wrap our component with the graphql HOC (Higher Order Component) in order to define the query parameters, execute the query, and then pass the result as props to our component.
const AppWithData = graphql(
  repoQuery,
  {
    options: { variables: { name: "tuts" } }
  }
)(App)
Below is the final version of our component.
import React, { Component } from 'react';
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

class App extends Component {
  render() {
    return (
      <div>
      </div>
    );
  }
}

const repoQuery = gql`
  query($name: String!) {
    search(query: $name, last: 10, type: REPOSITORY) {
      edges {
        node {
          ... on Repository {
            id
            name
            description
            url
          }
        }
      }
    }
  }
`

const AppWithData = graphql(
  repoQuery,
  {
    options: { variables: { name: "tuts" } }
  }
)(App)

export default AppWithData;
Note that we do not export the actual App component but the wrapped component, which is AppWithData.
Check the Data in the Console
Let's go ahead and add {console.log(this.props)} to the render method of your component.
class App extends Component {
  render() {
    console.log(this.props)
    return (
      <div>
      </div>
    );
  }
}
When you check the console of your browser, you will see there are two object logs.
Inside each object, you will see the data property. This is provided to our component through the graphql HOC.
Notice that the first log has the loading: true property inside data, and the second log has loading: false and a new object named search, which is exactly the data we wanted to get.
Display the Data
Let's write some JSX to display the fetched data.
Since the search object is not initially there, we cannot directly try to render it. Therefore, first we need to check if we fetched the data and the search object is ready to be used.
In order to do that, we will simply use the loading information provided inside the data property.
If loading is true then we simply render the Loading text, otherwise the data itself.
class App extends Component {
  render() {
    return (
      <div>
        {this.props.data.loading === true ? "Loading" :
          this.props.data.search.edges.map(data =>
            <ul key={data.node.id}>
              <li><a href={data.node.url}>{data.node.name}</a></li>
              <li>{data.node.description}</li>
            </ul>
          )}
      </div>
    );
  }
}
I used the ?: ternary operator for basic inline conditional expressions. If loading is true we display Loading, and if it is false, we use the map function to iterate through our data array to display the information inside the <ul> and <li> elements.
This is just a basic example. You can use a regular if-else statement and return different results for your render method.
You can check the Apollo-Client-with-React repository, clone it on your computer, and play around.
P.S. Don't forget to replace the token variable with your own API token for GitHub.
Conclusion
We covered how to get started with Apollo Client for React. We installed the required modules, set up the client, and then provided it to our component at the top of the component hierarchy. We learned how to test GraphQL queries quickly before implementing them in our actual application. Finally, we integrated the query into a React component and displayed the fetched data.
via Envato Tuts+ Code https://ift.tt/2wqXsnX
0 notes
Text
Building Scalable & Efficient GraphQL APIs: Best Practices
1. Introduction GraphQL has emerged as a powerful alternative to RESTful APIs, offering flexibility, efficiency, and a better developer experience. This tutorial guides you through building scalable and efficient GraphQL APIs, focusing on best practices to ensure your implementation is robust and performant. What You Will Learn: How to set up a GraphQL server using Apollo Server Best practices…
0 notes
Text
Show HN: WunderGraph – Aggregate REST and GraphQL APIs, Add AuthN/Z and Caching https://news.ycombinator.com/item?id=24329651
Show HN: WunderGraph – Aggregate REST and GraphQL APIs, Add AuthN/Z and Caching

Hey, I’m Jens, founder of WunderGraph. Over the years of working with REST & GraphQL APIs, I found that some aspects of using it are way too complicated. Here’s a list of problems I believe could be abstracted away:

– Aggregating multiple GraphQL, REST, etc. APIs into a simple to use API (Backend for Frontend) without writing code
– adding Authentication & Authorization to APIs you don’t have full control over
– adding efficient and easy to use Caching to GraphQL APIs without writing code
– adding persisted queries for security and performance reasons without making my application code and deployment process more complex

Companies like Facebook, who are concerned about security and performance, use persisted Queries and don’t expose their GraphQL API directly to the public. While developing they write their Queries using Relay and persist (whitelist) them at compile time. At least that’s my understanding from their blog posts and conference talks.

WunderGraph takes this approach to the next level by turning the flow around. Relay, Apollo, URQL, etc. are very complex pieces of software because of the dynamic nature of GraphQL. With WunderGraph we define all Operations in GraphiQL “on the server” and then generate a very simple client from it. In a nutshell, Queries become simple GET requests with variables as query parameters, Mutations still are POST requests but just with variables as the body.

A more in depth explanation including an example can be found here: https://www.reddit.com/r/graphql/comments/ijui1x/show_wundergraph_public_beta_aggregate_rest/

More info & docs: https://wundergraph.com/

For those who like to watch videos, here’s a general overview: https://youtu.be/RwkThD5pz1E

Here’s a full 26m tutorial with React & Typescript that helps to start from scratch: https://youtu.be/8BQNeeVoFGI

August 31, 2020 at 02:06AM
0 notes
Text
Roll Your Own Comments With Gatsby and FaunaDB
If you haven’t used Gatsby before, have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before, you’re in for a treat. If you’re looking to make your static sites full-blown Jamstack applications, this is the back-end solution for you!
This tutorial will only focus on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with input fields that allow users to comment on your posts and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget, and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there’s a very generous free tier, perfect for those small projects that need a back end. There’s native support for GraphQL queries, and it has some really powerful indexing features!
…and to quote the creators:
No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you
You can see the finished comments app here.
Get Started
To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments
or:
git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git
Then install all the dependencies:
npm install
Also cd into functions/apollo-graphql and install the dependencies for the Netlify function:
npm install
This is a separate package with its own dependencies; you’ll be using it later.
We also need to install the Netlify CLI as you’ll also use this later:
npm install netlify-cli -g
Now let’s add three new files that aren’t part of the repo.
At the root of your project, create a .env, a .env.development and a .env.production file.
Add the following to .env:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
Add the following to .env.development:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =
Add the following to .env.production:
GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =
You’ll come back to these later, but in case you’re wondering:
GATSBY_FAUNA_DB is the FaunaDB secret key for your database
GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you
If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.
FaunaDB
So let’s get cracking, but before we write any operations head over to https://fauna.com/ and sign up!
Database and Collection
Create a new database by clicking NEW DATABASE
Name the database: I’ve called the demo database fauna-gatsby-comments
Create a new Collection by clicking NEW COLLECTION
Name the collection: I’ve called the demo collection demo-blog-comments

Server Key
Now you’ll need to set up a server key. Go to SECURITY
Create a new key by clicking NEW KEY
Select the database you want the key to apply to, fauna-gatsby-comments for example
Set the Role as Admin
Name the server key: I’ve called the demo key demo-blog-server-key

Environment Variables Pt. 1
Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.
You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.
Adding these values to .env is just so you can test your development FaunaDB operations, which you’ll do next.
Let’s start by creating a comment so head back to boop.js:
// boop.js
...
// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},
...
The breakdown of this function is as follows:
q is the instance of faunadb.query
Create is the FaunaDB method to create an entry within a collection
Collection is the area in the database to store the data. It takes the name of the collection as the first argument and a data object as the second.
The second argument is the shape of the data you need to drive the application’s comment system.
For now you’re going to hard-code slug, name and comment but in the final app these values are captured by the input form on the posts page and passed in via args
The breakdown for the shape is as follows:
isApproved is the status of the comment and by default it’s false until we approve it in the Admin page
slug is the path to the post where the comment was written
date is the timestamp of when the comment was written
name is the name the user entered in the comments form
comment is the comment the user entered in the comments form
When you (or a user) create a comment, you’re not really interested in dealing with the response, because as far as the user is concerned all they’ll see is either a success or an error message.
After a user has posted a comment it will go into your Admin queue until you approve it, but if you did want to return something you could surface this in the UI by returning something from the createComment function.
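The comment shape above is small enough to capture in a helper; buildCommentData is a hypothetical name, and in the final app slug, name and comment arrive from the form via args:

```javascript
// Assemble the data payload stored in the FaunaDB collection.
// New comments always start unapproved so they queue for the Admin page.
const buildCommentData = (slug, name, comment) => ({
  isApproved: false,
  slug,
  date: new Date().toString(),
  name,
  comment,
});
```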
Create a comment
If you’ve hard-coded a slug, name and comment, you can now run the following in your CLI:
node boop createComment
If everything worked correctly you should see a log in your terminal of the new comment.
{ "ref": { "@ref": { "id": "263413122555970050", "collection": { "@ref": { "id": "demo-blog-comments", "collection": { "@ref": { "id": "collections" } } } } } }, "ts": 1587469179600000, "data": { "isApproved": false, "slug": "/posts/some-post", "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)", "name": "some name", "comment": "some comment" } } { commentId: '263413122555970050' }
If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.
node boop createComment
Do this a few times so you end up with at least three new comments stored in the database; you’ll use these in a moment.
Delete comment by id
Now that you can create comments you’ll also need to be able to delete a comment.
By adding the commentId of one of the comments you created above you can delete it from the database. The commentId is the id in the ref.@ref object
Again you’re not really concerned with the return value here but if you wanted to surface this in the UI you could do so by returning something from the deleteCommentById function.
// boop.js
...
// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";
  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Delete is the FaunaDB delete method to delete entries from a collection
Ref is the unique FaunaDB ref used to identify the entry
Collection is the area in the database where the data is stored
If you’ve hard coded a commentId you can now run the following in your CLI:
node boop deleteCommentById
If you head back over to COLLECTIONS in FaunaDB you should see that the entry no longer exists in the collection.
Indexes
Next you’re going to create an INDEX in FaunaDB.
An INDEX allows you to query the database with a specific term and define a specific data shape to return.
When working with GraphQL and/or TypeScript this is really powerful, because you can use FaunaDB indexes to return only the data you need and in a predictable shape. This makes typing data responses in GraphQL and/or TypeScript a dream… I’ve worked on a number of applications that just return a massive object of useless values which will inevitably cause bugs in your app. blurg!

Go to INDEXES and click NEW INDEX
Name the index: I’ve called this one get-all-comments
Set the source collection to the name of the collection you setup earlier
As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.
You can do this by adding “values”, but be careful to enter the values exactly as they appear below, because (on the FaunaDB free tier) you can’t amend these after you’ve created them. If there’s a mistake you’ll have to delete the index and start again… bummer!
The values you need to add are as follows:
ref
data.isApproved
data.slug
data.date
data.name
data.comment
After you’ve added all the values you can click SAVE.
Get all comments
// boop.js
...
// GET ALL COMMENTS
getAllComments: async () => {
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-all-comments")))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Paginate paginates the responses
Match returns matched results
Index is the name of the Index you just created
The shape of the returned result here is an array of the same shape you defined in the Index “values”
If you run the following you should see the list of all the comments you created earlier:
node boop getAllComments
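The mapping from index rows to plain objects is worth looking at in isolation. Each row is an array whose order matches the index “values” exactly; flattenComment is an assumed name for the callback used above:

```javascript
// Each result row from the index is an array in the exact order of the
// index values: ref, isApproved, slug, date, name, comment.
const flattenComment = ([ref, isApproved, slug, date, name, comment]) => ({
  commentId: ref.id,
  isApproved,
  slug,
  date,
  name,
  comment,
});

// Example row shaped like a FaunaDB index result:
const row = [
  { id: '263413122555970050' },
  false,
  '/posts/some-post',
  'Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)',
  'some name',
  'some comment',
];
const flattened = flattenComment(row);
```

Getting the order wrong here silently shuffles fields, which is why entering the index values with care matters so much.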
Get comments by slug
You’re going to take a similar approach to the one above, but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that to get comments by slug you’ll need to tell FaunaDB about this specific term, and you can do this by adding data.slug to the Terms field.

Go to INDEX and click NEW INDEX
Name the index, I’ve called this one get-comments-by-slug
Set the source collection to the name of the collection you setup earlier
Add data.slug in the terms field
The values you need to add are as follows:
ref
data.isApproved
data.slug
data.date
data.name
data.comment
After you’ve added all the values you can click SAVE.
// boop.js
...
// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Paginate paginates the responses
Match returns matched results
Index is the name of the Index you just created
The shape of the returned result here is an array of the same shape you defined in the Index “values”. You can create this shape in the same way you did above; be sure to add a value for the terms. Again, enter these with care.
If you run the following you should see the list of all the comments you created earlier but for a specific slug:
node boop getCommentsBySlug
Approve comment by id
When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.
You’ll now need to create a function to do this but you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:
// boop.js
...
// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'
  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  // The updated document comes back as { ref, ts, data },
  // so the new status lives on results.data
  return {
    isApproved: results.data.isApproved,
  };
},
...
The breakdown of this function is as follows:
client is the FaunaDB client instance
query is a method to get data from FaunaDB
q is the instance of faunadb.query
Update is the FaunaDB method to update an entry
Ref is the unique FaunaDB ref used to identify the entry
Collection is the area in the database where the data is stored
If you’ve hard coded a commentId you can now run the following in your CLI:
node boop approveCommentById
If you run getCommentsBySlug again, you should now see that the isApproved status of the entry whose commentId you hard-coded has changed to true.
node boop getCommentsBySlug
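One detail worth knowing: FaunaDB’s Update performs a partial merge of the data object rather than replacing it, which is why only isApproved changes and the other fields survive. The effect on the stored data is roughly this (mergeData is an illustrative stand-in, not part of the FaunaDB driver):

```javascript
// Simulate the shallow merge Update applies to a document's data.
const mergeData = (existing, patch) => ({ ...existing, ...patch });

const stored = {
  isApproved: false,
  slug: '/posts/some-post',
  name: 'some name',
  comment: 'some comment',
};

// Approving only sends { isApproved: true }; everything else is kept.
const updated = mergeData(stored, { isApproved: true });
```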
These are all the operations required to manage the data from the app.
In the repo, if you have a look at apollo-graphql.js, which can be found in functions/apollo-graphql, you’ll see all of the above operations. As mentioned before, the hard-coded values are replaced by args; these are the values passed in from various parts of the app.
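As a sketch of what “replaced by args” looks like, an Apollo Server resolver receives the query variables as its second parameter. The names here are assumptions, and the FaunaDB helper is injected so the shape can be shown without a live database:

```javascript
// Hypothetical resolver map in the style of apollo-graphql.js.
// makeResolvers receives the data helpers so they can be stubbed in tests.
const makeResolvers = ({ createComment }) => ({
  Mutation: {
    // args carries slug, name and comment from the posts page form
    createComment: (root, args) =>
      createComment(args.slug, args.name, args.comment),
  },
});

// Example with a stub standing in for the real FaunaDB call:
const resolvers = makeResolvers({
  createComment: (slug, name, comment) => ({ slug, name, comment }),
});
const result = resolvers.Mutation.createComment(null, {
  slug: '/posts/a-post',
  name: 'a name',
  comment: 'a comment',
});
```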
Netlify
Assuming you’ve completed the Netlify sign up process or already have an account with Netlify you can now push the demo app to your GitHub account.
To do this you’ll need to have initialized git locally, added a remote and pushed the demo repo upstream before proceeding.
You should now be able to link the repo up to Netlify’s Continuous Deployment.
If you click the “New site from Git” button on the Netlify dashboard you can authorize access to your GitHub account and select the gatsby-fauna-comments repo to enable Netlify’s Continuous Deployment. You’ll need to have deployed at least once so that we have a public URL of your app.
The URL will look something like this https://ecstatic-lewin-b1bd17.netlify.app but feel free to rename it and make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2
In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You’ll also need to add the same to Netlify’s Environment variables.
Navigate to Settings from the Netlify navigation
Click on Build and deploy
Either select Environment or scroll down until you see Environment variables
Click on Edit variables
Proceed to add the following:
GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name
While you’re here you’ll also need to amend the Sensitive variable policy: select Deploy without restrictions.
Netlify Identity Widget
I mentioned before that when a comment is created the isApproved value is set to false, this prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become admin you’ll need to create an identity.
You can achieve this by using the Netlify Identity Widget.
If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won’t see any users in here just yet, so let’s use the app to connect the dots, but before you do that make sure you click Enable Identity.
Before you continue, I just want to point out that you’ll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you’ll be using some “special” Netlify methods in the app, and starting the server using netlify dev is required to spin up various processes you’ll be using.
Spin up the app using netlify dev
Navigate to http://localhost:8888/admin/
Click the Sign Up button in the header
You will also need to point the Netlify Identity widget at your newly deployed app URL. This was the URL I mentioned you’ll need to make a note of earlier, if you’ve not renamed your app it’ll look something like this https://ecstatic-lewin-b1bd17.netlify.app/ There will be a prompt in the pop up window to Set site’s URL.
You can now complete the necessary sign up steps.
After sign up you’ll get an email asking you to confirm you identity and once that’s completed refresh the Identity page in Netlify and you should see yourself as a user.
It’s now login time, but before you do this, find Identity.js in src/components and temporarily uncomment the console.log() on line 14. This will log the Netlify Identity user object to the console.
Restart your local server
Spin up the app again using netlify dev
Click the Login button in the header
If this all works you should be able to see a console log for netlifyIdentity.currentUser: find the id key and copy the value.
Set this as the value for GATSBY_ADMIN_ID = in both .env.production and .env.development
You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.
GATSBY_ADMIN_ID = your Netlify Identity user id
…and finally
Restart your local server
Spin up the app again using netlify dev
Now you should be able to login as “Admin”… hooray!
Navigate to http://localhost:8888/admin/ and Login.
It’s important to note here that you’ll be using localhost:8888 for development now and NOT localhost:8000, which is more common with Gatsby development.
Before you test this in the deployed environment make sure you go back to Netlify’s Environment variables and add your Netlify Identity user id to the Environment variables!
Navigate to Settings from the Netlify navigation
Click on Build and deploy
Either select Environment or scroll down until you see Environment variables
Click on Edit variables
Proceed to add the following:
GATSBY_ADMIN_ID = your Netlify Identity user id
If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.
Naturally only approved comments will be displayed on any given post and deleted ones are gone forever.
If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.
By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.
Visit Paulie’s Blog at: www.paulie.dev
The post Roll Your Own Comments With Gatsby and FaunaDB appeared first on CSS-Tricks.
Roll Your Own Comments With Gatsby and FaunaDB published first on https://deskbysnafu.tumblr.com/
0 notes
Link
Show HN: WunderGraph – Aggregate REST and GraphQL APIs, Add AuthN/Z and Caching Hey, I'm Jens, founder of WunderGraph. Over the years of working with REST & GraphQL APIs, I found that some aspects of using it are way too complicated. Here's a list of problems I believe could be abstracted away: - Aggregating multiple GraphQL, REST, etc. APIs into a simple to use API (Backend for Frontend) without writing code - adding Authentication & Authorization to APIs you don't have full control over - adding efficient and easy to use Caching to GraphQL APIs without writing code - adding persisted queries for security and performance reasons without making my application code and deployment process more complex Companies like Facebook, who are concerned about security and performance, use persisted Queries and don't expose their GraphQL API directly to the public. While developing they write their Queries using Relay and persist (whitelist) them at compile time. At least that's my understanding from their blog posts and conference talks. WunderGraph takes this approach to the next level by turning the flow around. Relay, Apollo, URQL, etc. are very complex pieces of software because of the dynamic nature of GraphQL. With WunderGraph we define all Operations in GraphiQL "on the server" and then generate a very simple client from it. In a nutshell, Queries become simple GET requests with variables as query parameters, Mutations still are POST requests but just with variables as the body. A more in depth explanation including an example can be found here: https://www.reddit.com/r/graphql/comments/ijui1x/show_wundergraph_public_beta_aggregate_rest/ More info & docs: https://wundergraph.com/ For those who like to watch videos, here's a general overview: https://youtu.be/RwkThD5pz1E Here's a full 26m tutorial with React & Typescript that helps to start from scratch: https://youtu.be/8BQNeeVoFGI August 31, 2020 at 02:06AM
Photo

Getting Started with GraphQL and React Native
In 2012, Facebook engineer Nick Schrock started work on a small prototype to facilitate moving away from an old, unsupported partner API that powered the current Facebook News Feed. At the time, this was called “SuperGraph”. Fast forward to today, and SuperGraph has helped shape the open-source query language GraphQL, which has become something of a buzzword in recent times.
Facebook describes GraphQL as a “query language for APIs and a runtime for fulfilling those queries with your existing data”. Put simply, GraphQL is an alternative to REST that has been steadily gaining popularity since its release. Whereas with REST a developer would usually collate data from a series of endpoint requests, GraphQL allows the developer to send a single query to the server that describes the exact data requirement.
Prerequisites
For this tutorial, you’ll need a basic knowledge of React Native and some familiarity with the Expo environment. You’ll also need the Expo client installed on your mobile device or a compatible simulator installed on your computer. Instructions on how to do this can be found here.
Project Overview
In this tutorial, we’re going to demonstrate the power of GraphQL in a React Native setting by creating a simple coffee bean comparison app. So that you can focus on all of the great things GraphQL has to offer, I’ve put together the base template for the application using Expo.
To get started, you can clone this repo and navigate to the “getting-started” branch, which includes all of our basic views to start adding our GraphQL data to, as well as all of our initial dependencies, which at this stage are:
{
  "expo": "^32.0.0",
  "react": "16.5.0",
  "react-native": "https://github.com/expo/react-native/archive/sdk-32.0.0.tar.gz",
  "react-navigation": "^3.6.1"
}
To clone this branch, you’ll need to open up terminal and run this command:
git clone https://github.com/jamiemaison/graphql-coffee-comparison.git
Then, to switch to the getting-started branch, move into the newly cloned repo with cd graphql-coffee-comparison and run git checkout getting-started.
The next stage is to install our dependencies. To do this, make sure you’re on Node v11.10.1 and run npm install in the root directory of the project. This will add all of the dependencies listed above to your node_modules folder.
To start adding GraphQL to our React Native app, we’re going to need to install a few more dependencies that help us perform a few simple GraphQL functions. Strictly speaking, you don’t need all of these dependencies just to request data, but, as is common in modern JavaScript development, they give you a much better chance of structuring clean, easy-to-read code. The dependencies you’ll need can be installed by running npm install --save apollo-boost react-apollo graphql-tag graphql.
Here’s an overview of what these dependencies are:
apollo-boost: a zero-configuration way of getting started with GraphQL in React/React Native
react-apollo: this provides an integration between GraphQL and the Apollo client
graphql-tag: a template literal tag that parses GraphQL queries
graphql: the JavaScript reference implementation for GraphQL
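With those installed, the usual next step is to create a client and make it available to the component tree. The sketch below rests on assumptions: App.js and AppNavigator are guesses at the template's entry point and root navigator, while the URI is the playground server address used later in this tutorial.

```javascript
// App.js - a sketch only. `AppNavigator` stands in for whatever root
// component the cloned template actually exports; the uri is the
// playground server introduced later in this tutorial.
import React from 'react';
import ApolloClient from 'apollo-boost';
import { ApolloProvider } from 'react-apollo';

// apollo-boost gives us a preconfigured client: just point it at a server.
const client = new ApolloClient({
  uri: 'https://yq42lj36m9.sse.codesandbox.io',
});

// ApolloProvider makes the client available to every Query component
// rendered anywhere below it in the tree.
export default function App() {
  return (
    <ApolloProvider client={client}>
      <AppNavigator />
    </ApolloProvider>
  );
}
```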
Once all of the necessary dependencies have finished installing, run npm start. You should now see your familiar Expo window, and if you launch the app (either via a simulator or on a device) then you should see a screen similar to this:
In basic terms, this application has two screens that are managed by react-navigation, Home.js and CoffeePage.js. The Home screen contains a simple FlatList that renders all of the coffee beans supplied to its data field. When clicked on, the user is navigated to the CoffeePage for that item, which displays more information about the product. It’s our job to now populate these views with interesting data from GraphQL.
Apollo Server Playground
There are two main elements to any successful GraphQL transaction: the server holding the data, and the front-end query making the request. For the purposes of this tutorial, we aren’t going to delve into the wonderful world of server-side code, so I’ve created a ready-to-go server for us. All you need to do is navigate to yq42lj36m9.sse.codesandbox.io in your favorite browser and leave it running throughout the course of development. For those interested, the server itself runs on apollo-server and contains just enough code to hold the data we need and serve it upon receiving an appropriate query. For further reading, you can head over to apollographql.com to read more about apollo-server.
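To demystify what "just enough code" means on an apollo-server, here is a hedged sketch of roughly the shape such a server defines. Only the coffee/beans/blend field names come from this tutorial; the rest is an assumption, not the playground server's actual implementation.

```javascript
// Hedged sketch of an apollo-server style schema. Only coffee/beans/blend
// come from this tutorial; everything else is assumed. With the
// apollo-server package installed, you would normally wrap typeDefs in its
// gql template tag and start the server with:
//   new ApolloServer({ typeDefs, resolvers }).listen();
const typeDefs = `
  type Bean {
    blend: String
  }

  type Coffee {
    beans: [Bean]
  }

  type Query {
    coffee(companyId: ID): Coffee
  }
`;

const resolvers = {
  Query: {
    // Resolvers return plain objects matching the schema's shape; the
    // server trims them down to whatever fields the query selected.
    coffee: () => ({
      beans: [{ blend: 'rich' }, { blend: 'smooth' }],
    }),
  },
};
```

The split is the whole story: typeDefs declare what can be asked for, resolvers say how each field is fetched, and apollo-server wires the two together.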
GraphQL Query Basics
Before we get into writing the actual code that’s going to request the data we need for our coffee bean comparison app, we should understand just how GraphQL queries work. If you already know how queries work or just want to get started with coding, you can skip ahead to the next section.
Note: these queries won’t work with our codesandbox server, but feel free to create your own at codesandbox.io if you’d like to test out the queries.
At its simplest level, we can use a flat structure for our queries when we know the shape of the data we’re requesting:
QUERY:

{
  coffee {
    blend
  }
}

RESPONSE:

{
  "coffee": {
    "blend": "rich"
  }
}
On the left, we see the GraphQL query requesting the blend field from coffee. This works well when we know exactly what our data structure is, but what about when things are less transparent? In this example, blend returns us a string, but queries can be used to request objects as well:
QUERY:

{
  coffee {
    beans {
      blend
    }
  }
}

RESPONSE:

{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}
Here you can see we are simply requesting the beans object, with only the blend field being returned from it. Each object in the beans array may well contain data other than blend, but GraphQL queries help us request only the data we need, cutting out any extra information that isn’t necessary for our application.
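To make that trimming behavior concrete, here is a toy JavaScript sketch, not a real GraphQL executor, of how a selection prunes a full data set down to just the requested fields:

```javascript
// Toy illustration only (NOT a real GraphQL executor): given the full data
// and a "selection" describing which fields were requested, return only
// those fields, the way a GraphQL server trims its response to the query.
function select(data, selection) {
  const result = {};
  for (const [field, sub] of Object.entries(selection)) {
    const value = data[field];
    if (sub === true) {
      result[field] = value; // leaf field: copy the value as-is
    } else if (Array.isArray(value)) {
      result[field] = value.map((item) => select(item, sub)); // list of objects
    } else {
      result[field] = select(value, sub); // nested object
    }
  }
  return result;
}

// Invented full data set: each bean carries an `origin` the query never asked for.
const fullData = {
  coffee: {
    beans: [
      { blend: 'rich', origin: 'Brazil' },
      { blend: 'smooth', origin: 'Kenya' },
    ],
  },
};

// The equivalent of the query `{ coffee { beans { blend } } }`:
const trimmed = select(fullData, { coffee: { beans: { blend: true } } });
// trimmed → { coffee: { beans: [{ blend: 'rich' }, { blend: 'smooth' }] } }
```

The origin fields never reach the client, which is exactly the over-fetching problem GraphQL is designed to avoid.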
So what about when we need to be more specific than this? GraphQL provides the capability for many things, but something that allows for extremely powerful data requests is the ability to pass arguments in your query. Take the following example:
QUERY:

{
  coffee(companyId: "2") {
    beans {
      blend
    }
  }
}

RESPONSE:

{
  "coffee": {
    "beans": [
      { "blend": "rich" },
      { "blend": "smooth" }
    ]
  }
}
What we see is that we can pass an argument, in this case the companyId, which ensures we are only returned beans from one particular company. With REST, you can pass only a single set of arguments via query params and URL segments, but in GraphQL every single field in a query can be given its own set of arguments. This lets a single dynamic GraphQL request replace what would otherwise be multiple API fetches.
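On the server side, an argument like companyId typically lands in a resolver that filters the data before it is returned. Here is a simplified sketch with an invented in-memory data set; the tutorial's real server has its own schema and storage.

```javascript
// Invented in-memory data set for illustration; the tutorial's playground
// server has its own (unknown) data and resolver implementation.
const allBeans = [
  { companyId: '1', blend: 'fruity' },
  { companyId: '2', blend: 'rich' },
  { companyId: '2', blend: 'smooth' },
];

const resolvers = {
  // A resolver receives the query's arguments and decides what to return,
  // so the client never has to over-fetch and filter locally.
  coffee: ({ companyId }) => ({
    beans: allBeans
      .filter((bean) => bean.companyId === companyId)
      .map(({ blend }) => ({ blend })), // expose only the selected field
  }),
};

const result = resolvers.coffee({ companyId: '2' });
// result.beans → [{ blend: 'rich' }, { blend: 'smooth' }]
```

Because the filtering happens per-field on the server, adding a second argument (say, a roast level) would not require a new endpoint, just a new parameter on the same field.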
The post Getting Started with GraphQL and React Native appeared first on SitePoint.
by Jamie Maison via SitePoint https://ift.tt/2oseqPL