#oauth2 spring boot rest api
codeonedigest · 2 years
YouTube Short | What is Difference Between OAuth2 and SAML | Quick Guide to SAML Vs OAuth2
Hi, a short #video on #oauth2 Vs #SAML #authentication & #authorization is published on #codeonedigest #youtube channel. Learn OAuth2 and SAML in 1 minute. #saml #oauth #oauth2 #samlvsoauth2 #samlvsoauth
What is SAML? SAML is an acronym for Security Assertion Markup Language. Its primary role in online security is that it enables you to access multiple web applications using single sign-on (SSO). What is OAuth2? OAuth2 is an open-standard authorization protocol or framework that provides applications the ability for “secure designated access.” OAuth2 doesn’t share…
shalcool15 · 8 months
How to Implement Java Microservices Architecture
Implementing a microservices architecture in Java is a strategic decision that can have significant benefits for your application, such as improved scalability, flexibility, and maintainability. Here's a guide to help you embark on this journey.
1. Understand the Basics
Before diving into the implementation, it's crucial to understand what microservices are. Microservices architecture is a method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. These modules, or microservices, are independently deployable and scalable.
2. Design Your Microservices
Identify Business Capabilities
Break down your application based on business functionalities.
Each microservice should represent a single business capability.
Define Service Boundaries
Ensure that each microservice is loosely coupled and highly cohesive.
Avoid too many dependencies between services.
3. Choose the Right Tools and Technologies
Java Frameworks
Spring Boot: Popular for building stand-alone, production-grade Spring-based applications.
Dropwizard: Useful for rapid development of RESTful web services.
Micronaut: Great for building modular, easily testable microservices.
Containerization
Docker: Essential for creating, deploying, and running microservices in isolated environments.
Kubernetes: A powerful system for automating deployment, scaling, and management of containerized applications.
Database
Use a database per service pattern. Each microservice should have its private database to ensure loose coupling.
4. Develop Your Microservices
Implement RESTful Services
Use Spring Boot to create RESTful services due to its simplicity and power.
Ensure API versioning to manage changes without breaking clients.
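For illustration, here is a minimal sketch of such a version-prefixed endpoint in Spring Boot. The controller and DTO names are hypothetical; a real service would delegate to its service layer instead of returning a stub.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// The version lives in the path, so /api/v2/orders can evolve later
// without breaking clients that still call /api/v1/orders.
@RestController
@RequestMapping("/api/v1/orders")
public class OrderControllerV1 {

    @GetMapping("/{id}")
    public OrderDto getOrder(@PathVariable long id) {
        // Stubbed response to keep the sketch self-contained.
        return new OrderDto(id, "NEW");
    }

    // Simple transport record for the response body (requires Java 16+).
    public record OrderDto(long id, String status) {}
}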
Asynchronous Communication
Implement asynchronous communication, especially for long-running or resource-intensive tasks.
Use message queues like RabbitMQ or Kafka for reliable, scalable, and asynchronous communication between microservices.
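As a sketch of this pattern (assuming the Spring for Apache Kafka dependency is on the classpath and using a hypothetical order-events topic), the producing service publishes an event and returns immediately, while another service consumes it in the background:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Fire-and-forget: the caller is not blocked while downstream
    // services (billing, shipping, ...) process the event.
    public void orderPlaced(String orderId) {
        kafkaTemplate.send("order-events", orderId, "ORDER_PLACED");
    }
}

@Service
class ShippingListener {

    // Consumed asynchronously, possibly by a different microservice.
    @KafkaListener(topics = "order-events", groupId = "shipping")
    public void onOrderEvent(String event) {
        // Start the long-running shipping workflow here.
    }
}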
Build and Deployment
Automate build and deployment processes using CI/CD tools like Jenkins or GitLab CI.
Implement blue-green deployment or canary releases to reduce downtime and risk.
5. Service Discovery and Configuration
Service Discovery
Use tools like Netflix Eureka for managing and discovering microservices in a distributed system.
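A sketch of the client side, assuming the Spring Cloud Netflix Eureka client starter is on the classpath and a Eureka server runs at a hypothetical http://localhost:8761/eureka:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Registers this service with Eureka on startup; other services can
// then look it up by its logical name (spring.application.name)
// instead of a hard-coded host and port.
@SpringBootApplication
@EnableDiscoveryClient
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}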
Configuration Management
Centralize configuration management using tools like Spring Cloud Config.
Store configuration in a version-controlled repository for auditability and rollback purposes.
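On the consuming side, a small sketch might look like the following. It assumes a Config Server reachable at a hypothetical http://localhost:8888, imported in the client via spring.config.import=optional:configserver:http://localhost:8888 (newer Spring Cloud versions; older ones used bootstrap.properties); the prefix and fields are illustrative.

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

// Values are served from the Config Server's version-controlled
// repository (e.g. order-service.yml), not baked into the image.
@Configuration
@ConfigurationProperties(prefix = "order")
public class OrderProperties {

    private int maxItemsPerOrder = 50; // sensible local default
    private String paymentServiceUrl;  // hypothetical setting

    public int getMaxItemsPerOrder() { return maxItemsPerOrder; }
    public void setMaxItemsPerOrder(int value) { this.maxItemsPerOrder = value; }
    public String getPaymentServiceUrl() { return paymentServiceUrl; }
    public void setPaymentServiceUrl(String value) { this.paymentServiceUrl = value; }
}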
6. Monitoring and Logging
Implement centralized logging using ELK Stack (Elasticsearch, Logstash, Kibana) for easier debugging and monitoring.
Use Prometheus and Grafana for monitoring metrics and setting up alerts.
7. Security
Implement API gateways like Zuul or Spring Cloud Gateway for security, monitoring, and resilience.
Use OAuth2 and JWT for secure, stateless authentication and authorization.
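As a hedged sketch of the resource-server side (assuming Spring Security 6 style configuration and a JWT issuer set via the spring.security.oauth2.resourceserver.jwt.issuer-uri property), a stateless setup might look like this:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
        http
            // Every request must carry a valid bearer token.
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Validate incoming JWTs against the configured issuer.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()))
            // Stateless: no server-side session is created.
            .sessionManagement(session ->
                session.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
        return http.build();
    }
}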
8. Testing
Write unit and integration tests for each microservice.
Implement contract testing to ensure APIs meet the contract expected by clients.
9. Documentation
Document your APIs using tools like Swagger or OpenAPI. This helps in maintaining clarity about service endpoints and their purposes.
Conclusion
Implementing a Java microservices architecture can significantly enhance your application's scalability, flexibility, and maintainability. However, the complexity and technical expertise required can be considerable. Hiring Java developers or availing yourself of Java development services can be pivotal in navigating this transition successfully. They bring the necessary expertise in Java frameworks and microservices best practices to ensure your project's success. Ready to transform your application architecture? Reach out to professional Java development services from top Java companies today and take the first step towards a robust, scalable microservices architecture.
rssgreys · 2 years
Mockwebserver enqueue
Alright, in this article we will talk about the Spring Boot WebClient. If you are using Spring WebFlux, you can choose to use WebClient to call external REST services. It was introduced in Spring 5 as part of the web reactive framework that helps to build reactive and non-blocking web applications. The main advantage of using the WebClient is that it is reactive, as it uses WebFlux, and it is also non-blocking by default, with the response always returned as either Mono or Flux.

WebClient is simply an interface which offers methods to make calls to REST services; there are methods like GET, POST, PUT, PATCH, DELETE and OPTIONS. You can leverage any of these methods to make calls to the external service asynchronously.

Add the Spring WebFlux dependency to your POM.XML. Note – I will be using the Maven build tool to show the demo; if you are using any other build tool, please find the dependency on the internet, they are easily available. So, make sure you are using Spring WebFlux already if you plan to use WebClient to make API calls.

We create an instance of WebClient and initialize it by using the WebClient builder; it will create a WebClient object and will also allow you to customise your call with the several methods it offers. Please refer to all the methods while you use them. You can choose HTTP methods depending upon the nature of the HTTP call and make requests accordingly. In our scenario, we are leveraging the GET method and returning a response from the API call: the method getPost makes the request with the help of the WebClient instance and retrieves the response from the external API call. If you have to make a POST call you can leverage the post method, and in that case you will have to pass a body to the method. There are a lot of methods to play around with; once you start to use it you will have various methods to put to use for sure.

To handle errors in WebClient you can extend the retrieve method. The retrieve method in WebClient throws WebClientResponseException when a 4xx or 5xx series response is received. You can further customise this using the onStatus() method.

To test such a client without hitting the real backend, MockWebServer (okhttp) is a good fit. You enqueue a canned response on the mockWebServer – for example mockWebServer.enqueue(new MockResponse().setResponseCode(200)) – and the next request made against it receives that response. Generally, you configure your HTTP client to call the mock server's endpoint (obtained via server.url("/")); if you want to set the url, you'll need to pass it to the start() method.

I had previously written an article, Integration Testing with MockWebServer, that explained a way to write integration tests for a web application using WebClient and MockWebServer (okhttp). The intention was to write an integration test that did not touch the inside of the code, but only the edges. The tests overrode the WebClient, and so did not cover the configuration of the WebClient (which could be incorrectly configured). Spring Boot 2.2.6 introduced the @DynamicPropertySource, which allows you to insert the MockWebServer url into the properties at runtime. This lets you test your WebClient setup as well. Better still, it enables you to test the token caching functionality that comes built into WebClient, but we won't get to that today.

To use MockWebServer, you need two dependencies, shown below as Gradle imports: testImplementation 'com.squareup.okhttp3:okhttp:4.0.1' and testImplementation 'com.squareup.okhttp3:mockwebserver:4.0.1'.

We will be using Spring Boot version 2.6.3 with Gradle wrapper version 7.3.3 and Java 11. Our project will include Spring Security with OAuth2 Client Credentials, Actuator, Spring Web, JUnit 5 and WebFlux, and some other common dependencies. It is a simple pass-through API that hits a backend and returns the response from it. If you want, you can skip to my GitHub repo with the Gradle file and example code.

MockWebServer will generate a url, which we will insert into our application properties using @DynamicPropertySource. I am overriding the "base-url" property with the value of the MockWebServer url and its random port; the annotated method takes in the registry of properties, and lets you add or override properties. We are also choosing to not require the MockMvc to pass a token to our application, because that concern is better tested after we deploy our application. The test class covering 200 responses for GET and POST and a 500 response is annotated with @SpringBootTest and @AutoConfigureMockMvc(addFilters = false), and pulls in MockWebServer, MockMvc, JUnit 5 and Jackson's ObjectMapper.
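As a minimal, self-contained sketch of the pattern (the class and test names are mine, and the WebClient here is built directly against the mock server's url rather than injected through properties):

import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.web.reactive.function.client.WebClient;

import java.io.IOException;

import static org.junit.jupiter.api.Assertions.assertEquals;

class WebClientSketchTest {

    static MockWebServer mockWebServer;

    @BeforeAll
    static void setUp() throws IOException {
        mockWebServer = new MockWebServer();
        mockWebServer.start(); // picks a random free port
    }

    @AfterAll
    static void tearDown() throws IOException {
        mockWebServer.shutdown();
    }

    @Test
    void returnsBodyOn200() {
        // Queue the canned response that the next request will receive.
        mockWebServer.enqueue(new MockResponse()
                .setResponseCode(200)
                .setBody("pong"));

        WebClient webClient = WebClient.builder()
                .baseUrl(mockWebServer.url("/").toString())
                .build();

        String body = webClient.get()
                .uri("/api/ping")
                .retrieve()
                .bodyToMono(String.class)
                .block(); // blocking is acceptable in a test

        assertEquals("pong", body);
    }
}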
vatt-world · 4 years
interview nike
Amazon Simple Notification Service: SNS vs SQS
ELK stack
testing bugs in Java
debugging in Java
building a REST API in Node.js with AWS Lambda, API Gateway
build an API Gateway REST API with Lambda integration
create an index - Amazon DynamoDB
managing indexes - Amazon DynamoDB
store files in DynamoDB
DynamoDB datatypes ...what did u use
AWS DynamoDB: create table (AWS, Node.js)
AWS Lambda performance
running APIs written in Java on AWS Lambda
AWS Lambda return
AWS Lambda performance optimization
AWS Lambda and Java Spring Boot
Terraform script for AWS Lambda
Spring bean scopes
Spring Boot and OAuth2
ArrayList vs LinkedList
access modifiers in Java
securing a REST API with Spring Security
time complexity: O(n) vs O(1), best big O value
practical examples of big O notation
break a monolith into microservices
increase team productivity using Spring Boot
myfreecourses · 5 years
Angular & Spring 5: Creando Web App Full Stack (Angular 8+)
Frontend development with Angular 8 and a Spring 5 backend: Spring Boot 2, REST API, JPA, Spring Security OAuth2, JWT, sockets
What you’ll learn
Develop full-stack web applications with Angular (frontend) and Spring Framework 5 + JPA (backend)
Develop a complete CRUD application using Angular + Spring + JPA + RESTful services
Manage the components, directives, routes, pipes and services of an…
itbeatsbookmarks · 4 years
(Via: Hacker News)
Today’s developers are expected to develop resilient and scalable distributed systems. Systems that are easy to patch in the face of security concerns and easy to upgrade incrementally with low risk. Systems that benefit from software reuse and the innovation of the open source model. Achieving all of this for different languages, using a variety of application frameworks with embedded libraries, is not possible.
Recently I’ve blogged about “Multi-Runtime Microservices Architecture” where I have explored the needs of distributed systems such as lifecycle management, advanced networking, resource binding, state abstraction and how these abstractions have been changing over the years. I also spoke about “The Evolution of Distributed Systems on Kubernetes” covering how Kubernetes Operators and the sidecar model are acting as the primary innovation mechanisms for delivering the same distributed system primitives.
On both occasions, the main takeaway is the prediction that the progression of software application architectures on Kubernetes moves towards the sidecar model managed by operators. Sidecars and operators could become a mainstream software distribution and consumption model and in some cases even replace software libraries and frameworks as we are used to.
The sidecar model allows the composition of applications written in different languages to deliver joint value, faster and without the runtime coupling. Let’s see a few concrete examples of sidecars and operators, and then we will explore how this new software composition paradigm could impact us.
Out-of-Process Smarts on the Rise
In Kubernetes, a sidecar is one of the core design patterns achieved easily by organizing multiple containers in a single Pod. The Pod construct ensures that the containers are always placed on the same node and can cooperate by interacting over networking, file system or other IPC methods. And operators allow the automation, management and integration of the sidecars with the rest of the platform. The sidecars represent a language-agnostic, scalable data plane offering distributed primitives to custom applications. And the operators represent their centralized management and control plane.
Let’s look at a few popular manifestations of the sidecar model.
Envoy
Service Meshes such as Istio, Consul, and others are using transparent service proxies such as Envoy for delivering enhanced networking capabilities for distributed systems. Envoy can improve security, it enables advanced traffic management, improves resilience, adds deep monitoring and tracing features. Not only that, it understands more and more Layer 7 protocols such as Redis, MongoDB, MySQL and most recently Kafka. It also added response caching capabilities and even WebAssembly support that will enable all kinds of custom plugins. Envoy is an example of how a transparent service proxy adds advanced networking capabilities to a distributed system without including them into the runtime of the distributed application components.
Skupper
In addition to the typical service mesh, there are also projects, such as Skupper, that ship application networking capabilities through an external agent. Skupper solves multicluster Kubernetes communication challenges through a Layer 7 virtual network and offers advanced routing and connectivity capabilities. But rather than embedding Skupper into the business service runtime, it runs an instance per Kubernetes namespace which acts as a shared sidecar.
Cloudstate
Cloudstate is another example of the sidecar model, but this time for providing stateful abstractions for the serverless development model. It offers stateful primitives over gRPC for EventSourcing, CQRS, Pub/Sub, Key/Value stores and other use cases. Again, it is an example of sidecars and operators in action, but this time for the serverless programming model.
Dapr
Dapr is a relatively young project started by Microsoft, and it is also using the sidecar model for providing developer-focused distributed system primitives. Dapr offers abstractions for state management, service invocation and fault handling, resource bindings, pub/sub, distributed tracing and others. Even though there is some overlap in the capabilities provided by Dapr and Service Mesh, both are very different in nature. Envoy with Istio is injected and runs transparently from the service and represents an operational tool. Dapr, on the other hand, has to be called explicitly from the application runtime over HTTP or gRPC and it is an explicit sidecar targeted for developers. It is a library for distributed primitives that is distributed and consumed as a sidecar, a model that may become very attractive for developers consuming distributed capabilities.
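To make the “explicit sidecar” idea concrete, here is a minimal sketch of saving state by calling a Dapr sidecar over plain HTTP from Java, with no Dapr library in the application runtime. It assumes the sidecar listens on Dapr's default HTTP port 3500 and that a state store component named statestore is configured (both are assumptions, not details from this post):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DaprStateSketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Dapr's state API takes a JSON array of key/value entries.
        String body = "[{\"key\":\"order-42\",\"value\":{\"status\":\"NEW\"}}]";

        HttpRequest save = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/state/statestore"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The sidecar, not the application, knows which store sits behind it.
        HttpResponse<Void> response =
                client.send(save, HttpResponse.BodyHandlers.discarding());
        System.out.println("Sidecar replied: " + response.statusCode());
    }
}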
Camel K
Apache Camel is a mature integration library that rediscovers itself on Kubernetes. Its subproject Camel K uses the operator model heavily to improve the developer experience and integrate deeply with the Kubernetes platform. While Camel K does not rely on a sidecar, through its CLI and operator it is able to reuse the same application container and execute any local code modification in a remote Kubernetes cluster in less than a second. This is another example of developer-targeted software consumption through the operator model.
More to Come
And these are only some of the pioneer projects exploring various approaches through sidecars and operators. There is more work being done to reduce the networking overhead introduced by container-based distributed architectures, such as the Data Plane Development Kit (DPDK), a userspace application that bypasses the layers of the Linux kernel networking stack and accesses the network hardware directly. There is work in the Kubernetes project to create sidecar containers with more granular lifecycle guarantees. There are new Java projects based on GraalVM implementation such as Quarkus that reduce the resource consumption and application startup time, which makes more workloads attractive for sidecars. All of these innovations will make the sidecar model more attractive and enable the creation of even more such projects.
Sidecars providing distributed systems primitives
I’d not be surprised to see projects coming up around more specific use cases such as stateful orchestration of long-running processes such as Business Process Model and Notation (BPMN) engines in sidecars. Job schedulers in sidecars. Stateless integration engines i.e. Enterprise Integration Patterns implementations in sidecars. Data abstractions and data federation engines in sidecars. OAuth2/OpenID proxy in sidecars. Scalable database connection pools for serverless workloads in sidecars. Application networks as sidecars, etc. But why would software vendors and developers switch to this model? Let’s see a few of the benefits it provides.
Runtimes with Control Planes over Libraries
If you are a software vendor today, probably you have already considered offering your software to potential users as an API or a SaaS-based solution. This is the fastest software consumption model and a no-brainer to offer, when possible. Depending on the nature of the software you may be also distributing your software as a library or a runtime framework. Maybe it is time to consider if it can be offered as a container with an operator too. This mechanism of distributing software and the resulting architecture has some very unique benefits that the library mechanism cannot offer.
Supporting Polyglot Consumers
By offering libraries to be consumable through open protocols and standards, you open them up for all programming languages. A library that runs as a sidecar and consumable over HTTP, using a text format such as JSON does not require any specific client runtime library. Even when gRPC and Protobuf are used for low-latency and high-performance interactions, it is still easier to generate such clients than including third party custom libraries in the application runtime and implement certain interfaces.
Application Architecture Agnostic
The explicit sidecar architecture (as opposed to the transparent one) is a way of software capability consumption as a separate runtime behind a developer-focused API. It is an orthogonal feature that can be added to any application whether that is monolithic, microservices, functions-based, actor-based or anything in between. It can sit next to a monolith in a less dynamic environment, or next to every microservice in a dynamic cloud-based environment. It is trivial to create sidecars on Kubernetes, and doable on many other software orchestration platforms too.
Tolerant to Release Impedance Mismatch
Business logic is always custom and developed in house. Distributed system primitives are well-known commodity features, and consumed off-the-shelf as either platform features or runtime libraries. You might be consuming software for state abstractions, messaging clients, networking resiliency and monitoring libraries, etc. from third-party open source projects or companies. And these third party entities have their release cycles, critical fixes, CVE patches that impact your software release cycles too. When third party libraries are consumed as a separate runtime (sidecar), the upgrade process is simpler as it is behind an API and it is not coupled with your application runtime. The release impedance mismatch between your team and the consumed 3rd party libraries vendors becomes easier to manage.
Control Plane Included Mentality
When a feature is consumed as a library, it is included in your application runtime and it becomes your responsibility to understand how it works, how to configure, monitor, tune and upgrade. That is because the language runtimes (such as the JVM) and the runtime frameworks (such as Spring Boot or application servers) dictate how a third-party library can be included, configured, monitored and upgraded. When a software capability is consumed as a separate runtime (such as a sidecar or standalone container) it comes with its own control plane in the form of a Kubernetes operator.
That has a lot of benefits as the control plane understands the software it manages (the operand) and comes with all the necessary management intelligence that otherwise would be distributed as documentation and best practices. What’s more, operators also integrate deeply with Kubernetes and offer a unique blend of platform integration and operand management intelligence out-of-the-box. Operators are created by the same developers who are creating the operands; they understand the internals of the containerized features and know how to operate them best. Operators are executable SREs in containers, and the number of operators and their capabilities are increasing steadily, with more operators and marketplaces coming up.
Software Distribution and Consumption in the Future
Software Distributed as Sidecars with Control Planes
Let’s say you are a software provider of a Java framework. You may distribute it as an archive or a Maven artifact. Maybe you have gone a step further and you distribute a container image. In either case, in today’s cloud-native world, that is not good enough. The users still have to know how to patch and upgrade a running application with zero downtime. They have to know what to backup and restore its state. They have to know how to configure their monitoring and alerting thresholds. They have to know how to detect and recover from complex failures. They have to know how to tune an application based on the current load profile.
In all of these and similar scenarios, intelligent control planes in the form of Kubernetes operators are the answer. An operator encapsulates platform and domain knowledge of an application in a declaratively configured component to manage the workload.
Sidecars and operators could become a mainstream software distribution and consumption model and in some cases even replace software libraries and frameworks as we are used to.
Let’s assume that you are providing a software library that is included in the consumer applications as a dependency. Maybe it is the client-side library of the backend framework described above. If it is in Java, for example, you may have certified it to run on a JEE server, provided Spring Boot Starters, Builders, Factories, and other implementations that are all hidden behind a clean Java interface. You may have even backported it to .Net too.
With Kubernetes operators and sidecars all of that is hidden from the consumer. The factory classes are replaced by the operator, and the only configuration interface is a YAML file for the custom resource. The operator is then responsible for configuring the software and the platform so that users can consume it as an explicit sidecar, or a transparent proxy. In all cases, your application is available for consumption over remote API and fully integrated with the platform features and even other dependent operators. Let’s see how that happens.
Software Consumed over Remote APIs Rather than Embedded Libraries
One way to think about sidecars is similar to the composition over inheritance principle in OOP, but in a polyglot context. It is a different way of organizing the application responsibilities by composing capabilities from different processes rather than including them in a single application runtime as dependencies. When you consume software as a library, you instantiate a class and call its methods by passing some value objects. When you consume it as an out-of-process capability, you access a local process. In this model, methods are replaced with APIs, in-process method invocations with HTTP or gRPC invocations, and value objects with something like CloudEvents. This is a change from application servers to Kubernetes as the distributed runtime. A change from language-specific interfaces to remote APIs. From in-memory calls to HTTP, from value objects to CloudEvents, etc.
This requires software providers to distribute containers and controllers to operate them. To create IDEs that are capable of building and debugging multiple runtime services locally. CLIs for quickly deploying code changes into Kubernetes and configuring the control planes. Compilers that can decide what to compile in a custom application runtime, what capabilities to consume from a sidecar and what from the orchestration platform.
Software consumers and providers ecosystem
In the longer term, this will lead to the consolidation of standardized APIs that are used for the consumption of common primitives in sidecars. Rather than language-specific standards and APIs we will have polyglot APIs. For example, rather than Java Database Connectivity (JDBC) API, caching API for Java (JCache), Java Persistence API (JPA), we will have polyglot APIs over HTTP using something like CloudEvents. Sidecar centric APIs for messaging, caching, reliable networking, cron jobs and timer scheduling, resource bindings (connectors to other APIs, protocols), idempotency, SAGAs, etc. And all of these capabilities will be delivered with the management layer included in the form of operators and even wrapped with self-service UIs. The operators are key enablers here as they will make this even more distributed architecture easy to manage and self-operate on Kubernetes. The management interface of the operator is defined by the CustomResourceDefinition and represents another public-facing API that remains application-specific.
This is a big shift in mentality to a different way of distributing and consuming software, driven by the speed of delivery and operability. It is a shift from a single runtime to multi runtime application architectures. It is a shift similar to what the hardware industry had to go through from single-core to multicore platforms when Moore’s law ended. It is a shift that is slowly happening by building all the elements of the puzzle: we have uniformly adopted and standardized containers, we have a de facto standard for orchestration through Kubernetes, possibly improved sidecars coming soon, rapid operators adoption, CloudEvents as a widely agreed standard, light runtimes such as Quarkus, etc. With the foundation in place, applications, productivity tools, practices, standardized APIs, and ecosystem will come too.
This post was originally published at The New Stack here.
Why Spring Boot?
So why should you as a Java developer care about Spring Boot? Well there are many good reasons! 😊 First of all Spring is open source, meaning it is continuously maintained and tested by the community and it is free of charge. Second, according to Hotframeworks, it is the most widely used Java web framework of 2019. Third, there’s an excellent way to get your Spring application up and running quickly, which is where Spring Boot comes into play: thanks to Spring Boot, you don’t need to worry about a lot of the boilerplate code and configuration. Spring Boot automatically sets a lot of config defaults for you, but you can always overwrite those if needed. For that purpose, Spring Boot is opinionated, meaning the people in the Spring team chose some configs for you, but those are well accepted by the community.

Btw, why should we care about web frameworks at all? Well there are many items which are used over and over in typical web services, such as answering HTTP requests, spawning new threads for each incoming request, security mechanisms like HTTPS and OAUTH2 and so forth. We do not want to reinvent the wheel every time we create a new web service, and for that purpose we can use web frameworks with all those common mechanisms provided. Additional features from web frameworks include database access, scheduling of tasks, inversion of control etc. All these nice features are included in Spring Boot and thus you have more time for other stuff like drinking a good cappuccino☕ As a final introductory remark, let me mention that Spring is not only compatible with Java, but also with Kotlin, a language very popular for Android apps.
Prerequisites
We will now create a hello-world web service. All necessary code is given here, and the final solution is also available on my Github repo. Prerequisites to follow all the steps:
Maven
Java JDK 8 or higher
Command line
For this blog, we will do all the work from the command line. Alternatively, you can use an IDE like IntelliJ. In fact, I will soon release a post on IntelliJ and cover introductory topics like code completion, searching for a given code snippet in your project, compilation, debugging etc.
Using the Spring initializr
We use Maven as build tool, and Spring Boot offers a great way to create your POM file: Head over to https://start.spring.io/ and enter all the details of our app like below:
(Screenshot: the Spring initializr form filled in with the project details.)
You can use a newer version of Spring Boot and Java – of course - if you prefer to. Anyways, remember to add “Spring Web” as a starter dependency – we will use it for our REST endpoints. Once you have filled in all details, use the “GENERATE” button. This will download a ZIP file with the initial Java project structure, and most importantly, the initial pom.xml file. Let us have a closer look at the generated POM file. At the top of the POM, you can see we inherit from spring-boot-starter-parent, which contains all necessities for a Spring-Boot app.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.6.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
Further down in the POM, under dependencies, you can see we will use spring-boot-starter-web:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
You can find a nice description of this dependency on mvnrepository.com:
Starter for building web, including RESTful, applications using Spring MVC. Uses Tomcat as the default embedded container.
Anyways, so far, we have only looked at one important file: the pom.xml file. Next let us focus on the Main Class, which you can find under src/main/java/com/example/springbootexample/SpringBootExampleApplication.java:
@SpringBootApplication
public class SpringBootExampleApplication {

    public static void main(final String[] args) {
        SpringApplication.run(SpringBootExampleApplication.class, args);
    }
}
What is interesting here is just the annotation at the top: @SpringBootApplication. Among several things, this annotation makes sure our Spring Boot app gets configured with the default Spring Boot properties (like timeouts for HTTP requests and many, many other things).
Hello-World REST endpoint
Since we want to create a REST endpoint later on, we need our Main class to search for Servlets, and therefore we need to add one more annotation to our Main class: @ServletComponentScan (again, if today is your lazy day and you don’t want to do any coding, you can look at the completed code in my Github repo). Next, let us create a REST endpoint. For this purpose, we create a new Java class and call it PingRestController.java (you can use the same folder as for the Main class). The content of PingRestController.java should look like so:
package com.example.springbootexample;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingRestController {

    @RequestMapping(method = RequestMethod.GET, path = "/api/ping")
    public ResponseEntity<String> getPing() {
        return ResponseEntity.ok("pong");
    }
}
The annotation @RestController signifies that this class contains REST endpoints. Each REST endpoint is a method annotated with @RequestMapping. In this particular case, we have only one such method: getPing. This method is executed every time the corresponding REST call arrives at our server. Let us look in more detail at the @RequestMapping annotation: we specify a method and a path. These two attributes specify that we want to capture HTTP GET requests to the URI “/api/ping”. Also, note the return type of our getPing method: a ResponseEntity wraps the HTTP answer, and the HTTP body should be just a String. Thus, the response to the HTTP call will always look as follows:
Headers: Status: 200, ContentType: text/plain;charset=UTF-8
Body: "pong"
With the modified Main class and the PingRestController class, we have all pieces ready to run our service. In the terminal, type:
mvn clean install
java -jar target/spring-boot-example-0.0.1-SNAPSHOT.jar
Now, in your favorite web browser, type:
localhost:8080/api/ping
You should see the “pong” response! What happens in the background is that your browser fires a HTTP GET request to localhost, which is handled by your Spring Boot app and responded to with the String “pong”.
Integration Test
A great way to make sure our REST endpoint really works is by writing an integration test. This test will run every time we build our application. Why do we use integration tests? First, because we developers want to automate everything and do not like testing manually. Second, because this adds stability to future development: as our web service is extended, this test will still run with every build, and we can be sure this feature still works.

What is an integration test? Contrary to unit tests which only look at one class, an integration test is for our app as a whole, where all components get integrated together. We typically mock third party systems like databases, so we can test independent of (sometimes unreliable) surrounding systems. In our case, we want to really boot up our web service, but if we had a database, then we would just simulate it. We implement our integration test in src/test/java/com/example/springbootexample/PingIntegrationTest.java:
package com.example.springbootexample;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.test.web.servlet.MockMvc;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;

@ExtendWith(SpringExtension.class)
@SpringBootTest
@AutoConfigureMockMvc
public class PingIntegrationTest {

    @Autowired
    private MockMvc mvc;

    @Test
    public void testHelloWorldEndpoint() throws Exception {
        String response = mvc
                .perform(get("/api/ping"))
                .andReturn().getResponse().getContentAsString();
        assertEquals("Hello world", response);
    }
}
As you can see, testing a REST endpoint takes slightly more code. For now, let us just focus on the interesting bits, and I will leave it to you to understand every single line of code. So here are the important points:
@SpringBootTest: Start the web server locally and have it ready to answer to test REST calls
private MockMvc mvc: A MockMvc object allows us to fire test HTTP calls to our web server
@Test: Just as with JUnit tests, each test is implemented in a method annotated with @Test.
mvc.perform(get("/api/ping")): Here we trigger an HTTP GET request
You can run the test with the following command:
mvn -Dtest=PingIntegrationTest test
Aaaaand... the integration test fails🙈 What happened? Well no need to worry, there is just a tiny error in the above test case. I will leave it up to you to find the error cause and fix it!
Spring Configuration
Spring Boot automatically configures many properties of your service. For example, when we pinged our own service, we had to use port 8080. Now we did not define this anywhere… this is just a Spring Boot default. All those default properties can be found in the official docu here. To change this default behavior, all we need to do is create an application.properties file and overwrite the appropriate value. So, go ahead and modify src/main/resources/application.properties:
server.port=8082
Now, when you recompile and restart the service, the REST endpoint will be available on port 8082.
Conclusion
We have covered all code necessary to create a simple REST service. Using Spring Boot, we just needed a total of 23 lines of Java code to create a working REST endpoint! Moreover, there was zero XML configuration needed. Pretty cool!
our-srujanksk · 6 years
python stream tarfile member to named pipe | python linux file io
http://timesofksk.com/python-stream-tarfile-member-to-named-pipe-python-linux-file-io/
Spring boot return application/pdf with a ResponseEntity<Resource> | rest spring-boot pdf http://timesofksk.com/spring-boot-return-application-pdf-with-a-responseentity-rest-spring-boot-pdf/
How do you turn a Google Services oAuth2 into a Google Ads API oAuth2 access | php google-api google-adwords google-oauth2 http://timesofksk.com/how-do-you-turn-a-google-services-oauth2-into-a-google-ads-api-oauth2-access-php-google-api-google-adwords-google-oauth2/
Could not find codec parameter for webcam in ffmpeg | ffmpeg centos webcam v4l2 http://timesofksk.com/could-not-find-codec-parameter-for-webcam-in-ffmpeg-ffmpeg-centos-webcam-v4l2/
Android XML “match_parent” RecyclerView width issue | android xml design android-recyclerview http://timesofksk.com/android-xml-match_parent-recyclerview-width-issue-android-xml-design-android-recyclerview/
abhilashkrish · 7 years
CRYPTOBOT - CRYPTOCURRENCY BOT SOFTWARE DESIGN SPECIFICATION
INTRODUCTION
CRYPTOBOT is designed to perform automatic lending of Cryptocurrencies through Poloniex, Bitfinex and Bittrex Exchanges.
Additionally, 50+ Cryptocurrency Exchanges can be integrated and traded through the Platform.
APPLICATION
CRYPTOBOT User Interface will be a SPA (SINGLE PAGE APPLICATION) powered by Angular JS v 4.3.0 and Spring Boot and a host of other components, frameworks and APIs hosted on Amazon Cloud.
TECHNOLOGY STACK
AMAZON (EC2) CLOUD HOSTED PLATFORM
ANGULAR JS
JAVA 1.8
SPRING BOOT
SPRING MVC REST
SPRING DATA JPA
HAZELCAST IMDG
TOMCAT EMBEDDED
POSTGRESQL
GOOGLE AUTHENTICATOR
AWS ELB
AWS JAVA SDK
DOCKER SWARM
POLONIEX API
BITFINEX API
BITTREX API
THE PLATFORM
The platform will comprise of user account management, referral system, cryptocurrency lending, lending history, third-party exchanges, administration, charts and Bot components.
The platform will be able to lend out multiple Cryptocurrencies through multiple Exchanges.
The platform will be built with extensibility, scalability and high availability.
The platform will be able to serve hundreds of thousands of users.
The platform will be designed for 99.9% uptime.
Entire operations of the platform will be automated.
USER INTERFACE AND FUNCTIONS
HOME PAGE
The Page will have a header to choose multiple languages. The default language can be configured.
The Page will provide a button to Setup Bot
On click of Setup Bot button, options for User Registration and User Authentication are displayed.
The Page will also display current lending rates of Cryptocurrencies from supported exchanges.
USER REGISTRATION
Registration page with form inputs for Name, Email address, Password is displayed.
On form submission page will send HTTP POST request to the platform REST API
REST API process the registration request and if successful send a confirmation email to the user’s supplied Email address.
REST API will send a response back to the Registration page as Success or Failed.
If success, Authentication page will be loaded.
USER AUTHENTICATION
Authentication page with form inputs for Email address and Password is displayed.
On form submission page will send HTTP POST request to the Platform OAuth2 REST API
REST API will verify the form input and issues an access token if successful authentication.
Page will receive the access token and Google Authenticator token form is displayed.
On submission of Google Authenticator token, page will send a request for validation from the platform.
If Google Authenticator token is valid, User Home page is loaded.
If Google Authenticator token is invalid, OAuth2 token is invalidated and Login form will be loaded again.
USER HOME
User Home page will be loaded with Header elements containing Logo, Account, and Logout with respective icons are displayed.
A language chooser drop-down box will also be displayed.
The page will also contain eight Tabs – Settings, My Loans, My Interest, My Deposits, My Withdrawals, Transfer Balance, My Referrals, Bot fee with respective icons.
There will be a Notification Tab also displayed to configure Notification settings.
For the first time user, Settings tab will be displayed.
USER HOME – SETTINGS TAB
Settings Tab will have four display boxes. Three boxes for Exchange Bot activation. One box for Settings.
Three exchanges will be supported at the moment; Poloniex, Bitfinex and Bittrex.
SETTINGS TAB - BOT
The Bot display box will have Exchange name, ON/OFF slider button to enable or disable Bot, Settings for enabling Coins for the exchange.
SETTINGS TAB – ENABLE BOT
Once ON slider button is enabled, User has to provide Exchange API Key and Secret.
Page will send the request to the Platform REST API with access token retrieved from the cookie.
Bot creation and the launcher will be explained in the Platform Design Section.
SETTINGS TAB – DISABLE BOT
If the OFF slider is enabled in the Bot display box, a disable request will sent to the Platform with access token retrieved from the cookie or browser local storage.
Disabling of Bot will be explained in the Platform Design Section.
USER HOME – MY LOANS TAB
A tabular data will be displayed for the loans of the user.
Exchange, Token, State, Duration, End time, Amount lent out, the Interest rate will be the information displayed in the tabular structure.
User’s total Coin balances and the current interest rate will be displayed.
A hyperlink will be provided to view forecast value gain at the current interest rate and constant price for 50 years.
On clicking the hyperlink, a Line Graph will be displayed with Date on the X-Axis and $ value on the Y-Axis.
A pie chart will also be provided to show the current balance in $ for the subscribed exchanges.
USER HOME – MY INTEREST TAB
The page will display the total lending balance and total interest (30d)
A tabular structure with Exchange, Coin, Balance, Balance($), Interest (30d), Interest (30d $) information will be displayed.
Pie chart with Balances of each Exchange subscribed will be displayed.
Pie chart with Interest from each Exchange subscribed will be displayed.
USER HOME – MY DEPOSITS
A form with an input box, currency chooser, and exchange chooser will be displayed.
The User can deposit currencies to various Exchanges from the page.
On submission of the form, REST API will connect to Exchange and deposit the amount into the Exchange account.
USER HOME - MY WITHDRAWALS
A tabular data of Currency, Exchange, Balance and Withdraw button will be displayed.
On click of Withdraw button, a request will be sent to REST API.
REST API will connect to Exchange and the requested amount is withdrawn from the account.
The updated balance will be displayed in the tabular data for the withdrawn currency.
USER HOME – TRANSFER BALANCE
A tabular data will be displayed with Currency, Exchange, and Balance Amount.
A form will also be displayed with two Currency choosers and Exchange chooser for transferring the balance from one currency to another in the Exchange.
A button will be available to perform the Balance Transfer.
On click of the button, REST API will be invoked and balance transfer operation will be performed in the Exchange.
The balance will be updated in the Exchange account and displayed in the tabular data.
USER HOME – MY REFERRALS
The User of the CRYPTOBOT system can refer another user through Email or SMS.
Both the parties involved in the Referral process will get Bonus coins in their account balance.
BOT FEE
This tab will display the Bot fees accrued through lending operations by User from the subscribed exchanges through CRYPTOBOT.
For the lender, a 15% fee will be applied to earned Interest.
The Bot fee can be configured in the system.
NOTIFICATION TAB – NOTIFICATION SETTINGS
ON/OFF Slider buttons will be provided for Notifications by Email and SMS.
The Notification system will be explained in the Platform Design Section.
ADMIN USER INTERFACE
The Admin User Interface will be used to perform administrative functions from the CRYPTOBOT System.
The Admin UI will be used to activate or de-active User accounts on suspicious activity.
The Bot settings and parameters can be configured from Admin User Interface.
The Lending settings and configuration can also be applied.
Addition or Removal of Exchanges from the system.
Addition or Removal of trading currencies.
CHARTS - HIGHCHARTS
Highcharts JS is a JavaScript charting library based on SVG, with fallbacks to VML and canvas for old browsers.
CRYPTOBOT charts will be displayed by availing Highcharts Graph APIs and chart components.
LENDING STRATEGY & ALGORITHMS
ALGORITHMS & FUNCTIONS
Lending Algorithm
Finely divided lending
Referral program
Lending fee
Short sale
Long sale
Loan Orders
Forced Liquidation
Order book
POLONIEX LENDING ALGORITHM
Automatically lend your coins on Poloniex at the highest possible rates, 24 hours a day.
Configure your own lending strategy! Be aggressive and hold out for a great rate or be conservative and lend often but at a lower rate, your choice!
The ability to spread your offers out to take advantage of spikes in the lending rate.
Withhold lending a percentage of your coins until the going rate reaches a certain threshold to maximize your profits.
Lock in a high daily rate for a longer period of time period of up to sixty days, all configurable!
Automatically transfer any funds you deposit (configurable on a coin-by-coin basis) to your lending account instantly after deposit.
View a summary of your bot's activities, status, and reports via an easy-to-set-up webpage that you can access from anywhere!
Choose any currency to see your profits in, even show how much you are making in USD!
Select different lending strategies on a coin-by-coin basis.
Run multiple instances of the bot for multiple accounts easily using multiple config files.
Configure a date you would like your coins back, and watch the bot make sure all your coins are available to be traded or withdrawn at the beginning of that day.
BITFINEX LENDING ALGORITHM – MARGIN BOT
Margin Bot is designed to manage 1 or more bitfinex accounts, doing its best to keep any money in the "deposit" wallet lent out at the highest rate possible while avoiding long periods of pending loans (as often happens when using the Flash Return Rate, or some other arbitrary rate). There are numerous options and setting to tailor the bot to your requirements.
MinDailyLendRate. The lowest daily lend rate to use for any offer except the HighHold, as it is a special case ( a warning message is shown in case HighHoldDailyRate < MinDailyLendRate).
SpreadLend. The number of offers to split the available balance uniformly across the [GapTop, GapBottom] range. If set to 1 all balance will be offered at the rate of GapBottom position.
GapBottom. The depth of lendbook (in volume) to move through before placing the first offer. If set to 0 the first offer will be placed at the rate of the lowest ask.
GapTop. The depth of lendbook (in volume) to move through before placing the last offer. If SpreadLend is set to >1 all offers will be distributed uniformly in the [GapTop, GapBottom] range.
ThirtyDayDailyThreshold. Daily lend rate threshold after which we offer lends for 30 days as opposed to 2. If set to 0 all offers will be placed for a 2 day period.
HighHoldDailyRate. Special High Hold offer for keeping a portion of wallet balance at a much higher daily rate. Does not count towards SpreadLend parameter. Always offered for 30 day period.
HighHoldAmount. The amount of currency to offer at the HighHoldDailyRate rate. Does not count towards SpreadLend parameter. Always offered for 30 day period. If set to 0 High Hold offer is not made.
BITFINEX LENDING ALGORITHM – CASCADE BOT
The Cascade Bot lending strategy is modified so that the starting daily lend rate is not defined as an absolute value, but rather as an increment (which can also be negative) to the FRR.
Cascade lending bot for Bitfinex. Places lending offers at a high rate, then gradually lower them until they're filled.
This is intended as a proof of concept alternative to fractional reserve rate (FRR) loans. FRR lending heavily distorts the swap market on Bitfinex. My hope is that Bitfinex will remove the FRR, and implement an on-site version of this bot for lazy lenders (myself included) to use instead.
StartDailyLendRateFRRInc Float. The starting rate of FRR + StartDailyLendRateFRRInc that offers will be placed at.
ReduceDailyLendRate Float. The rate at which to reduce already existing offers every ReductionIntervalMinutes minutes.
MinDailyLendRate Float. The minimum daily lend rate that you're willing to lend at.
LendPeriod Integer. The period for lend offers.
ReductionIntervalMinutes Float. How often should the unlent offers` rate be decremented. Note that this parameter should be more than or equal to the interval at which bot is scheduled to run (usually 10 minutes).
ExponentialDecayMult Float. Exponential decay constant which sets the decay rate. Set to 1 for a linear decay. Decay formula: NewDailyRate = (CurrentDailyRate - MinDailyLendRate) * ExponentialDecayMult + MinDailyLendRate.
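Read literally, one reduction step of that formula can be computed as below (a sketch; the names mirror the configuration parameters above, and the sample numbers are illustrative):

public class CascadeDecay {

    // Decays the gap between the current rate and MinDailyLendRate by
    // ExponentialDecayMult each interval; with a multiplier of 1 this step
    // leaves the rate unchanged, so only the flat ReduceDailyLendRate
    // decrement applies (the linear case described above).
    static double nextDailyRate(double currentDailyRate,
                                double minDailyLendRate,
                                double exponentialDecayMult) {
        return (currentDailyRate - minDailyLendRate) * exponentialDecayMult
                + minDailyLendRate;
    }

    public static void main(String[] args) {
        // Example: a 0.05% daily rate decaying toward a 0.01% floor.
        double rate = 0.05;
        for (int step = 1; step <= 3; step++) {
            rate = nextDailyRate(rate, 0.01, 0.5);
            System.out.printf("after step %d: %.4f%n", step, rate);
        }
        // Prints 0.0300, 0.0200, 0.0150 - the gap halves each interval.
    }
}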
PLATFORM DESIGN
PLATFORM DESIGN – REST API
The Platform will be built using Spring Boot technology with Tomcat Embedded server.
Spring MVC REST APIs exposed from the platform communicates with the web app built using Angular JS v 2.0 or above.
Data access and storage in the platform will be performed using Spring Data middleware components.
In order to speed up the operations in the Platform Hazelcast In Memory Data Grid (IMDG) will be used.
Docker Swarm will be used as container in which Platform components run during development, testing and production deployments.
The Platform will be deployed on Amazon Elastic Compute Cloud.
API ENDPOINTS
The platform will serve requests from web app through Spring MVC REST APIs. The following REST APIs are identified:
/register
/authenticate
/googleauthenticator
/logout
/account
/deposit
/withdraw
/transfer
/createbot
/activatebot
/deactivatebot
/myloans
/loanbalanceBTC
/exchangerateBTC
/exchangerate$
/minrateperday
/mindepositrent
/manualspreadsize
/maxlending
/maxdurationlending
/config
/myinterest
/lendingbalance
/totalinterest
/coinbalanceBTC
/currentrateBTC
/currentBalance$
/currentexchangerate$
/forecast
/balancechart
/interestchart
/history
/notificationsettingsemail
/notificationsettingsms
/notifyemail
/notifysms
PLATFORM DESIGN – BOT CREATION
Multiple Bots can be created from the User Home Settings box. User needs to provide API Key and Secret of the Exchange to create the Bot.
Bot creation request will be sent to Platform REST API with the access token.
Bot creation REST API will send the request to Exchange through REST APIs exposed by Exchange with API Key and Secret provided by Exchange.
On successful response from Exchange by verifying the credentials, the Platform will move to Bot creation process.
On unsuccessful response from Exchange, Platform REST API will send the response to the user as Bot creation Failed.
PLATFORM DESIGN - BOT ACTIVATION
Once Bot activation request is received from the Platform, Bot configuration settings for respective Exchange will be loaded from Database or Hazelcast IMDG.
The AWS Java SDK will be used to clone an already created t2.nano or t2.micro Ubuntu AMI (Amazon Machine Image) running an Embedded Tomcat instance, in an automated fashion from the Platform.
The Embedded Tomcat will also expose a Web Socket Endpoint for real-time communication with the Platform and User such as Loan offers and Loan demands, Interest rates, Exchange rates.
The Bot Host Name will be stored in the Hazelcast as a value along with Encrypted API Key and Email address of the user.
PLATFORM DESIGN – MULTI-THREADED BOT COMPONENT
The Multi-threaded Bot component will be launched when the Bot is launched and it will use the Bot settings from Database or Hazelcast cache if available.
The Multi-threaded Bot component will be connected to the Exchange to perform periodic operations.
The components will be configured for active, inactive sleep and request timeout scenarios.
The User communicates with the Bot through Web Socket Session via Platform.
The responses from connected Exchanges are sent back to User through the Web Socket Session object.
PLATFORM DESIGN – BOT AMI INSTANCE LAUNCHER
An Ubuntu t2.nano or t2.micro instance with an Embedded Tomcat will be cloned from an existing Ubuntu AMI instance in Amazon Elastic Computing Cloud.
In the init script of existing Ubuntu AMI instance, Embedded Tomcat will be configured to launch at startup.
Once a request for the launch of Bot is received AWS Java SDK will be used to clone the t2.nano or t2.micro existing Ubuntu AMI instance and manage the lifecycle of AMI instance.
PLATFORM DESIGN – EXCHANGE API INTEGRATION
Platform and Bot will connect to Exchange APIs to perform Lending operations.
Spring REST client APIs will be used to connect to Exchange REST APIs.
PLATFORM DESIGN – DOCKER
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
Autorestart is a service-level setting in Docker that can automatically start the containers if they stop or crash.
A Docker Swarm is a cluster of Docker engines, or nodes, to deploy services.
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm.
Docker Cloud makes it easy to spawn new containers of services to handle the additional load.
The CRYPTOBOT platform will be shipped to Test, Integration and Production environment using Docker Swarm deployed on Amazon EC2.
PLATFORM DESIGN - NOTIFICATIONS
The lending activities and alerts can be configured in the system.
The notification settings such as SMTP Server, SMS Gateway can be retrieved from Database or Hazelcast cache if available.
The platform will send notifications or alerts to the subscribed users as and when notifications or alerts are received in the system.
The notifications window can be configured in the system database.
DATABASE DESIGN
PostgreSQL database will be used to store data.
User Account, Referral System, Lending, Loans, Admin, History, Exchanges, Sale, Buy, Balance, Currencies, Charts, Notifications, Bots are the identified tables for the Database.
Spring Data JPA will be used for Data Access operations from the Database.
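As an illustrative sketch (the Loan entity's fields and the repository methods are hypothetical, derived from the tables and screens described above), a Spring Data JPA repository needs little more than an interface, since query implementations are derived from the method names at runtime:

import java.math.BigDecimal;
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Loan {
    @Id
    @GeneratedValue
    private Long id;
    private long userId;
    private String exchange;   // e.g. Poloniex, Bitfinex, Bittrex
    private String state;      // e.g. OPEN, CLOSED
    private BigDecimal amount;
    // Getters and setters omitted for brevity.
}

// No hand-written DAO code is required for these queries.
interface LoanRepository extends JpaRepository<Loan, Long> {

    List<Loan> findByUserIdAndExchange(long userId, String exchange);

    List<Loan> findByStateAndAmountGreaterThan(String state, BigDecimal amount);
}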