digvijayb
64 posts
beAwesome();
digvijayb · 4 years ago
Link
0 notes
digvijayb · 6 years ago
Text
Angular Virtual Scrolling — ngVirtualScrolling
https://medium.com/better-programming/angular-virtual-scrolling-ngvirtualscrolling-159e1c66c63b
Learn how to implement virtual scrolling in Angular 7
0 notes
digvijayb · 6 years ago
Text
Learn Enough Docker to be Useful
Docker Images
Recall that a Docker container is a Docker image brought to life. It’s a self-contained, minimal operating system with application code.
The Docker image is created at build time and the Docker container is created at run time.
The Dockerfile is at the heart of Docker. The Dockerfile tells Docker how to build the image that will be used to make containers.
A Dockerfile is a plain text file named Dockerfile, with no extension. The Dockerfile is assumed to be in the current working directory when docker build is called to create an image. A different location can be specified with the file flag (-f).
Recall that a container is built from a series of layers. Each layer is read only, except the final container layer that sits on top of the others. The Dockerfile tells Docker which layers to add and in which order to add them.
Each layer is really just a file with the changes since the previous layer. In Unix, pretty much everything is a file.
The base image provides the initial layer(s). A base image is also called a parent image.
When an image is pulled from a remote repository to a local machine only layers that are not already on the local machine are downloaded. Docker is all about saving space and time by reusing existing layers.
A Dockerfile instruction is a capitalized word at the start of a line followed by its arguments. Each line in a Dockerfile can contain an instruction. Instructions are processed from top to bottom when an image is built. Instructions look like this:
FROM ubuntu:18.04
COPY . /app
Only the instructions FROM, RUN, COPY, and ADD create layers in the final image. Other instructions configure things, add metadata, or tell Docker to do something at run time, such as expose a port or run a command.
In this article, I’m assuming you are using a Unix-based Docker image. You can also use Windows-based images, but that’s a slower, less-pleasant, less-common process. So use Unix if you can.
Let’s do a quick once-over of the dozen Dockerfile instructions we’ll explore.
A Dozen Dockerfile Instructions
FROM — specifies the base (parent) image.
LABEL — provides metadata. A good place to include maintainer info.
ENV — sets a persistent environment variable.
RUN — runs a command and creates an image layer. Used to install packages into containers.
COPY — copies files and directories to the container.
ADD — copies files and directories to the container. Can unpack local .tar files.
CMD — provides a command and arguments for an executing container. Parameters can be overridden. There can be only one CMD.
WORKDIR — sets the working directory for the instructions that follow.
ARG — defines a variable to pass to Docker at build time.
ENTRYPOINT — provides command and arguments for an executing container. Arguments persist.
EXPOSE — exposes a port.
VOLUME — creates a directory mount point to access and store persistent data.
Let’s get to it!
Instructions and Examples
A Dockerfile can be as simple as this single line:
FROM ubuntu:18.04
FROM
A Dockerfile must start with a FROM instruction or an ARG instruction followed by a FROM instruction.
The FROM keyword tells Docker to use a base image that matches the provided repository and tag. A base image is also called a parent image.
In this example, ubuntu is the image repository. Ubuntu is the name of an official Docker repository that provides a basic version of the popular Ubuntu Linux distribution.
Notice that this Dockerfile includes a tag for the base image: 18.04. This tag tells Docker which version of the image in the ubuntu repository to pull. If no tag is included, Docker assumes the latest tag by default. To make your intent clear, it’s good practice to specify a base image tag.
When the Dockerfile above is used to build an image locally for the first time, Docker downloads the layers specified in the ubuntu image. The layers can be thought of as stacked upon each other. Each layer is a file with the set of differences from the layer before it.
When you create a container, you add a writable layer on top of the read-only layers.
Docker uses a copy-on-write strategy for efficiency. If a layer exists at a previous level within an image, and another layer needs read access to it, Docker uses the existing file. Nothing needs to be downloaded.
When an image is running, if a layer needs to be modified by a container, then that file is copied into the top, writeable layer. Check out the Docker docs here to learn more about copy-on-write.
A More Substantive Dockerfile
Although our one-line image is concise, it’s also slow, provides little information, and does nothing at container run time. Let’s look at a longer Dockerfile that builds a much smaller image and executes a script at container run time.
FROM python:3.7.2-alpine3.8
LABEL maintainer="[email protected]"
ENV ADMIN="jeff"

RUN apk update && apk upgrade && apk add bash

COPY . ./app

ADD https://raw.githubusercontent.com/discdiver/pachy-vid/master/sample_vids/vid1.mp4 \
/my_app_directory

RUN ["mkdir", "/a_directory"]

CMD ["python", "./my_script.py"]
Whoa, what’s going on here? Let’s step through it and demystify.
The base image is an official Python image with the tag 3.7.2-alpine3.8. As you can see from its source code, the image includes Linux, Python and not much else. Alpine images are popular because they are small, fast, and secure. However, Alpine images don’t come with many operating system niceties. You must install such packages yourself, should you need them.
LABEL
The next instruction is LABEL. LABEL adds metadata to the image. In this case, it provides the image maintainer’s contact info. Labels don’t slow down builds or take up space and they do provide useful information about the Docker image, so definitely use them. More about LABEL metadata can be found here.
ENV
ENV sets a persistent environment variable that is available at container run time. In the example above, you could use the ADMIN variable when your Docker container is created.
ENV is nice for setting constants. If you use a constant in several places in your Dockerfile and want to change its value at a later time, you can do so in one location.
With Dockerfiles there are often multiple ways to accomplish the same thing. The best method for your case is a matter of balancing Docker conventions, transparency, and speed. For example, RUN, CMD, and ENTRYPOINT serve different purposes, and can all be used to execute commands.
RUN
RUN creates a layer at build-time. Docker commits the state of the image after each RUN.
RUN is often used to install packages into an image. In the example above, RUN apk update && apk upgrade tells Docker to update the packages from the base image. && apk add bash tells Docker to install bash into the image.
apk stands for Alpine Linux package manager. If you’re using a Linux base image in a flavor other than Alpine, then you’d install packages with RUN apt-get instead of apk. apt stands for advanced package tool. I’ll discuss other ways to install packages in a later example.
RUN — and its cousins, CMD and ENTRYPOINT — can be used in exec form or shell form. Exec form uses JSON array syntax like so: RUN ["my_executable", "my_first_param1", "my_second_param2"].
In the example above, we used shell form in the format RUN apk update && apk upgrade && apk add bash.
Later in our Dockerfile we used the preferred exec form with RUN ["mkdir", "/a_directory"] to create a directory. Don’t forget to use double quotes for strings with JSON syntax for exec form!
COPY
The COPY . ./app instruction tells Docker to take the files and folders in your local build context and add them to the Docker image’s current working directory. COPY will create the target directory if it doesn’t exist.
ADD
ADD does the same thing as COPY, but has two more use cases. ADD can be used to move files from a remote URL to a container and ADD can extract local TAR files.
I used ADD in the example above to copy a file from a remote url into the container’s my_app_directory. The Docker docs don’t recommend using remote urls in this manner because you can’t delete the files. Extra files increase the final image size.
The Docker docs also suggest using COPY instead of ADD whenever possible for improved clarity. It’s too bad that Docker doesn’t combine ADD and COPY into a single command to reduce the number of Dockerfile instructions to keep straight 😃.
Note that the ADD instruction contains the \ line continuation character. Use it to improve readability by breaking up a long instruction over several lines.
CMD
CMD provides Docker a command to run when a container is started. It does not commit the result of the command to the image at build time. In the example above, CMD will have the Docker container run the my_script.py file at run time.
A few other things to know about CMD:
Only one CMD instruction per Dockerfile. Otherwise all but the final one are ignored.
CMD can include an executable. If CMD is present without an executable, then an ENTRYPOINT instruction must exist. In that case, both CMD and ENTRYPOINT instructions should be in JSON format.
Command line arguments to docker run override arguments provided to CMD in the Dockerfile.
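For instance, assuming the image above were built and tagged as my_image (a hypothetical name, not from the original), the difference looks like this:
docker run my_image          # runs the CMD from the Dockerfile: python ./my_script.py
docker run -it my_image bash # the trailing bash replaces CMD, so a shell starts instead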
Ready for more?
Let’s introduce a few more instructions in another example Dockerfile.
FROM python:3.7.2-alpine3.8
LABEL maintainer="[email protected]"

# Install dependencies
RUN apk add --update git

# Set current working directory
WORKDIR /usr/src/my_app_directory

# Copy code from your local context to the image working directory
COPY . .

# Set default value for a variable
ARG my_var=my_default_value

# Set code to run at container run time
ENTRYPOINT ["python", "./app/my_script.py", "my_var"]

# Expose our port to the world
EXPOSE 8000

# Create a volume for data storage
VOLUME /my_volume
Note that you can use comments in Dockerfiles. Comments start with #.
Package installation is a primary job of Dockerfiles. As touched on earlier, there are several ways to install packages with RUN.
You can install a package in an Alpine Docker image with apk. apk is like apt-get in regular Linux builds. For example, packages in a Dockerfile with a base Ubuntu image can be updated and installed like this: RUN apt-get update && apt-get install my_package.
In addition to apk and apt-get, Python packages can be installed through pip, wheel, and conda. Other languages can use various installers.
The underlying layers need to provide the install layer with the relevant package manager. If you’re having an issue with package installation, make sure the package managers are installed before you try to use them. 😃
You can use RUN with pip and list the packages you want installed directly in your Dockerfile. If you do this, concatenate your package installs into a single instruction and break it up with line continuation characters (\). This method provides clarity and fewer layers than multiple RUN instructions.
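For example, a rough sketch of a single concatenated pip install (the package names here are placeholders, not from the original article):
RUN pip install --no-cache-dir \
    requests \
    numpy \
    pandas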
Alternatively, you can list your package requirements in a file and RUN a package manager on that file. Folks usually name the file requirements.txt. I’ll share a recommended pattern to take advantage of build time caching with requirements.txt in the next article.
WORKDIR
WORKDIR changes the working directory in the container for the COPY, ADD, RUN, CMD, and ENTRYPOINT instructions that follow it. A few notes:
It’s preferable to set an absolute path with WORKDIR rather than navigate through the file system with cd commands in the Dockerfile.
WORKDIR creates the directory automatically if it doesn’t exist.
You can use multiple WORKDIR instructions. If relative paths are provided, then each WORKDIR instruction changes the current working directory.
ARG
ARG defines a variable to pass from the command line to the image at build-time. A default value can be supplied for ARG in the Dockerfile, as it is in the example: ARG my_var=my_default_value.
Unlike ENV variables, ARG variables are not available to running containers. However, you can use ARG values to set a default value for an ENV variable from the command line when you build the image. Then, the ENV variable persists through container run time. Learn more about this technique here.
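As a small sketch of that pattern (the variable names are made up for illustration), the ARG default feeds the ENV value:
ARG admin_user=jeff
ENV ADMIN=$admin_user
Building with docker build --build-arg admin_user=sally . would then bake a different value into ADMIN, which persists into the running container.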
ENTRYPOINT
The ENTRYPOINT instruction also allows you to provide a default command and arguments when a container starts. It looks similar to CMD, but ENTRYPOINT parameters are not overwritten if a container is run with command line parameters.
Instead, command line arguments passed to docker run my_image_name are appended to the ENTRYPOINT instruction’s arguments. For example, docker run my_image bash adds the argument bash to the end of the ENTRYPOINT instruction’s existing arguments.
A Dockerfile should have at least one CMD or ENTRYPOINT instruction.
The Docker docs have a few suggestions for choosing between CMD and ENTRYPOINT for your initial container command:
Favor ENTRYPOINT when you need to run the same command every time.
Favor ENTRYPOINT when a container will be used as an executable program.
Favor CMD when you need to provide extra default arguments that could be overwritten from the command line.
In the example above, ENTRYPOINT ["python", "./app/my_script.py", "my_var"] has the container run the Python script my_script.py with the argument my_var when the container starts running. my_var could then be used by my_script via argparse. Note that my_var has a default value supplied by ARG earlier in the Dockerfile, so if an argument isn’t passed from the command line, then the default value is used.
Docker recommends you generally use the exec form of ENTRYPOINT: ENTRYPOINT ["executable", "param1", "param2"]. This form is the one with JSON array syntax.
EXPOSE
The EXPOSE instruction shows which port is intended to be published to provide access to the running container. EXPOSE does not actually publish the port. Rather, it acts as documentation between the person who builds the image and the person who runs the container.
Use docker run with the -p flag to publish and map one or more ports at run time. The uppercase -P flag will publish all exposed ports.
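For example, to run the image above and publish its exposed port (the host port choice is arbitrary):
docker run -p 8000:8000 my_image   # map host port 8000 to container port 8000
docker run -P my_image             # publish all EXPOSEd ports to random host ports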
VOLUME
VOLUME specifies where your container will store and/or access persistent data. Volumes are the topic of a forthcoming article in this series, so we’ll investigate them then.
Let’s review the dozen Dockerfile instructions we’ve explored.
Important Dockerfile Instructions
FROM — specifies the base (parent) image.
LABEL — provides metadata. A good place to include maintainer info.
ENV — sets a persistent environment variable.
RUN — runs a command and creates an image layer. Used to install packages into containers.
COPY — copies files and directories to the container.
ADD — copies files and directories to the container. Can unpack local .tar files.
CMD — provides a command and arguments for an executing container. Parameters can be overridden. There can be only one CMD.
WORKDIR — sets the working directory for the instructions that follow.
ARG — defines a variable to pass to Docker at build time.
ENTRYPOINT — provides command and arguments for an executing container. Arguments persist.
EXPOSE — exposes a port.
VOLUME — creates a directory mount point to access and store persistent data.
Now you know a dozen Dockerfile instructions to make yourself useful! Here’s a bonus bagel: a cheat sheet with all the Dockerfile instructions. The five commands we didn’t cover are USER, ONBUILD, STOPSIGNAL, SHELL, and HEALTHCHECK. Now you’ve seen their names if you come across them. 😃
Wrap
Dockerfiles are perhaps the key component of Docker to master. I hope this article helped you gain confidence with them. We’ll revisit them in the next article in this series on slimming down images. Follow me to make sure you don’t miss it!
If you found this article helpful, please help others find it by sharing on your favorite social media.
Credit : https://towardsdatascience.com/learn-enough-docker-to-be-useful-b0b44222eef5
0 notes
digvijayb · 6 years ago
Text
gRPC Unary API using Java
gRPC — the modern, lightweight communication protocol from Google. It’s a high-performance, open-source universal remote procedure call (RPC) framework that works across a dozen languages running on any OS. gRPC declares the service in a language-agnostic Interface Definition Language (IDL) and then generates language-specific bindings.
gRPC is designed to make the clients believe that the server is on the same machine. Clients invoke a method on the Stub, which gets transparently handled by the underlying protocol.
gRPC’s secret sauce lies in the way the serialization is handled. It is based on Protocol Buffers, an open source mechanism for serializing structured data, which is language and platform neutral. Like XML, Protocol Buffers describe structured data, but they are smaller, faster, and more efficient than other wire-format protocols. Any custom data type that needs to be serialized is defined as a Protocol Buffer in gRPC.
The latest version of Protocol Buffer is proto3, which supports code generation in Java, C++, Python, Java Lite, Ruby, JavaScript, Objective-C, and C#. When a Protocol Buffer is compiled for a specific language, it comes with accessors (setters and getters) for each field definition.
When compared to the REST+JSON combination, gRPC offers better performance and security. It heavily promotes the use of SSL/TLS to authenticate the server and to encrypt all the data exchanged between the client and the server.
Why should microservices developers use gRPC? It uses HTTP/2 to support highly performant and scalable APIs. The use of binary rather than text keeps the payload compact and efficient. HTTP/2 requests are multiplexed over a single TCP connection, allowing multiple concurrent messages to be in flight without compromising network resource usage. It uses header compression to reduce the size of requests and responses.
Project Setup.
Let’s make a simple client/server system in Java and send messages within it. We start by creating a brand new Gradle project.
Then choose a path to Gradle. Also, don’t forget to turn on Auto-Import. Here we set the path to a Gradle installation set up via Homebrew, but it can live in any other location depending on your OS. Or you can simply use the default Gradle wrapper option if you have it in your IDE.
The last thing is the root module name; we leave it as the default project name — users.
Dependencies.
Open your build.gradle file. The first lines define the project name and version.
Next, we should apply the required plugins:
java — this plugin adds compilation and building capabilities for Java along with testing;
idea — this plugin is for IntelliJ users only;
com.google.protobuf — this plugin adds support for the Protobuf library.
Let’s add Java 8 compatibility to the project.
And the repositories for our dependencies.
Let’s add 3 dependencies to support gRPC and Protobuf in our application. Here we should add:
grpc-netty — for the networking layer;
grpc-protobuf — for the Protobuf layer;
grpc-stub — to support gRPC stubs.
We will generate gRPC support classes from proto files. To use them in our project we have to add additional Source Sets pointing to those classes.
And a build script for Gradle to generate classes from proto files.
The last thing in the Gradle config needed to start working with gRPC is the Protobuf code-generation setup.
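Since the original snippets are not reproduced here, below is a rough, condensed sketch of how these pieces might fit together in build.gradle. The group, version numbers, and generated-source directories are illustrative assumptions, not taken from the original post.
plugins {
    id 'java'
    id 'idea'
    id 'com.google.protobuf' version '0.8.8'   // plugin version is illustrative
}

group 'com.example'            // assumed group
version '1.0-SNAPSHOT'

sourceCompatibility = 1.8      // Java 8 compatibility
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

def grpcVersion = '1.18.0'     // illustrative version

dependencies {
    implementation "io.grpc:grpc-netty:${grpcVersion}"     // networking layer
    implementation "io.grpc:grpc-protobuf:${grpcVersion}"  // Protobuf layer
    implementation "io.grpc:grpc-stub:${grpcVersion}"      // gRPC stubs
}

// Make the generated sources visible to the project
sourceSets {
    main {
        java {
            srcDirs 'build/generated/source/proto/main/grpc'
            srcDirs 'build/generated/source/proto/main/java'
        }
    }
}

// Generate Java and gRPC classes from the proto files
protobuf {
    protoc {
        artifact = 'com.google.protobuf:protoc:3.6.1'   // illustrative version
    }
    plugins {
        grpc {
            artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}"
        }
    }
    generateProtoTasks {
        all()*.plugins {
            grpc {}
        }
    }
}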
Protobuf.
Let’s define the project hierarchy. We should have 2 top-level directories for the source code — java and proto. In the java directory, we add a root package and two packages in it with corresponding classes.
In proto directory, we define Protobuf files which will be used to generate gRPC classes.
Let’s take a look at the proto file. It’s pretty standard for the Protobuf protocol. We define the syntax level, the package, and also the Java package name for the generated classes.
Next, we define a User with a first and last name. A unary gRPC request means you send a request, much like in plain old REST, and receive a single response. So we define UserRequest and UserResponse messages. As the final step, we define the service that sends all this stuff back and forth.
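A minimal sketch of such a proto file, with assumed package and field names (they are not reproduced from the original post):
syntax = "proto3";

package users;

option java_package = "com.example.users.proto";  // assumed package name
option java_multiple_files = true;

// The domain entity with first and last name
message User {
    string first_name = 1;
    string last_name = 2;
}

// Unary request/response wrappers
message UserRequest {
    User user = 1;
}

message UserResponse {
    User user = 1;
}

// The service exchanging these messages
service UserService {
    rpc User (UserRequest) returns (UserResponse) {};
}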
Let’s try to generate the Java classes. In the Gradle tab, within the Other section, you should see a generateProto task. After you run it, you should see a new build directory with the generated classes deep inside it.
Now we’re ready to use generated classes in our project.
Server.
We begin with the UserService implementation. This is a new class extending one of the generated gRPC classes — UserServiceImplBase. We create it in the server package.
The content of the class is a single method that overrides the default implementation of
rpc User(UserRequest) returns (UserResponse) {};
Here the input attributes are the request itself and the response stream observer — remember, we use HTTP/2 in gRPC and it’s an open TCP connection.
We create a response from the hardcoded first name and last name of the User entity — John Doe. Then we wrap it in a UserResponse object and send it back via the StreamObserver with the onNext method. By calling the onCompleted method we complete the response.
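Putting that description together, a sketch of the service class might look like this (package and class names follow the assumed proto sketch above):
package com.example.users.server;

import com.example.users.proto.User;
import com.example.users.proto.UserRequest;
import com.example.users.proto.UserResponse;
import com.example.users.proto.UserServiceGrpc;
import io.grpc.stub.StreamObserver;

public class UserService extends UserServiceGrpc.UserServiceImplBase {

    @Override
    public void user(UserRequest request, StreamObserver<UserResponse> responseObserver) {
        // Hardcoded entity, as described above
        User user = User.newBuilder()
                .setFirstName("John")
                .setLastName("Doe")
                .build();

        UserResponse response = UserResponse.newBuilder()
                .setUser(user)
                .build();

        // Send the single response, then complete the call
        responseObserver.onNext(response);
        responseObserver.onCompleted();
    }
}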
The UserServer class contains the main method, which starts the server on localhost, port 5000. We add the UserService to the server builder.
Also, we should gracefully shut down the server when the application is going to stop. We add a shutdown hook before making the server wait for termination. In the hook, we send a shutdown signal to the server.
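A sketch of the server class based on that description (class and port as described above, everything else assumed):
package com.example.users.server;

import io.grpc.Server;
import io.grpc.ServerBuilder;

public class UserServer {

    public static void main(String[] args) throws Exception {
        // Start the server on port 5000 with the UserService registered
        Server server = ServerBuilder.forPort(5000)
                .addService(new UserService())
                .build()
                .start();

        // Gracefully shut down when the application is asked to stop
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Shutting down gRPC server");
            server.shutdown();
        }));

        // Block until the server terminates
        server.awaitTermination();
    }
}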
Client.
Let’s go through the UserClient class. Here, for the sake of clarity, we put all the code into the main method. The first step is to open a channel to localhost:5000. Next, we create a blocking stub for this channel. We create a User object with a hardcoded name value. To send a request we create, well, a UserRequest instance generated from the proto. Then we send this request via the stub. Right after we get the response, we shut down the channel.
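A corresponding sketch of the client (the hardcoded name and package names are placeholders):
package com.example.users.client;

import com.example.users.proto.User;
import com.example.users.proto.UserRequest;
import com.example.users.proto.UserResponse;
import com.example.users.proto.UserServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class UserClient {

    public static void main(String[] args) {
        // Open a plaintext channel to the server
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 5000)
                .usePlaintext()
                .build();

        // Blocking stub: the call waits for the server's response
        UserServiceGrpc.UserServiceBlockingStub stub = UserServiceGrpc.newBlockingStub(channel);

        UserRequest request = UserRequest.newBuilder()
                .setUser(User.newBuilder().setFirstName("Jane").build())
                .build();

        UserResponse response = stub.user(request);
        System.out.println("Response: " + response.getUser().getFirstName()
                + " " + response.getUser().getLastName());

        // Close the channel once we have the response
        channel.shutdown();
    }
}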
Up and Running.
To run the server and the client, we add two application run configurations to the project.
Run the server app first, then the client app. In the console, you should see the response from the server.
Stop both of them and the server will invoke its shutdown hook.
Credit : https://medium.com/pharos-production/grpc-unary-api-using-java-cfcb07533c82
0 notes
digvijayb · 6 years ago
Link
0 notes
digvijayb · 7 years ago
Text
Reactive Programming with Spring 5
https://medium.com/@hantsy/reactive-programming-with-spring-5-3bfc5d324ba0
This One Stop Guide to get started with Full On Reactive Programming using Spring 5 Reactor Core
Reactive, or Reactive Streams, is a hot topic these days; you can see it in blog entries, presentations, and online courses. In this post I will introduce the new reactive features provided in the upcoming Spring 5.
3 notes · View notes
digvijayb · 7 years ago
Text
Dynamic Job Scheduling with Quartz and Spring
https://juliuskrah.com/tutorial/2017/09/26/dynamic-job-scheduling-with-quartz-and-spring/
We will dynamically create jobs that send emails to a predefined group of people on a user-defined schedule, using Spring Boot.
http://projects.spring.io/spring-batch/faq.html#schedulers
https://docs.spring.io/spring-batch/trunk/reference/html/configureJob.html
0 notes
digvijayb · 7 years ago
Text
Provider JSON Message Write and Read for JAX-RS
Java EE 8 comes with the JSON-B API, which means there is no need to add a third-party library. For greater control, you can write your own MessageBodyWriter and MessageBodyReader.
https://github.com/javaee/jsonp/blob/master/jaxrs/src/main/java/org/glassfish/json/jaxrs/JsonValueBodyWriter.java
https://github.com/javaee/jsonp/blob/master/jaxrs/src/main/java/org/glassfish/json/jaxrs/JsonValueBodyReader.java
0 notes
digvijayb · 7 years ago
Text
The Top 5 New Features in Java EE 8
https://dzone.com/articles/the-top-5-new-features-in-java-ee-8
The new Security API: Annotation-driven authentication mechanism. The brand new Security API, which contains three excellent new features: an identity store abstraction, a new security context, and a new annotation-driven authentication mechanism that makes web.xml file declarations obsolete. This last one is what I'll be talking about today.
JAX-RS 2.1: New reactive client. The new reactive client in JAX-RS 2.1, which embraces the reactive programming style and allows the combination of endpoint results.
The new JSON Binding API. The new JSON-binding API, which provides a native Java EE solution to JSON serialization and deserialization.
CDI 2.0: Use in Java SE. This interesting new feature in CDI 2.0 allows bootstrapping of CDI in Java SE application.
Servlet 4.0: Server Push. This server push feature in Servlet 4.0 aligns the servlet specification with HTTP/2.
0 notes
digvijayb · 7 years ago
Text
What's new in Java EE 8
https://www.ibm.com/developerworks/library/j-whats-new-in-javaee-8/index.html
The much-anticipated release of Java™ EE 8 is nearly upon us. This first release of the Java enterprise platform since June 2013 is half of a two-part release culminating with Java EE 9. Oracle has strategically repositioned Java EE, emphasizing technologies that support cloud computing, microservices, and reactive programming. Reactive programming is now woven into the fabric of many Java EE APIs, and the JSON interchange format underpins the core platform.
We’ll take a whistle-stop tour of the main features found in Java EE 8. Highlights include API updates and introductions, and new support for HTTP/2, reactive programming, and JSON. Get started with the Java EE specifications and upgrades that will surely shape enterprise Java programming for years to come.
0 notes
digvijayb · 7 years ago
Link
Encrypted properties with Spring
0 notes
digvijayb · 7 years ago
Link
Spring Boot: Encrypt Property Value in Properties File
0 notes
digvijayb · 7 years ago
Text
How to Pass Maven Command-Line Arguments in Spring Boot
Spring Boot 1.x
mvn spring-boot:run -Drun.arguments=--spring.main.banner-mode=off,--customArgument=custom
Multiple arguments should be comma-separated.
Spring Boot 2.x
mvn spring-boot:run -Dspring-boot.run.arguments=--spring.main.banner-mode=off,--customArgument=custom
Overriding System Properties
Other than passing custom arguments, we can also override system properties.
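For example, with the Spring Boot 2.x plugin the embedded server port can be overridden the same way (the port value here is just an illustration):
mvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8085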
If needed, we can stop our application from converting command-line arguments to properties
@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.setAddCommandLineProperties(false);
        application.run(args);
    }
}
More Info
https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/maven-plugin/run-mojo.html
0 notes
digvijayb · 7 years ago
Text
Class Loader HOW-TO (TOMCAT)
An explanation of how Tomcat's classloading mechanism works
WebappX — A class loader is created for each web application that is deployed in a single Tomcat instance. All unpacked classes and resources in the /WEB-INF/classes directory of your web application, plus classes and resources in JAR files under the /WEB-INF/lib directory of your web application, are made visible to this web application, but not to other ones.
If, then, the Spring JAR files are bundled in WEB-INF/lib for each application, you will have no issues. An issue would only arise if they were in some shared location.
http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html
1 note · View note
digvijayb · 7 years ago
Text
Unable to locate Spring NamespaceHandler for XML schema namespace
I am working on a backend Maven 3 project. I am building an uber JAR which contains several other JAR files and resources. That worked pretty well. But then I tried to run the JAR file with
java -jar <uber-super-duper-jar-file>
I got this Exception:
INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [applicationContext.xml]
Exception in thread "main" org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/context]
This is because I have several Spring JARs in my dependencies. Some of the Spring JARs contain META-INF files with the same name. To avoid some of these meta files being overridden, you have to merge them. If you are using the Maven Shade plugin to build your JAR file, you can do the merge with this XML snippet:
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
    <resource>META-INF/spring.handlers</resource>
</transformer>
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
    <resource>META-INF/spring.schemas</resource>
</transformer>
Here is a more detailed description:
http://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html
0 notes
digvijayb · 7 years ago
Link
Dependency injection has always been one of Angular’s biggest features and selling points. It allows us to inject dependencies in different components across our applications, without needing to know, how those dependencies are created, or what dependencies they need themselves. However, it turns out that the current dependency injection system in Angular 1.x has some problems that need to be solved in Angular 2.x, in order to build the next generation framework. In this article, we’re going to explore the new dependency injection system for future generations.
0 notes
digvijayb · 7 years ago
Link
0 notes