#Free SSL with Nginx Proxy
Explore tagged Tumblr posts
virtualizationhowto · 2 years ago
Text
Setting Up Nginx Proxy Manager on Docker with Easy LetsEncrypt SSL
There are many reverse proxy solutions that can handle SSL certificates, both in home lab and production environments. Most people have heard of the Traefik reverse proxy, which can pull LetsEncrypt certificates for your domain name automatically. However, there is another solution that provides a really great GUI dashboard for managing your reverse proxy configuration and LetsEncrypt…
rootresident · 1 month ago
Text
SSL Cert Automation
SSL/TLS certificates are absolutely vital to the web. Yes, even your homelab, even if everything is local-only. I wholeheartedly recommend buying a domain for your homelab, as they can be had for ~$5/yr or less depending on the TLD (top-level domain) you choose. Obviously a .com domain is going to be more expensive, but others like .xyz are super affordable, and it makes a lot of things a whole lot easier. I recommend Cloudflare or Porkbun as your registrar; I've also used Namecheap and they're good but lack API access for small accounts. And please, PLEASE for the love of god DO NOT USE GODADDY. EVER.
First of all, why is cert automation even important? Most certificates you purchase are issued for a one-year period, so you only need to worry about renewal once a year; that's not too bad, right? Well, that's all changing very soon. With issuers like Let's Encrypt ending expiry emails, and the push to further shorten certificate lifetimes, automation is all the more needed. Not to mention Let's Encrypt is free, so there is very little reason not to use them (or a similar issuer).
"Okay, you've convinced me. But how???" Well, I'm glad you asked. By far the absolute easiest way is to use a reverse proxy that does all the work for you. Simply set up Caddy, Traefik, Nginx Proxy Manager, etc. and the appropriate provider plugin (if you're using DNS challenge, more on that later), and you're good to go. Everything you host will go through the proxy, which handles SSL certificate provisioning, renewal, and termination for you without needing to lift a finger. This is how a lot of people do it, and there's nothing wrong with doing it this way. However, it may not be the best solution depending on the complexity of your lab.
If you know a thing or two about managing SSL certificates, you might be thinking about just running your own certificate authority. That does make things easier; you can make the certs expire whenever you want! Woo, 100-year certificates! Except not really, because many browsers and devices will balk at certificates with unrealistic lifetimes. Then you also have to install the CA certificate on any and all client devices, Docker containers, etc. It gets to be more of a pain than it's worth, especially when getting certs from an actual trusted CA is so easy. Indeed I used to do this, but when the certs did need to be renewed it was a right pain in the ass.
My lab consists of 6 physical computers; 3 are clustered with each other, and all of them talk to the others for various things. Especially for the Proxmox cluster, having a good certificate strategy is important because the nodes need to be secure and trust each other. It's not really something I can reasonably slap a proxy in front of and expect to be reliable. But unfortunately, there aren't really any good out-of-the-box solutions for exactly what I needed, which is automatic renewal and deployment to physical machines depending on which applications on each need the certs.
So I made one myself. It's pretty simple really: I have a modified certbot Docker container which uses a DNS challenge to provision or renew a wildcard certificate for my domain. Then an Ansible playbook runs on all the physical hosts (or particularly important VMs) to install the new cert and restart the application(s) as needed. And since it's running on a schedule, it helps eliminate the chance of accidental misconfiguration if I'm messing with something else in the lab. This way I apply the same cert to everything, and the reverse proxy will also use this same certificate for anything it serves.
The DNS challenge is important, because it's required to get a wildcard cert. You could provision certs individually without it, but then the server has to be exposed to the internet, which is not ideal for backend management interfaces like Proxmox. You need API access to your registrar/DNS provider to accomplish this, otherwise you have to complete the DNS challenge manually, which just defeats the whole purpose. Basically certbot requests a certificate, and the issuer says, "Oh yeah? If you really own this domain, then put this random secret in there for me to see." So it does, using the API, and the issuer trusts that you own the domain and gives you the requested certificate. This type of challenge is ideal for getting certs for things that aren't on the public internet.
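To make the moving parts a little more concrete, here is a rough, illustrative Python sketch of a renew-then-deploy job. It is not the setup described above (that lives in a modified certbot container plus a scheduled Ansible playbook); the DNS plugin, credentials path, inventory and playbook names below are all placeholder assumptions.

#!/usr/bin/env python3
"""Illustrative renew-and-deploy job; plugin, paths and playbook are placeholders."""
import subprocess
import sys

DOMAIN = "*.example.com"          # hypothetical wildcard domain
PLAYBOOK = "deploy-certs.yml"     # hypothetical playbook that copies certs and restarts services

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=False).returncode

def main():
    # DNS-01 challenge via a certbot DNS plugin (plugin name and credentials path are assumptions)
    rc = run([
        "certbot", "certonly", "--non-interactive", "--agree-tos",
        "--dns-cloudflare", "--dns-cloudflare-credentials", "/etc/letsencrypt/dns-creds.ini",
        "-d", DOMAIN,
    ])
    if rc != 0:
        sys.exit("certbot failed; skipping deployment")
    # Push the renewed certificate out and restart whatever consumes it
    sys.exit(run(["ansible-playbook", "-i", "inventory.ini", PLAYBOOK]))

if __name__ == "__main__":
    main()

Run something like this from cron or a systemd timer and the schedule takes care of itself.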
This sure was a lot of words for a simple solution, huh. Well, more explanation never hurt anyone, probably. The point of this post is to show that while SSL certificates can be very complicated, for hobby use it's actually really easy to set up automation even for more complex environments. It might take a bit of work up front, but the comfort and security you get knowing you can sit back and not worry about anything and your systems will keep on trucking is pretty valuable.
fullstackmasters01 · 1 year ago
Text
Python FullStack Developer Interview Questions
Introduction:
A Python full-stack developer is a professional who has expertise in both front-end and back-end development using Python as their primary programming language. This means they are skilled in building web applications from the user interface to the server-side logic and the database. Here’s some information about Python full-stack developer jobs.
Interview questions
What is the difference between list and tuple in Python?
Explain the concept of PEP 8.
How does Python's garbage collection work?
Describe the differences between Flask and Django.
Explain the Global Interpreter Lock (GIL) in Python.
How does asynchronous programming work in Python?
What is the purpose of the ORM in Django?
Explain the concept of middleware in Flask.
How does WSGI work in the context of web applications?
Describe the process of deploying a Flask application to a production server.
How does data caching improve the performance of a web application?
Explain the concept of a virtual environment in Python and why it's useful.
Questions with answers
What is the difference between list and tuple in Python?
Answer: Lists are mutable, while tuples are immutable. This means that you can change the elements of a list, but once a tuple is created, you cannot change its values.
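A quick illustration of that difference (the values are arbitrary):

nums_list = [1, 2, 3]
nums_tuple = (1, 2, 3)

nums_list[0] = 99          # fine: lists are mutable
print(nums_list)           # [99, 2, 3]

try:
    nums_tuple[0] = 99     # tuples are immutable
except TypeError as err:
    print(err)             # 'tuple' object does not support item assignment

# Immutability also makes tuples hashable, so they can be used as dict keys
lookup = {(52.52, 13.40): "Berlin"}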
 Explain the concept of PEP 8.
Answer: PEP 8 is the style guide for Python code. It provides conventions for writing code, such as indentation, whitespace, and naming conventions, to make the code more readable and consistent.
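For example, the same function written first against and then according to PEP 8 conventions:

# Not PEP 8: inconsistent naming, no spaces around operators, one-line body
def CalcArea( w,h ):return w*h

# PEP 8: snake_case names, 4-space indentation, spaces around operators
def calc_area(width, height):
    return width * height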
 How does Python's garbage collection work?
Answer: Python uses automatic memory management. CPython primarily uses reference counting: every object tracks how many references point to it, and the object is freed as soon as that count drops to zero. A supplementary cyclic garbage collector (the gc module) cleans up reference cycles that reference counting alone cannot reclaim.
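A small demonstration of reference counting and the cyclic collector using the standard library sys and gc modules:

import gc
import sys

data = [1, 2, 3]
alias = data
# getrefcount reports one extra reference for its own argument
print(sys.getrefcount(data))   # e.g. 3

del alias                      # dropping a reference lowers the count
print(sys.getrefcount(data))   # e.g. 2

# Reference cycles can't be freed by refcounting alone,
# so the cyclic garbage collector handles them
a = []
a.append(a)                    # a references itself
del a
print(gc.collect())            # number of unreachable objects collected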
Describe the differences between Flask and Django.
Answer: Flask is a micro-framework, providing flexibility and simplicity, while Django is a full-stack web framework with built-in features like an ORM, admin panel, and authentication.
Explain the Global Interpreter Lock (GIL) in Python.
Answer: The GIL is a mutex that protects access to Python objects, preventing multiple native threads from executing Python bytecode at once. This can limit the performance of multi-threaded Python programs.
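A rough way to see the effect: the CPU-bound function below usually takes about as long (or longer) when split across two threads as when run twice sequentially on CPython, because only one thread executes bytecode at a time. Exact timings will vary by machine; I/O-bound workloads still benefit from threads.

import threading
import time

def count(n):
    while n:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count(N)
count(N)                                   # two CPU-bound runs, one after the other
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")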
How does asynchronous programming work in Python?
Answer: Asynchronous programming in Python is achieved using the asyncio module. It allows non-blocking I/O operations by using coroutines, the async and await keywords, and an event loop.
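A minimal, self-contained example: the two coroutines below wait concurrently, so the whole program finishes in roughly two seconds rather than three.

import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)        # stands in for a non-blocking I/O call
    return f"{name} done after {delay}s"

async def main():
    # Both coroutines are awaited concurrently by the event loop
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)

asyncio.run(main())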
What is the purpose of the ORM in Django?
Answer: The Object-Relational Mapping (ORM) in Django allows developers to interact with the database using Python code, abstracting away the SQL queries. It simplifies database operations and makes the code more readable.
Explain the concept of middleware in Flask.
Answer: Middleware in Flask is a way to process requests and responses globally before they reach the view function. It can be used for tasks such as authentication, logging, and modifying request/response objects.
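Strictly speaking, Flask has no formal middleware layer like Django; the closest built-ins are request hooks such as before_request and after_request (or wrapping app.wsgi_app with WSGI middleware). A small sketch using hooks to time every request; the route and logging details are illustrative, and Flask must be installed (pip install flask):

import time
from flask import Flask, g, request

app = Flask(__name__)

@app.before_request
def start_timer():
    g.start = time.perf_counter()     # runs before every view function

@app.after_request
def log_request(response):
    duration = time.perf_counter() - g.start
    app.logger.info("%s %s took %.4fs", request.method, request.path, duration)
    return response                   # after_request must return the response

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    app.run(debug=True)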
How does WSGI work in the context of web applications?
Answer: Web Server Gateway Interface (WSGI) is a specification for a universal interface between web servers and Python web applications or frameworks. It defines a standard interface for communication, allowing compatibility between web servers and Python web applications.
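A minimal WSGI application, using only the standard library, makes the interface concrete; any WSGI server (Gunicorn, uWSGI, mod_wsgi) could host the same application callable.

from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ: CGI-style dict from the server; start_response: callback for status and headers
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # wsgiref is just the stdlib reference server, handy for local testing
    make_server("127.0.0.1", 8000, application).serve_forever()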
Describe the process of deploying a Flask application to a production server.
Answer: Deploying a Flask application involves configuring a production-ready web server (e.g., Gunicorn or uWSGI), setting up a reverse proxy (e.g., Nginx or Apache), and handling security considerations like firewalls and SSL.
How does data caching improve the performance of a web application?
Answer: Data caching involves storing frequently accessed data in memory, reducing the need to fetch it from the database repeatedly. This can significantly improve the performance of a web application by reducing the overall response time.
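An in-process sketch of the idea using functools.lru_cache (real web applications usually cache in Redis or Memcached so the cache is shared across processes); the one-second sleep stands in for a slow database query:

import time
from functools import lru_cache

@lru_cache(maxsize=256)
def expensive_lookup(user_id):
    time.sleep(1)                  # pretend this is a slow database query
    return {"id": user_id, "name": f"user-{user_id}"}

start = time.perf_counter()
expensive_lookup(42)               # miss: hits the "database"
expensive_lookup(42)               # hit: served from the in-process cache
print(f"two calls took {time.perf_counter() - start:.2f}s")
print(expensive_lookup.cache_info())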
 Explain the concept of a virtual environment in Python and why it's useful.
Answer: A virtual environment is a self-contained directory that contains its Python interpreter and can have its own installed packages. It is useful to isolate project dependencies, preventing conflicts between different projects and maintaining a clean and reproducible development environment.
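The usual workflow is python -m venv .venv followed by activating it; the same thing can be done programmatically with the standard library venv module:

import venv

# Programmatic equivalent of `python -m venv .venv`
venv.create(".venv", with_pip=True)
# Activate it from a shell afterwards, e.g. `source .venv/bin/activate`,
# so that `pip install ...` only affects this environment.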
When applying for a Python Full Stack Developer position:
Front-end Development:
What is the Document Object Model (DOM), and how does it relate to front-end development?
Explain the difference between inline, internal, and external CSS styles. When would you use each one?
What is responsive web design, and how can you achieve it in a web application?
Describe the purpose and usage of HTML5 semantic elements, for example <header>, <nav>, <article>, and <footer>.
What is the role of JavaScript in front-end development? How can you handle asynchronous operations in JavaScript?
How can you optimize the performance of a web page, including techniques for reducing load times and improving user experience?
What is Cross-Origin Resource Sharing (CORS), and how can it be addressed in web development?
Back-end Development:
Explain the difference between a web server and an application server. How do they work together in web development?
What is the purpose of a RESTful API, and how does it differ from SOAP or GraphQL?
Describe the concept of middleware in the context of web frameworks like Django or Flask. Provide an example use case for middleware.
How does session management work in web applications, and what are the common security considerations for session handling?
What is an ORM (Object-Relational Mapping), and why is it useful in database interactions with Python?
Discuss the benefits and drawbacks of different database systems (e.g., SQL, NoSQL) for various types of applications.
How can you optimize a database query for performance, and what tools or techniques can be used for profiling and debugging SQL queries?
Full Stack Development:
What is the role of the Model-View-Controller (MVC) design pattern in web development, and how does it apply to frameworks like Django or Flask?
How can you ensure the security of data transfer in a web application? Explain the use of HTTPS, SSL/TLS, and encryption.
Discuss the importance of version control systems like Git in collaborative development and deployment.
What are microservices, and how do they differ from monolithic architectures in web development?
How can you handle authentication and authorization in a web application, and what are the best practices for user authentication?
Describe the concept of DevOps and its role in the full stack development process, including continuous integration and deployment (CI/CD).
These interview questions cover a range of topics relevant to a Python Full Stack Developer position. Be prepared to discuss your experience and demonstrate your knowledge in both front-end and back-end development, as well as the integration of these components into a cohesive web application.
Database Integration: Full stack developers need to work with databases, both SQL and NoSQL, to store and retrieve data. Understanding database design, optimization, and querying is crucial.
Web Frameworks: Questions about web frameworks like Django, Flask, or Pyramid, which are popular in the Python ecosystem, are common. Understanding the framework’s architecture and best practices is essential.
Version Control: Proficiency in version control systems like Git is often assessed, as it’s crucial for collaboration and code management in development teams.
Responsive Web Design: Full stack developers should understand responsive web design principles to ensure that web applications are accessible and user-friendly on various devices and screen sizes.
API Development: Building and consuming APIs (Application Programming Interfaces) is a common task. Understanding RESTful principles and API security is important.
Web Security: Security questions may cover topics such as authentication, authorization, securing data, and protecting against common web vulnerabilities like SQL injection and cross-site scripting (XSS).
DevOps and Deployment: Full stack developers are often involved in deploying web applications. Understanding deployment strategies, containerization (e.g., Docker), and CI/CD (Continuous Integration/Continuous Deployment) practices may be discussed.
Performance Optimization: Questions may be related to optimizing web application performance, including front-end and back-end optimizations, caching, and load balancing.
Coding Best Practices: Expect questions about coding standards, design patterns, and best practices for maintainable and scalable code.
Problem-Solving: Scenario-based questions and coding challenges may be used to evaluate a candidate’s problem-solving skills and ability to think critically.
Soft Skills: Employers may assess soft skills like teamwork, communication, adaptability, and the ability to work in a collaborative and fast-paced environment.
To prepare for a Python Full Stack Developer interview, it’s important to have a strong understanding of both front-end and back-end technologies, as well as a deep knowledge of Python and its relevant libraries and frameworks. Additionally, you should be able to demonstrate your ability to work on the full development stack, from the user interface to the server infrastructure, while following best practices and maintaining a strong focus on web security and performance.
Python Full Stack Developer interviews often include a combination of technical assessments, coding challenges, and behavioral or situational questions to comprehensively evaluate a candidate’s qualifications, skills, and overall fit for the role. Here’s more information on each of these interview components:
1. Technical Assessments:
Technical assessments typically consist of questions related to various aspects of full stack development, including front-end and back-end technologies, Python, web frameworks, databases, and more.
These questions aim to assess a candidate’s knowledge, expertise, and problem-solving skills in the context of web development using Python.
Technical assessments may include multiple-choice questions, fill-in-the-blank questions, and short-answer questions.
2. Coding Challenges:
Coding challenges are a critical part of the interview process for Python Full Stack Developers. Candidates are presented with coding problems or tasks to evaluate their programming skills, logical thinking, and ability to write clean and efficient code.
These challenges can be related to web application development, data manipulation, algorithmic problem-solving, or other relevant topics.
Candidates are often asked to write code in Python to solve specific problems or implement certain features.
Coding challenges may be conducted on a shared coding platform, where candidates can write and execute code in real-time.
3. Behavioral or Situational Questions:
Behavioral or situational questions assess a candidate’s soft skills, interpersonal abilities, and how well they would fit within the company culture and team.
These questions often focus on the candidate’s work experiences, decision-making, communication skills, and how they handle challenging situations.
Behavioral questions may include scenarios like, “Tell me about a time when you had to work under tight deadlines,” or “How do you handle conflicts within a development team?”
4. Project Discussions and Portfolio Review:
Candidates may be asked to discuss their previous projects, providing details about their roles, responsibilities, and the technologies they used.
Interviewers may inquire about the candidate’s contributions to the project, the challenges they faced, and how they overcame them.
Reviewing a candidate’s portfolio or past work is a common practice to understand their practical experience and the quality of their work.
5. Technical Whiteboard Sessions:
Some interviews may include whiteboard sessions where candidates are asked to solve technical problems or explain complex concepts on a whiteboard.
This assesses a candidate’s ability to communicate technical ideas and solutions clearly.
6. System Design Interviews:
For more senior roles, interviews may involve system design discussions. Candidates are asked to design and discuss the architecture of a web application, focusing on scalability, performance, and data storage considerations.
7. Communication and Teamwork Evaluation:
Throughout the interview process, assessors pay attention to how candidates communicate their ideas, interact with interviewers, and demonstrate their ability to work in a collaborative team environment.
8. Cultural Fit Assessment:
Employers often evaluate how well candidates align with the company’s culture, values, and work ethic. They may ask questions to gauge a candidate’s alignment with the organization’s mission and vision.
Conclusion:
Preparing for a Python Full Stack Developer interview involves reviewing technical concepts, practicing coding challenges, and developing strong communication and problem-solving skills. Candidates should be ready to discuss their past experiences, showcase their coding abilities, and demonstrate their ability to work effectively in a full stack development role, which often requires expertise in both front-end and back-end technologies.
Thanks for reading; hopefully you liked the article. If you want to take the Full Stack Masters course from our institute, please attend our live demo sessions or contact us at +918464844555. We provide the best online Full Stack Developer course in Hyderabad with an affordable course fee structure.
skynats · 5 years ago
Text
Apache vs Nginx – Which is best?
Web servers are software tools that store, process, and deliver web pages to clients. Apache (short for the Apache HTTP Server) is a secure, open-source web server application designed for modern operating systems. It was developed by the Apache Software Foundation and can be downloaded at no cost. Nginx is a lightweight, open-source HTTP and reverse proxy server, and also an IMAP/POP3 proxy server.
Working of Apache
In order to handle additional connections, Apache creates threads and processes. The administrator can configure the server to control the maximum number of allowable processes. Too many processes exhaust memory, and Apache refuses additional connections once the process limit is reached. Apache is flexible in how it processes web requests, depending on the Multi-Processing Module (MPM) used. The three main Apache MPMs are the Process (Prefork) MPM, the Worker MPM, and the Event MPM.
Working of Nginx
Nginx works differently than Apache. Nginx does not set up a new process for each web request; instead, the administrator configures how many worker processes the main Nginx process should create. Each worker can handle thousands of concurrent connections. To read data from disk, Nginx spins off cache loader and cache manager processes that load data into the cache and expire it from the cache when directed. Nginx can act as a reverse proxy server for TCP, UDP, HTTP, HTTPS, SMTP, POP3, and IMAP, and it can also act as a load balancer and an HTTP cache. Each Nginx worker handles its connections in a single-threaded event loop.
Strengths of Apache
Apache provides a wide range of built-in support.
Support for the HTTP/1.1 protocol.
Simple, powerful file-based configuration.
Support for virtual hosts, PHP scripting, Java Servlets, and JSP, as well as Secure Sockets Layer (SSL).
Apache has an extensible plugin (module) architecture.
Weaknesses of Apache
Performance and scalability issues.
Slows down under load.
Strengths of Nginx
Lightweight and able to handle more than 10,000 simultaneous connections.
Takes less memory and other resources.
Reverse proxy with caching.
Load balancing and fault tolerance.
Embedded Perl scripting.
Weaknesses of Nginx
Lack of built-in support for Python and Ruby.
Nginx Plus version is not free.
Performance
Apache handles static content using its conventional file-based methods. It can also serve dynamic content by embedding a processor for the language in question into each of its worker processes. Nginx cannot process dynamic content natively, but it serves static content much faster than Apache: Nginx is roughly twice as fast and consumes a bit less memory (around 4% less).
Security
Even though both Apache and Nginx have a secure code base, both are affected by security vulnerabilities from time to time. Comparing the two, Nginx is slightly more secure than Apache thanks to its centralized, single configuration management.
Operating System
Apache has full support for Microsoft Windows and runs on all kinds of Unix-like systems. Even though Nginx has support for Windows, its Windows performance is not as strong as on other platforms.
Apache vs Nginx Comparison
Apache is easier to configure; Nginx configuration is not as straightforward.
In comparison to Nginx, Apache has excellent documentation.
Nginx uses an event-driven architecture (EDA), whereas Apache uses a process-driven architecture.
Nginx is non-blocking by nature, while Apache has a blocking architecture.
Nginx is single-threaded, meaning it doesn't create a new process for each request, whereas Apache creates a new process for every new request.
Nginx has very low memory consumption when serving static pages, while Apache's memory consumption is high because it has to create a new process for each request.
Nginx is extremely fast compared to Apache when it comes to serving static pages.
Nginx lacks support for operating systems such as OpenVMS and some IBM platforms, while Apache supports a much wider range of operating systems.
Because Nginx ships with only the required core features, it is much lighter than Apache.
arminehttpdebugger-blog · 6 years ago
Text
Optimize Website Speed with Varnish Cache
Guidelines to speed up your website with Varnish Cache
The page load time is one of the most important factors for today’s online business.
Walmart found a 2% increase in conversions for every 1-second improvement in website speed. Amazon found a 1% increase in revenue for every 100-millisecond improvement to their website speed. Akamai found that:
47% of people expect a web page to load in two seconds or less.
40% will abandon a web page if it takes more than three seconds to load.
52% of online shoppers say quick page loads are important for their loyalty to a website.
Not sure yet? Then think again; a 1-second delay in page load time yields:
11% fewer page views.
16% decrease in customer satisfaction.
7% loss in conversions.
Still not sure? Okay, then there's another one:
By using the techniques described below in this article, you can save on your hosting costs. You can have a website on cheap $15 hosting serve as many users as one running on hosting that costs several hundred dollars.
In this article, we'll describe how you can achieve this by using the Varnish Cache.
What is Varnish Cache?
According to the official documentation, Varnish Cache is a web application accelerator, also known as a caching HTTP reverse proxy. It can speed up delivery of your website by a factor of 300-1000x, depending on your architecture. Varnish comes with its own configuration language called VCL. Varnish is used by large CDN providers such as Fastly and KeyCDN.
Varnish serves data from virtual memory; a response is returned without needing to access the application and database servers. Each Varnish server can handle thousands of requests per second, much faster than a 'usual' website framework alone.
How does it work?
The secret of the acceleration is to install Varnish in front of your application server (Apache, Nginx, IIS). In this case, each HTTP request from the browser will first go to the Varnish server. If the current request is not found in the cache, Varnish will query your application server, cache the response, and send the response back to the browser. All subsequent requests for the same URLs will be served from the Varnish cache, thus unloading your application server.
Another advantage of this approach is that Varnish will improve the availability of your website. For example, if a PHP fatal error breaks your website, the pages still will be served from the Varnish cache and end-users will not realize that anything went wrong.
Of course, each approach has its downside. If you change the content of some of your pages, your users will not immediately see these changes, as they will receive the old version of the page from the Varnish cache. But this problem is easily fixed by forcing Varnish to invalidate specific resources: you send so-called 'PURGE' requests to it, notifying it that fresh content is available for the specified URLs. Varnish will delete the cached content for these URLs and request them again from your server. You can also reduce how long a copy of a page is stored in the cache by specifying the time-to-live (TTL), for example 2 minutes. This will force Varnish to update the data in its cache more frequently.
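For illustration, a PURGE request can be sent from a few lines of Python using the requests package. This assumes your VCL contains a vcl_recv rule that allows PURGE from the calling IP (otherwise Varnish will answer 405 Not Allowed), and the URL below is a placeholder:

import requests

def purge(url):
    # PURGE is not a standard HTTP method, but requests passes it through as-is
    resp = requests.request("PURGE", url, timeout=5)
    print(resp.status_code, resp.reason)

purge("https://www.example.com/blog/updated-post/")   # placeholder URL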
Another downside relates to security: Varnish can serve only anonymous users and will remove all session-specific cookies from HTTP headers (this can be managed via VCL). Requests from registered users won't be cached and will go directly to your application server.
Installing Varnish Cache
There are a lot of good articles about how to install and configure Varnish, so we will not elaborate on this here. For example, you can check this article (you can skip all steps not related to the installation of Varnish).
Testing Varnish Cache
Using Varnish requires changing the DNS settings at your domain name provider. This is OK for a production server, but how do you test it in your development environment?
Let's say our production website is hosted on the 140.45.129.179 IP address and we deployed our Varnish server on 165.227.10.154. One approach is to force the browser to visit 165.227.10.154 instead of 140.45.129.179 by changing the domain IP address in the hosts file (%WINDIR%\System32\drivers\etc\hosts). But this requires you to restart your computer every time you change the hosts file.
If you have the HTTP Debugger application installed on your computer, you can do this without changing the hosts file and without restarting the computer. To do this, just add a TCP/IP redirection rule to HTTP Debugger as shown in the image:
The positive side of this solution is that Varnish can be configured on any port, not only port 80, and you can quickly enable or disable the redirection without restarting the computer.
Below is the same response from a web server without and with Varnish cache enabled. As you can see on the right image, the page was served from the cache and Varnish added/removed some HTTP headers (this can be managed via VCL).
Load testing
Now let's check how much Varnish can actually speed up the website by doing load testing with the loader.io service.
For our test, we chose the free package that allows generating 10,000 requests in 15 seconds. The test website is a simple ASP.NET website hosted on Microsoft Azure (IIS) for about $100/month and does not use MS SQL. The Varnish Cache is hosted on a DigitalOcean droplet for $10/month.
Below are the test results without and with the Varnish Cache. Quite impressive, isn't it?
Conclusion
As we have seen, using Varnish you can really boost your website speed by dozens of times, increase its reliability, and save on hosting costs at the same time. It sounds fantastic, but as we've shown, it really works.
Useful Links
https://varnish-cache.org
https://loader.io
https://www.httpdebugger.com
https://www.crazyegg.com/blog/speed-up-your-website
https://www.digitalocean.com/community/tutorials/how-to-configure-varnish-cache-4-0-with-ssl-termination-on-ubuntu-14-04
https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl
Copyright Notice: Please don't copy or translate this article without prior written permission from the HTTPDebugger.com
HTTP Debugger is a proxy-less HTTP analyzer for developers that provides the ability to capture and analyze HTTP headers, cookies, POST params, HTTP content, and CORS headers from any browser or desktop application. Very easy to use, with a clean UI and a short ramp-up time. Download FREE 7-Day Trial
tittacache · 3 years ago
Text
Lib jitsi meet
Provide your hostname and click on the OK button. You will be asked to select the SSL certificate as shown below: Select the first option and click on the Ok button to start the installation. Jitsi Meet is now up and listening on port 443. Open your web browser and type the URL or . You will be redirected to the following page: This warning appears as the site is currently protected by a self-signed SSL certificate. You can later exchange the SSL certificate to an officially signed one, e.g.
Jitsi Meet Install
By default, Jitsi Meet is not available in the Ubuntu 18.04 default repository, so you will need to add the repository for that. You can do this by running the following command: wget -qO - | sudo apt-key add. sudo sh -c "echo 'deb stable/' > /etc/apt//jitsi.list" Next, update the repository and install Jitsi Meet with the following command: sudo apt-get update -y During the installation process, you will need to provide your hostname as shown below:
In this tutorial, we will learn how to install the video conferencing service Jitsi Meet on an Ubuntu 20.04 LTS server. You can video chat with the entire team and invite users to a meeting using a simple, custom URL. With Jisti Meet you can stream your desktop or just some windows. The Jitsi Meet client runs in your browser, so you don’t need to install anything on your computer. Vmware horizon client the supplied certificate is expired or not yet valid.Jitsi Meet is a free, open-source, secure, simple, and scalable video conferencing solution that you can use as a standalone application or embed it into your web application.
trustvi · 3 years ago
Text
Mamp pro nginx
This means that PHP is the most widely used programming language for creating websites.
For easy setup, MAMP PRO comes with phpMyAdmin.Ĩ1% (and growing) of all websites using PHP as the server programming language. Thanks to MAMP you can easily develop complex applications MySQL database on your local PC and then upload them to your live system. There is a MySQL interface for almost all programming languages and scripts available. MAMP comes with MySQL, which is the system relational database most commonly used. It can act as a reverse proxy server for HTTP, HTTPS, SMTP, POP3 and IMAP as well as a load balancer and an HTTP cache.Ī database is at the heart of every modern and dynamic website. Many ISPs use Apache MAMP what makes it the perfect tool to test their websites locally before releasing them. MAMP comes with more than 70 Apache modules like PHP, SSL, WebDAV, Auth, Cache and many more. Because of its modular structure, it can be improved easily with supplements. In these cases, the corresponding license applies.Īpache, the web server http open source is one of the main parts of MAMP. Note that some of the included software is released with a different license. MAMP is distributed under the GNU General Public License and therefore can be distributed freely within the limits of this license. Similar to a distribution of Linux, MAMP is a combination of free software and therefore is offered for free.
It can install Apache, PHP and MySQL without starting a script or having to change any configuration files! Also, if MAMP is no longer needed, simply delete the MAMP folder and everything returns to its original state (ie MAMP does not modify the “normal” system). MAMP will not compromise any existing Apache installation that is running on your system. It comes for free, and is easily installed. MAMP installs a local server environment in a matter of seconds on your computer. We are now supporting MySQL 5.6 and Nginx is now fully integrated. MAMP offers even more opportunities for web developers. Professional programmers and Web developers can use MAMP Pro to create and manage their own custom development environment. MAMP PRO helps you install and manage their own development environments that provide support for multiple DNS dynamic, virtual hosts and more.
villegreys · 3 years ago
Text
Service traccar installed
I encourage you to allocate an 'Elastic IP address' and create an appropriate DNS A record for the domain name you've selected to use. This way your public IP address won't change when you decide to terminate and recreate your instance. A 3rd option would be to make it more robust and scalable using RDS to host the MySQL database backend. In this tutorial I'll assume that my domain is
Second one will use Nginx proxy with SSL Certificate acquired automatically and for free from Let’s Encrypt. First one is the easiest possible (and insecure) giving your Traccar running on HTTP port 80. Don’t worry if you don’t know any of these - everything will go up automatically with just one command. Everything will be set up with some Bash and Ansible scripts in infrastructure as a code style. We’ll use Docker containers to run Traccar and (optionally) Nginx. Please note that Amazon’s default EC2 instance domains *. cannot be used to get a free SSL certificate from Let’s Encrypt as they are considered ephemeral and are on a blacklist. It’s required to set up SSL protected version. One additional thing to consider is to get an external domain name. To keep it cheap even after your free tier is over we’ll only use one EC2 instance to host everything. We’ll do everything on Amazon AWS - there is nothing you need to set up on your local computer. Get a pre-paid one for testing and make sure you have some positive balance. In both cases you will need a GPRS-enabled SIM card for mobile data transmission. Another option is to use a regular smartphone with Traccar Client application installed. For a full list of supported devices please consult devices section on Traccar project website. Some popular cheap models are marketed as TK102B and GT06. You can get your own GPS tracker device from AliExpress, eBay etc. We’ll be using AWS free tier so hosting shouldn’t cost you anything (at least for the first year until your free tier is over).
In this article I’ll show you how to set up your own secure GPS tracking server that you can use to locate your assets like a car or even a luggage. Traccar is an Open Source GPS Tracking Platform that supports a variety of protocols and device models.
hll-howtos · 3 years ago
Text
Setup SSL for HLL crcon
Use NGINX Proxy Manager with Letsencrypt certificates for your HLL crcon
Prerequisites: Functioning crcon environment running on a Linux VPS with already accessible front-end using a noip.com free dynamic DNS domain
Step 1 Create a docker-compose.yml file with the following content:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Now run the command sudo docker-compose up -d and wait while the image is downloaded and the container starts.

Step 2
Access the GUI at http://your_server_IP:81 and log in with [email protected] and the password changeme. Click on the Hosts tab and then on Add proxy host. Enter your DynDNS domain name, e.g. mycrcon.ddns.net; under Scheme select https, under Forward Hostname/IP enter your server's IP address, and under Forward port enter 9010 (the default port; it might be something else if you have changed this). Go to the SSL tab, click on None and select Request a new SSL certificate. Enter your e-mail address, enable the I agree to the Let's Encrypt Terms of Service toggle, and click the Save button. After a short time the proxy host will have been added and you should be able to access the rcon GUI at https://mycrcon.ddns.net. If you have a second or even a third HLL server, you will need a DynDNS domain per server, and you just repeat Step 2 using the correct port (9010, 9011, etc.) for those crcon instances.

Notes
The NGINX Proxy Manager GUI itself is accessible over plain HTTP, so login information is not encrypted. As a best practice I have to advise you to use a VPN connection to your VPS to access its GUI.
Revision 1.0 September-2022
aryansubhash · 7 years ago
Text
Sixteen Steps To Become a DevOps Professional
The DevOps ecosystem has been growing fast for the past few years, but I keep seeing the same question, which is hard to answer in a few lines: how do you become a DevOps engineer?
So, I have decided to write this article to help you become a successful DevOps engineer. Without wasting any time, let's go through the steps.
Here are the 16 steps to follow,
1. Start By Learning About Culture
2. Learn A Programming Language
3. Learn How To Manage Servers
4. Learn Networking and Security Basics
5. Learn Scripting
6. Learn How To Install & Configure Middleware
7. Learn How To Deploy Software
8. Learn GIT
9. Learn How To Build Software
10. Learn How To Automate Your Software Factory
11. Learn Configuration Management
12. Learn Infrastructure As Code
13. Learn How To Monitor Software & Infrastructure
14. Learn About Containers & Orchestration
15. Learn How To Deploy & Manage Serverless Applications
16. Read Technical Articles About DevOps from blogs like DevOps.com, Dzone DevOps, the XebiaLabs DevOps blog, and DevOps Guys
1. Start By Learning about the Culture:
DevOps is a movement and a culture before being a job this is why cultural aspects are very important.
2. Learn A Programming Language:
In my experience, a good DevOps engineer is someone who has skills in both development and operations. Python, Go, Node.js... you have a large choice! You don't necessarily need to learn the same main language that your company uses, but programming skills are really nice to have.
3. Learn How To Manage Servers:
One of the principal tasks that a DevOps professional does is managing servers. Knowing how servers work is a must, and for this, some good knowledge of the hardware (CPU, architecture, memory...) is needed. The other thing to learn is operating systems, especially Linux. You can start by choosing a distribution like Ubuntu.
If you are really just beginning with Linux, you can try it first on your laptop/desktop and start playing with it in order to learn.
You can also use DigitalOcean, Amazon Lightsail or Linode to start a cheap server and start learning Linux.
4. Learn Networking & Security Basics
You may probably say that these are skills for network and security engineers. No! Knowing how HTTP, DNS, FTP and other protocols work, securing your deployed software, anticipating security flaws in the code and configuring your infrastructure network are things that you should know. Using Kali Linux could be a good way to learn networking and security.
5. Learn Scripting
Even with the growing number of tools that could be an alternative to writing your own scripts, scripting is a must-know and you will need it for sure. In my experience, Bash is one of the most used scripting languages. Python is also a good scripting language that lets you move fast while writing less code.
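As a taste of the kind of glue scripting you will end up writing, here is a small illustrative Python script (standard library only) that checks a list of URLs and exits non-zero if any of them is unhealthy; the URLs are placeholders.

#!/usr/bin/env python3
"""Tiny ops script: report HTTP status for a list of URLs."""
import sys
import urllib.error
import urllib.request

URLS = ["https://example.com", "https://example.org/health"]   # placeholders

def check(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.URLError as err:
        return f"ERROR: {err.reason}"

if __name__ == "__main__":
    failures = 0
    for url in URLS:
        status = check(url)
        print(f"{url} -> {status}")
        if status != 200:
            failures += 1
    sys.exit(1 if failures else 0)   # non-zero exit so cron or CI can alert on it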
6. Learn How To Install & Configure Middleware
Apache and Nginx are the most widely used middleware in the DevOps industry, and knowing how to install and configure things like virtual hosts, reverse proxies, domain names, and SSL will help you a lot in your daily tasks. Start by deploying Nginx as a web server for a WordPress blog, then as a load balancer for two backend servers.
7. Learn How to Deploy Software
Once you know how to deploy and configure Nginx, you need to know how to deploy applications to a production server.
Create "hello world" applications using Python, Node.js, and PHP. Deploy these 3 applications. You can use Nginx as a reverse proxy for all of them.
8. Learn GIT
GIT is one of the versioning systems being used in the IT industry. You don’t need to be a GIT expert but this is a technology that will follow you through all of your DevOps experiences.
GIT basics are well explained in the official documentation.
“Pro Git” is the book you really need to read if you want to learn GIT.
9. Learn How to Build Software
Building comes before running. Building software is generally about running a procedure that creates a software release which can run on a production server. A DevOps professional needs to know about this important part of the software lifecycle.
Create an application in the language of your choice and check the different ways to install its dependencies and build your code.
10. Learn How to Automate Your Software Factory
DevOps is not only about automation, but automation is one of the pillars of the DevOps business transformation. Once you have learned how to build software, you can use tools like Jenkins to automate builds and connect your code to the code repository. If you are not familiar with all of this, read about Continuous Integration and Continuous Delivery.
11. Learn Configuration Management
Once things become more complex and you need to manage multiple environments and configurations, learning a configuration management tool will make your life easier.
There are a lot of CM tools like SaltStack, Ansible, Chef, Puppet, etc., and you can find online resources that compare them. Depending on what you need, choose a CM tool and start learning it.
12. Learn Infrastructure as Code
IaC is absolutely important for automating your infrastructure and provisioning your environments with simple scripts or alternative tools. DevOps is about reducing the time to market while keeping good software quality, and IaC will help you with this.
Choose a cloud provider (AWS, GCP, etc.) and you will find a lot of free online resources to start building your infrastructure. You can also learn how to use "cloud manager" technologies; some CM tools like SaltStack can help you provision infrastructure on AWS or GCP, and if you need more, go for technologies like Terraform.
 13. Learn How to Monitor Software & Infrastructure
Software deployed in production, and the infrastructure hosting it, should be monitored. Monitoring and alerting are among the important skills you need to know.
Zabbix, Icinga, Sensu, Prometheus... there are a lot of tools you can learn, but start by comparing them and choose the one that fits your requirements. You can also consider learning how to deploy and use an ELK stack.
14. Learn About Containers & Orchestration
Containers like Docker are becoming a must-know skill! You need to have good skills creating, building, deploying and managing containers in development and production environments.
15. Learn How to Deploy & Manage Serverless Applications
Serverless is one of the most buzzed-about technologies of 2017, and it will soon become a requirement in many job descriptions.
AWS Lambda, Azure Functions, Google Cloud Functions, IBM OpenWhisk, or Auth0 WebTask: you can choose any of them to start learning.
16. Read technical articles about DevOps from blogs like DevOps.com, Dzone DevOps, the XebiaLabs DevOps blog, and DevOps Guys.
slmanblogs · 4 years ago
Text
☁️ cloud ways review
The Detailed Review of Cloudways - A High Performance WordPress Hosting
Hosting is a popular topic when it comes to WordPress sites. Many users often find it difficult to choose the right hosting solution for their website and sometimes it could become a tedious task. The reason is simple, you have a lot of options available out there and you can’t test them all. https://www.cloudways.com/en/?id=973628
Table of Contents hide 1. Cloudways Overview 2. Cloudways Security 3. Cloudways Performance 3.1. CloudwaysCDN 3.2. Cloudways Data Centers 3.3. Free WordPress Cache Plugin 3.4. Advanced Caches 4. Cloudways Platform - Developer Friendly 4.1. Cron Job Management 4.2. Git Deployment 4.3. WP-CLI 4.4. Application Settings 4.5. Staging Site 4.6. Server Cloning 5. Cloudways Pricing 6. Final Thoughts! Considering that, today, I’m going to help you choose the right hosting provider that will not only fulfill your website needs but will also give you a hassle-free working environment. So, in this article, I am going to cover a detailed review of Cloudways - a high-performance WordPress managed hosting and will also discuss the prominent features that make Cloudways stand out from the rest.
Cloudways Overview Cloudways is a managed cloud hosting platform that is partnered with the five top cloud infrastructure providers including AWS, DO, Vultr, Linode, and Google Cloud Platform. It offers a flexible hosting environment that improves your work productivity and gives you a hassle-free experience. It comes with two intuitive panels; server management and application management. On each panel, you can perform all the essential tasks and configurations. In short, it simplifies the hosting process and provides you a managed platform while handling all the complex server configurations. Server Management: Server-related configurations. Configure server settings of Cloudways Application Management: Application related configurations. Configure application settings of Cloudways Cloudways Security For a site owner, maintaining security is the biggest and ongoing challenge, especially with the increasing number of attacks and threats that can cause huge loss of data and can damage your site. This is why you have to make sure that you take various security measures to protect it from hackers. With Cloudways, your site is in good hands because their expert team is ready to deal with security threats and vulnerabilities as soon as they are discovered. Cloudways platform comes with different security features that are designed to secure your site and server at all levels. Let’s take a brief look at its security features. Bot Protection (Free Feature): Cloudways in partnership with MalCare introduced application-level security protection that identifies bad bots, blocks malicious traffic, and protects your site from attacks like Brute Force, Web Scraping, etc. This feature monitors all the activities and provides a detailed traffic report. With a click of a button, you can enable the bot protection feature. SSL Certificates: Cloudways offers free SSL certificates and with a click of a button you can quickly install them. You can install SSL certificates easily for your WordPress website. It supports Let’s Encrypt and custom SSL certificates. Two Factor Authentication: Two Factor Authentication is a great way to strengthen the platform security protection and help you protect your account (access) from attackers or unauthorized users. Two Factor Authentication can strengthen the platform security protection of your website. Database Security: It is important to secure your site from all angles that’s why Cloudways also takes care of your site data. By default, you can’t access your database remotely and to enable this function you have to whitelist your IP addresses (minimize the unwanted database access). Cloudways also takes care of the database security of your WordPress website. SSH/SFTP Logins: Brute force is a common cyberattack where a hacker tries to log in to your server by attempting many passwords and its combinations. Cloudways has a very powerful system where it blocks the IPs that try to exploit your SSH/SFTP. Cloudways has a very powerful system to blocks the IPs that try to exploit your SSH/SFTP. You can also block all the IPs and grant permission access only to specific IPs (whitelist IPs). Firewalls: Cloudways comes with a dedicated firewall that protects your site by allowing access only to specific
ports. Cloudways Performance When you’re considering an optimized performance-oriented hosting then it would not be wrong to say that Cloudways is at the top of the list. Cloudways has a very unique and optimized stack that is specially designed to improve your site performance. It uses a combination of Apache as a web server, NGINX as a reverse proxy, MariaDB as a database solution, PHP, and LINUX as an Operating System. Here, you’ll also find different performance catalysts such as Redis, Varnish, and many more. Cloudways has a very unique and optimized stack that is specially designed to improve your site performance. The best part is that, with Cloudways, you can also manage your packages with a click of a button. CLOUDWAYSCDN Cloudways also offers a Content Delivery Network (CDN) that will help you boost your site performance and faster page-loading. The website’s static content will be cached by CloudwaysCDN and served to the audience from the closest servers. A CDN can boost your website performance and faster page-loading. CLOUDWAYS DATA CENTERS It is always recommended to launch a server that is nearest to your target audience. Thanks to Cloudways, it offers 65+ server locations and choices over the top 5 IaaS providers. Cloudways offers 65+ server locations and choice over the top 5 IaaS providers. FREE WORDPRESS CACHE PLUGIN Cloudways has its own cache plugin known as Breeze and has more than 100,000 active installations. This is a great plugin that is designed to improve your site speed and performance. It comes with robust features like file-level caching, database caching, browser caching, and much more. Also, it has an advanced option from where you can exclude particular URLs, JS files, and CSS files that you wish not to cache. Cloudways has its own WordPress cache plugin known as Breeze. ADVANCED CACHES Cloudways takes care of your site performance at all levels. Therefore, it offers different advanced caching mechanisms like Redis, Varnish, and Memcached. You can easily manage these from the manage services section like enable, purge, restart, and many more with a click of button. Cloudways takes care of your site performance at all levels. Cloudways Platform - Developer Friendly People working around different projects and always looking for a hosting solution that will fulfill their required needs. Cloudways is one of the best-managed hosting providers that offer a flexible working environment and improve work and team productivity. CRON JOB MANAGEMENT What I like about Cloudways is that it provides every solution on its platform from basic to advanced setup. For instance, if you want to set up a cron job to your WordPress site then you have a Cron Job Management section on the application management panel. With a few clicks, you can quickly set up a cron job and not just only a basic PHP script (simple scheduled tasks) as you have an advanced cron job section available for complex scheduled tasks. Manage cron job easily and quickly with Cloudways. GIT DEPLOYMENT If you have a number of people working around different projects and for team coordination, they use a version control system like GIT then you know how difficult it is to manage development workflow in a live environment. But Cloudways provides smooth and seamless integration with Git where you can easily deploy your code to your website. Cloudways provides smooth and seamless integration with Git. 
WP-CLI WP-CLI is a WordPress command-line interface used for interacting and managing WordPress sites without actually using a web browser. It includes a set of commands through which you can carry out administrator-related tasks with ease. WP-CLI is a WordPress command-line interface. APPLICATION SETTINGS This is an important section from where you can control and configure your application settings. From general configurations to Varnish settings. Control and configure all application settings of your hosting server. STAGING SITE Staging is a very essential feature and most of the developers test their site changes,
updates, and bug fixes on a staging site before going to live. With Cloudways, you can quickly create a staging of your WordPress site and easily manage (push and pull changes) the staging environment. Create a staging of your WordPress site with Cloudways. SERVER CLONING For applications, you have a staging/cloning feature. Similarly, for servers, you have a cloning feature available where you can create a copy of your entire server. Server cloning is too easy with Cloudways and all you need to do is click Clone server. Server cloning is too easy and all you need to do is click Clone server. Cloudways Pricing I love the pricing model of Cloudways as it offers a pay-as-you-go model that means they will charge you only for the resources that you consumed. Unlike conventional hosting, Cloudways doesn’t bound you with yearly payment packages. Cloudways doesn’t bound you with yearly payment packages. Cloudways offers a number of features at a very affordable price. The basic plan starts from $10/Month for which you get a 1 GB DO server, 1 core processor, 25GB storage, and 1 TB bandwidth. Cloudways provides a powerful solution, and the basic DO plan is enough to run a small WooCommerce store. If you’re planning to run a big and large WooCommerce store then you must have to consider the other essential WooCommerce items and their estimated cost. Cloudways has already estimated the budget you need so you can refer to it here. Final Thoughts! Now, you know why we recommend Cloudways to our users and why it is one of the leading hosting solutions. Cloudways offers rich features that help you improve your site security, performance, reliability, and even the right hosting environment that speeds up the work productivity level. You may also like Hosting WordPress sites on Azure hosting.
0 notes
thishostingrocks · 4 years ago
Text
How to Install Odoo 14 on Ubuntu 20.04
In this tutorial, we’re going to show you how to install Odoo 14 (community edition) on an Ubuntu 20.04 server with Nginx as a reverse proxy and HTTPS through a free SSL certificate (Let’s Encrypt). (more…)
Tumblr media
View On WordPress
0 notes
bigdataschool-moscow · 5 years ago
Text
How to Make Elasticsearch Secure: Protecting Big Data from Leaks
Tumblr media
Yesterday we talked about the most notorious Big Data leaks from openly exposed Elasticsearch (ES) servers. Today we will look at how to prevent such incidents and reliably protect your big data. In this article, read about the main security features of the ELK stack: what protection they provide and where the catch is.
Several cybersecurity solutions for ES under different licenses
A little over a year ago, on May 20, 2019, Elastic announced that the basic information security features of the ELK stack would from then on be free for all users, not only commercial subscribers. This covered the following capabilities [1]:
· the TLS transport-layer cryptographic protocol for encrypted communication;
· tooling for creating and managing user accounts (file and native realm);
· role-based access control (RBAC) over user access to the API and the cluster;
· multi-user access to Kibana using Kibana Spaces.
However, the news was not met with much enthusiasm from the professional community [2], in particular because the vast majority of ELK stack components (Elasticsearch, Logstash, Kibana, Filebeat, Grafana) were already free and open, distributed under the Apache 2.0 license. Meanwhile X-Pack, Elastic's commercial product that extends ELK, including its cybersecurity capabilities, was not free and was distributed under the Elastic License. It has been open since 2018, but it never stopped being a commercial product. What really irritated users was the licensing confusion: since version 6.3, ELK and X-Pack have been merged into a single GitHub repository, which makes it impossible to tell which parts are free and which you have to pay for [3].
Tumblr media
Changes in the licensing scheme of Elastic Stack products. In light of this, Amazon came out with its own product built on the ELK Stack: Open Distro for Elasticsearch. From a cybersecurity standpoint, the most interesting Open Distro capabilities are [4]:
· cluster protection tools: authentication via Active Directory, Kerberos, SAML and OpenID, single sign-on (SSO), traffic encryption, an RBAC model for fine-grained access control, detailed logging, and compliance tooling;
· an event-tracking and alerting system for monitoring the state of your data, with automatic notifications when anomalies or security violations occur. The conditions for such events can be defined using the Elasticsearch query language and scripting, and monitoring is visualized through the Kibana web interface.
Amazon's Open Distro is not the only commercial cybersecurity solution for Elasticsearch. Another example is Search Guard, a comprehensive security module that supports authentication via Active Directory, LDAP, Kerberos, JSON web tokens, SAML, OpenID and many others. It also offers fine-grained RBAC access to indices, documents and fields, multi-tenant mode in Kibana, and support for GDPR, HIPAA, PCI, SOX and ISO requirements through change auditing and compliance logging [5].
How to secure the free Elasticsearch: an action plan for the Big Data administrator
When you use the free version of Elasticsearch, you have to secure it manually. To do so, the Big Data administrator should take a number of steps [6]:
· protect the connection to the database by setting up authentication with Amazon's Open Distro. For example, on Debian-like operating systems such as Ubuntu, installation of the plugin from the repository starts with the command: wget -qO - https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch | sudo apt-key add -
· next, configure SSL-protected communication between the Elasticsearch cluster nodes, using either a certificate authority or a self-generated certificate;
· change the passwords of the internal users;
· configure the operating system firewall, allowing connections to Elasticsearch;
· update the passwords and verify all settings.
It is also worth keeping the default open ports in mind: ES uses port 9200 for HTTP traffic and 9300 for communication between cluster nodes. It is recommended to replace them with different ports by editing the elasticsearch.yml file on every server. This restricts external access to the ES database so that outsiders cannot reach the data or shut down the entire Elasticsearch cluster through the REST API [7]. As an alternative, you can add a reverse proxy such as Nginx to the ELK cluster. A reverse proxy relays client requests from the external network to one or more servers that logically sit on the internal network; to the client, it looks as if the requested resources live on the proxy server itself. Unlike a classic proxy, which forwards client requests to any servers on the network and returns the results to them, a reverse proxy communicates directly only with its associated nodes and returns responses only from them [8]. Nginx as a reverse proxy can therefore protect Elasticsearch and Kibana from access by unauthorized users; to do this you need to enable HTTPS with a valid SSL certificate [9] (see the illustrative sketch below). It is also worth mentioning Cilium, an open-source product for transparently securing network connectivity between application services deployed on Linux container management platforms such as Docker and Kubernetes. Cilium is based on BPF, a Linux kernel technology that provides API-aware network visibility and security without changes to application code or containers. Cilium runs fully distributed on the Linux nodes where the workloads execute, avoiding centralized choke points. Its integration with Kubernetes lets Cilium combine traffic visibility with pod identity and apply the right security policies even when workloads run on different cluster nodes. Cilium can filter individual API calls and enforce least-privilege access policies for Elasticsearch and any other API services based on HTTP, gRPC and Kafka. Visibility and security policies are enforced efficiently on the network data flowing through the kernel into and out of the container, with no detours through a centralized firewall or proxy [10].
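To make the last two recommendations more concrete, here is a minimal sketch (the port numbers, addresses, host names and file paths are illustrative placeholders, not prescriptions from the cited sources): non-default ports are set in elasticsearch.yml on every node, and Nginx sits in front of the cluster, terminating HTTPS and requiring a password.

# elasticsearch.yml: bind to an internal interface and move the default ports
network.host: 10.0.0.5
http.port: 9210
transport.port: 9310

# Nginx reverse proxy: terminates TLS and requires basic authentication before anything reaches ES
server {
    listen 443 ssl;
    server_name es.example.com;
    ssl_certificate /etc/letsencrypt/live/es.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/es.example.com/privkey.pem;
    auth_basic "Elasticsearch";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://10.0.0.5:9210;
    }
}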
Tumblr media
Using Cilium to protect Elasticsearch from leaks. In the next article we will look at how to use the ELK Stack for big data analytics and how to work with machine learning algorithms in Elasticsearch and Kibana, as well as what else Amazon's Open Distro has to offer. And you can learn how to secure an Elasticsearch cluster and other big data collection and analytics systems in your own digitalization projects at our hands-on courses on administering and operating Big Data systems, held at our licensed training center for managers and IT specialists (developers, architects, engineers and analysts) in Moscow. View the schedule. Sign up for the course.
Sources:
1. https://www.elastic.co/blog/security-for-elasticsearch-is-now-free
2. https://habr.com/ru/company/itsumma/blog/453110/
3. https://habr.com/ru/post/443528/
4. https://www.opennet.ru/opennews/art.shtml?num=50322
5. https://www.fgts.ru/collection/search-guard
6. https://habr.com/ru/company/dataline/blog/487210/
7. https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-18-04-ru
8. https://ru.wikipedia.org/wiki/Обратный_прокси
9. https://netpoint-dc.com/blog/elk-authentication-nginx/
10. https://cilium.io/blog/2018/07/10/cilium-security-elasticsearch/
Read the full article
0 notes
t-baba · 5 years ago
Photo
Tumblr media
How to Use SSL/TLS with Node.js
In 2020, there’s no reason for your website not to use HTTPS. Visitors expect it, Google uses it as a ranking factor and browser makers will happily name and shame those sites not using it.
In this tutorial, I’ll walk you through a practical example of how to add a Let’s Encrypt–generated certificate to your Express.js server.
But protecting our sites and apps with HTTPS isn’t enough. We should also demand encrypted connections from the servers we’re talking to. We’ll see that possibilities exist to activate the SSL/TLS layer even when it’s not enabled by default.
Note: if you’re looking for instructions on how to set up SSL with NGINX when configuring it to work as a reverse proxy for a Node app, check out our quick tip, “Configuring NGINX and SSL with Node.js”.
Let’s start with a short review of the current state of HTTPS.
HTTPS Everywhere
The HTTP/2 specification was published as RFC 7540 in May 2015, which means at this point it’s a part of the standard. This was a major milestone. Now we can all upgrade our servers to use HTTP/2. One of the most important aspects is the backwards compatibility with HTTP 1.1 and the negotiation mechanism to choose a different protocol. Although the standard doesn’t specify mandatory encryption, currently no browser supports HTTP/2 unencrypted. This gives HTTPS another boost. Finally we’ll get HTTPS everywhere!
What does our stack actually look like? From the perspective of a website running in the browser (at the application level) we have to traverse the following layers to reach the IP level:
Client browser
HTTP
SSL/TLS
TCP
IP
HTTPS is nothing more than the HTTP protocol on top of SSL/TLS. Hence all of HTTP’s rules still apply. What does this additional layer actually give us? There are multiple advantages: we get authentication by having keys and certificates; a certain kind of privacy and confidentiality is guaranteed, as the connection is encrypted in an asymmetric manner; and data integrity is also preserved, as transmitted data can’t be changed during transit.
One of the most common myths is that using SSL/TLS is computationally expensive and slows the server down. This is certainly not true anymore. We also don't need any specialized hardware with cryptography units. Even for Google, the SSL/TLS layer accounts for less than 1% of the CPU load, and the network overhead of HTTPS as compared to HTTP is below 2%. All in all, it wouldn't make sense to forgo HTTPS for the sake of a little overhead.
As Ilya Grigorik puts it, there is but one performance problem:
TLS has exactly one performance problem: it is not used widely enough. Everything else can be optimized: https://t.co/1kH8qh89Eg
— Ilya Grigorik (@igrigorik) February 20, 2014
The most recent version is TLS 1.3. TLS is the successor of SSL, which is available in its latest release SSL 3.0. The changes from SSL to TLS preclude interoperability, but the basic procedure is, however, unchanged. We have three different encrypted channels. The first is a public key infrastructure for certificate chains. The second provides public key cryptography for key exchanges. Finally, the third one is symmetric. Here we have cryptography for data transfers.
TLS 1.3 uses hashing for some important operations. Theoretically, it’s possible to use any hashing algorithm, but it’s highly recommended to use SHA2 or a stronger algorithm. SHA1 has been a standard for a long time but has recently become obsolete.
HTTPS is also gaining more attention for clients. Privacy and security concerns have always been around, but with the growing amount of online accessible data and services, people are getting more and more concerned. For those sites that don’t implement it, there is a useful browser extension — HTTPS Everywhere from the EFF — which encrypts our communications with most websites.
Tumblr media
The creators realized that many websites offer HTTPS only partially. The plugin allows us to rewrite requests for those sites that offer only partial HTTPS support. Alternatively, we can also block HTTP altogether (see the screenshot above).
Basic Communication
The certificate’s validation process involves validating the certificate signature and expiration. We also need to verify that it chains to a trusted root. Finally, we need to check to see if it’s been revoked. There are dedicated, trusted authorities in the world that grant certificates. In case one of these were to become compromised, all other certificates from the said authority would get revoked.
The sequence diagram for an HTTPS handshake looks as follows. We start with the initialization from the client, which is followed by a message with the certificate and key exchange. After the server sends its completed package, the client can start the key exchange and cipher specification transmission. At this point, the client is finished. Finally the server confirms the cipher specification selection and closes the handshake.
Tumblr media
The whole sequence is triggered independently of HTTP. If we decide to use HTTPS, only the socket handling is changed. The client is still issuing HTTP requests, but the socket will perform the previously described handshake and encrypt the content (header and body).
So what do we need to make SSL/TLS work with an Express.js server?
HTTPS
By default, Node.js serves content over HTTP. But there’s also an HTTPS module that we have to use in order to communicate over a secure channel with the client. This is a built-in module, and the usage is very similar to how we use the HTTP module:
const express = require("express");
const https = require("https"),
      fs = require("fs");

const options = {
  key: fs.readFileSync("/srv/www/keys/my-site-key.pem"),
  cert: fs.readFileSync("/srv/www/keys/chain.pem")
};

const app = express();

app.use((req, res) => {
  res.writeHead(200);
  res.end("hello world\n");
});

// Plain HTTP on port 8000, HTTPS on port 8080
app.listen(8000);
https.createServer(options, app).listen(8080);
Ignore the /srv/www/keys/my-site-key.pem and /srv/www/keys/chain.pem files for the moment. Those are the SSL certificates we need to generate, which we'll do a bit later. This is the part that changed with Let's Encrypt. Previously, we had to generate a private/public key pair, send it to a trusted authority, pay them and probably wait for a bit in order to get an SSL certificate. Nowadays, Let's Encrypt instantly generates and validates your certificates for free!
Generating Certificates
Certbot
The TLS specification demands a certificate, which is signed by a trusted certificate authority (CA). The CA ensures that the certificate holder is really who they claim to be. So basically when you see the green lock icon (or any other greenish sign to the left side of the URL in your browser) it means that the server you’re communicating with is really who it claims to be. If you’re on facebook.com and you see a green lock, it’s almost certain you really are communicating with Facebook and no one else can see your communication — or rather, no one else can read it.
It’s worth noting that this certificate doesn’t necessarily have to be verified by an authority such as Let’s Encrypt. There are other paid services as well. You can technically sign it yourself, but then (as you’re not a trusted CA) the users visiting your site will likely see a large scary warning offering to get them back to safety.
In the following example, we’ll use the Certbot, which is used to generate and manage certificates with Let’s Encrypt.
On the Certbot site you can find instructions on how to install Certbot for almost any OS/server combination. You should choose the options that are applicable to you.
A common combination for deploying Node apps is NGINX on the latest LTS Ubuntu and that’s what I’ll use here.
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
Webroot
Webroot is a Certbot plugin that, in addition to the Certbot default functionality (which automatically generates your public/private key pair and generates an SSL certificate for those), also copies the certificates to your webroot folder and verifies your server by placing some verification code into a hidden temporary directory named .well-known. In order to skip doing some of these steps manually, we’ll use this plugin. The plugin is installed by default with Certbot. In order to generate and verify our certificates, we’ll run the following:
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com
You may have to run this command as sudo, as it will try to write to /var/log/letsencrypt.
You’ll also be asked for your email address. It’s a good idea to put in a real address you use often, as you’ll get a notification if your certificate is about to expire. The trade-off for Let’s Encrypt issuing a free certificate is that it expires every three months. Luckily, renewal is as easy as running one simple command, which we can assign to a cron job and then not have to worry about expiration. Additionally, it’s a good security practice to renew SSL certificates, as it gives attackers less time to break the encryption. Sometimes developers even set up this cron to run daily, which is completely fine and even recommended.
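As a rough example (the exact schedule and reload command will depend on your setup), a root crontab entry that attempts a renewal twice a day and reloads NGINX whenever a certificate was actually renewed could look like this:

0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"

certbot renew only replaces certificates that are close to expiring, so running it frequently is harmless.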
Keep in mind that you have to run the certonly command above on a server to which the domain specified under the -d (for domain) flag resolves, that is, your production server. Even if you have the DNS resolution in your local hosts file, this won't work, as the domain will be verified from outside. So if you're doing this locally, it will most likely fail, unless you opened up a port from your local machine to the outside world and have it running behind a domain name which resolves to your machine. This is a highly unlikely scenario.
Last but not least, after running this command, the output will contain paths to your private key and certificate files. Copy these values into the previous code snippet — into the cert property for certificate, and the key property for the key:
// ...
const options = {
  key: fs.readFileSync("/var/www/example/sslcert/privkey.pem"),
  cert: fs.readFileSync("/var/www/example/sslcert/fullchain.pem")
  // these paths might differ for you, make sure to copy from the certbot output
};
// ...
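If you'd like to sanity-check the files Certbot produced before wiring them into Node (a quick optional verification step, using the same example paths as above), OpenSSL can print the certificate's subject and validity window:

openssl x509 -in /var/www/example/sslcert/fullchain.pem -noout -subject -dates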
The post How to Use SSL/TLS with Node.js appeared first on SitePoint.
by Florian Rappl via SitePoint https://ift.tt/2x69fcL
0 notes
cybercrew · 6 years ago
Photo
Tumblr media
Nmap Defcon Release! 80+ improvements include new NSE scripts/libs, new Npcap, etc.
Fellow hackers, I'm here in Las Vegas for Defcon and delighted to release Nmap 7.80.  It's the first formal Nmap release in more than a year, and I hope you find it worth the wait! The main reason for the delay is that we've been working so hard on our Npcap Windows packet capturing driver.  As many of you know, Windows Nmap traditionally depended on Winpcap for packet capture.  That is great software, but it has been discontinued and has seen no updates since 2013. It doesn't always work on Windows 10, and it depends on long-deprecated Windows API's that Microsoft could remove at any time.  So we've spent the last few years building our own Npcap raw packet capturing/sending driver, starting with Winpcap as the base.  It uses modern APIs and is more performant as well as more secure and more featureful.  We've had 15 Npcap releases since Nmap 7.70 and we're really happy with where it is now.  Even Wireshark switched to Npcap recently.  More details on Npcap can be found at https://npcap.org. But Windows users aren't the only ones benefiting from this new Nmap release.  It includes 80+ cross-platform improvements you can read about below, including 11 new NSE scripts, a bunch of new libraries, bug fixes and performance improvements. map 7.80 source code and binary packages for Linux, Windows, and Mac are available for free download from the usual spot: https://nmap.org/download.html If you find any bugs in this release, please let us know on the Nmap Dev list or bug tracker as described at https://nmap.org/book/man-bugs.html. Here is the full list of significant changes since 7.70: map 7.70 source code and binary packages for Linux, Windows, and Mac are available for free download from the usual spot: https://nmap.org/download.html If you find any bugs in this release, please let us know on the Nmap Dev list or bug tracker as described at https://nmap.org/book/man-bugs.html. Here is the full list of significant changes: o [Windows] The Npcap Windows packet capturing library (https://npcap.org/)  is faster and more stable than ever. Nmap 7.80 updates the bundled Npcap  from version 0.99-r2 to 0.9982, including all of these changes from the  last 15 Npcap releases: https://nmap.org/npcap/changelog o [NSE] Added 11 NSE scripts, from 8 authors, bringing the total up to 598!  They are all listed at https://nmap.org/nsedoc/, and the summaries are  below:  + [GH#1232] broadcast-hid-discoveryd discovers HID devices on a LAN by    sending a discoveryd network broadcast probe. [Brendan Coles]  + [GH#1236] broadcast-jenkins-discover discovers Jenkins servers on a LAN    by sending a discovery broadcast probe. [Brendan Coles]  + [GH#1016][GH#1082] http-hp-ilo-info extracts information from HP    Integrated Lights-Out (iLO) servers. [rajeevrmenon97]  + [GH#1243] http-sap-netweaver-leak detects SAP Netweaver Portal with the    Knowledge Management Unit enabled with anonymous access. [ArphanetX]  + https-redirect detects HTTP servers that redirect to the same port, but    with HTTPS. Some nginx servers do this, which made ssl-* scripts not run    properly. [Daniel Miller]  + [GH#1504] lu-enum enumerates Logical Units (LU) of TN3270E servers.    [Soldier of Fortran]  + [GH#1633] rdp-ntlm-info extracts Windows domain information from RDP    services. [Tom Sellers]  + smb-vuln-webexec checks whether the WebExService is installed and allows    code execution. [Ron Bowes]  + smb-webexec-exploit exploits the WebExService to run arbitrary commands    with SYSTEM privileges. 
[Ron Bowes]  + [GH#1457] ubiquiti-discovery extracts information from the Ubiquiti    Discovery service and assists version detection. [Tom Sellers]  + [GH#1126] vulners queries the Vulners CVE database API using CPE    information from Nmap's service and application version detection.    [GMedian, Daniel Miller] o [GH#1291][GH#34][GH#1339] Use pcap_create instead of pcap_live_open in  Nmap, and set immediate mode on the pcap descriptor. This solves packet  loss problems on Linux and may improve performance on other platforms.  [Daniel Cater, Mike Pontillo, Daniel Miller] o [NSE] Collected utility functions for string processing into a new  library, stringaux.lua. [Daniel Miller] o [NSE] New rand.lua library uses the best sources of random available on  the system to generate random strings. [Daniel Miller] o [NSE] New library, oops.lua, makes reporting errors easy, with plenty of  debugging detail when needed, and no clutter when not. [Daniel Miller] o [NSE] Collected utility functions for manipulating and searching tables  into a new library, tableaux.lua. [Daniel Miller] o [NSE] New knx.lua library holds common functions and definitions for  communicating with KNX/Konnex devices. [Daniel Miller] o [NSE][GH#1571] The HTTP library now provides transparent support for gzip-  encoded response body. (See https://github.com/nmap/nmap/pull/1571 for an  overview.) [nnposter] o [Nsock][Ncat][GH#1075] Add AF_VSOCK (Linux VM sockets) functionality to  Nsock and Ncat. VM sockets are used for communication between virtual  machines and the hypervisor. [Stefan Hajnoczi] o [Security][Windows] Address CVE-2019-1552 in OpenSSL by building with the  prefix "C:\Program Files (x86)\Nmap\OpenSSL". This should prevent  unauthorized users from modifying OpenSSL defaults by writing  configuration to this directory. o [Security][GH#1147][GH#1108] Reduced LibPCRE resource limits so that  version detection can't use as much of the stack. Previously Nmap could  crash when run on low-memory systems against target services which are  intentionally or accidentally difficult to match. Someone assigned  CVE-2018-15173 for this issue. [Daniel Miller] o [GH#1361] Deprecate and disable the -PR (ARP ping) host discovery  option. ARP ping is already used whenever possible, and the -PR option  would not force it to be used in any other case. [Daniel Miller] o [NSE] bin.lua is officially deprecated. Lua 5.3, added 2 years ago in Nmap  7.25BETA2, has native support for binary data packing via string.pack and  string.unpack. All existing scripts and libraries have been updated.  [Daniel Miller] o [NSE] Completely removed the bit.lua NSE library. All of its functions are  replaced by native Lua bitwise operations, except for `arshift`  (arithmetic shift) which has been moved to the bits.lua library. [Daniel  Miller] o [NSE][GH#1571] The HTTP library is now enforcing a size limit on the  received response body. The default limit can be adjusted with a script  argument, which applies to all scripts, and can be overridden case-by-case  with an HTTP request option. (See https://github.com/nmap/nmap/pull/1571  for details.)  [nnposter] o [NSE][GH#1648] CR characters are no longer treated as illegal in script  XML output. [nnposter] o [GH#1659] Allow resuming nmap scan with lengthy command line [Clément  Notin] o [NSE][GH#1614] Add TLS support to rdp-enum-encryption. Enables determining  protocol version against servers that require TLS and lays ground work for  some NLA/CredSSP information collection. 
[Tom Sellers] o [NSE][GH#1611] Address two protocol parsing issues in rdp-enum-encryption  and the RDP nse library which broke scanning of Windows XP. Clarify  protocol types [Tom Sellers] o [NSE][GH#1608] Script http-fileupload-exploiter failed to locate its  resource file unless executed from a specific working  directory. [nnposter] o [NSE][GH#1467] Avoid clobbering the "severity" and "ignore_404" values of  fingerprints in http-enum. None of the standard fingerprints uses these  fields. [Kostas Milonas] o [NSE][GH#1077] Fix a crash caused by a double-free of libssh2 session data  when running SSH NSE scripts against non-SSH services. [Seth Randall] o [NSE][GH#1565] Updates the execution rule of the mongodb scripts to be  able to run on alternate ports. [Paulino Calderon] o [Ncat][GH#1560] Allow Ncat to connect to servers on port 0, provided that  the socket implementation allows this. [Daniel Miller] o Update the included libpcap to 1.9.0. [Daniel Miller] o [NSE][GH#1544] Fix a logic error that resulted in scripts not honoring the  smbdomain script-arg when the target provided a domain in the NTLM  challenge.  [Daniel Miller] o [Nsock][GH#1543] Avoid a crash (Protocol not supported) caused by trying  to reconnect with SSLv2 when an error occurs during DTLS connect. [Daniel  Miller] o [NSE][GH#1534] Removed OSVDB references from scripts and replaced them  with BID references where possible. [nnposter] o [NSE][GH#1504] Updates TN3270.lua and adds argument to disable TN3270E  [Soldier of Fortran] o [GH#1504] RMI parser could crash when encountering invalid input [Clément  Notin] o [GH#863] Avoid reporting negative latencies due to matching an ARP or ND  response to a probe sent after it was recieved. [Daniel Miller] o [Ncat][GH#1441] To avoid confusion and to support non-default proxy ports,  option --proxy now requires a literal IPv6 address to be specified using  square-bracket notation, such as --proxy [2001:db8::123]:456. [nnposter] o [Ncat][GH#1214][GH#1230][GH#1439] New ncat option provides control over  whether proxy destinations are resolved by the remote proxy server or  locally, by Ncat itself. See option --proxy-dns. [nnposter] o [NSE][GH#1478] Updated script ftp-syst to prevent potential endless  looping.  [nnposter] o [GH#1454] New service probes and match lines for v1 and v2 of the Ubiquiti  Discovery protocol. Devices often leave the related service open and it  exposes significant amounts of information as well as the risk of being  used as part of a DDoS. New nmap-payload entry for v1 of the  protocol. [Tom Sellers] o [NSE] Removed hostmap-ip2hosts.nse as the API has been broken for a while  and the service was completely shutdown on Feb 17th, 2019. [Paulino  Calderon] o [NSE][GH#1318] Adds TN3270E support and additional improvements to  tn3270.lua and updates tn3270-screen.nse to display the new  setting. [mainframed] o [NSE][GH#1346] Updates product codes and adds a check for response length  in enip-info.nse. The script now uses string.unpack. [NothinRandom] o [Ncat][GH#1310][GH#1409] Temporary RSA keys are now 2048-bit to resolve a  compatibility issue with OpenSSL library configured with security level 2,  as seen on current Debian or Kali.  [Adrian Vollmer, nnposter] o [NSE][GH#1227] Fix a crash (double-free) when using SSH scripts against  non-SSH services. [Daniel Miller] o [Zenmap] Fix a crash when Nmap executable cannot be found and the system  PATH contains non-UTF-8 bytes, such as on Windows. 
[Daniel Miller] o [Zenmap] Fix a crash in results search when using the dir: operator:    AttributeError: 'SearchDB' object has no attribute 'match_dir' [Daniel    Miller] o [Ncat][GH#1372] Fixed an issue with Ncat -e on Windows that caused early  termination of connections. [Alberto Garcia Illera] o [NSE][GH#1359] Fix a false-positive in http-phpmyadmin-dir-traversal when  the server responds with 200 status to a POST request to any  URI. [Francesco Soncina] o [NSE] New vulnerability state in vulns.lua, UNKNOWN, is used to indicate  that testing could not rule out vulnerability. [Daniel Miller] o [GH#1355] When searching for Lua header files, actually use them where  they are found instead of forcing /usr/include. [Fabrice Fontaine, Daniel  Miller] o [NSE][GH#1331] Script traceroute-geolocation no longer crashes when  www.GeoPlugin.net returns null coordinates [Michal Kubenka, nnposter] o Limit verbose -v and debugging -d levels to a maximum of 10. Nmap does not  use higher levels internally. [Daniel Miller] o [NSE] tls.lua when creating a client_hello message will now only use a  SSLv3 record layer if the protocol version is SSLv3. Some TLS  implementations will not handshake with a client offering less than  TLSv1.0. Scripts will have to manually fall back to SSLv3 to talk to  SSLv3-only servers. [Daniel Miller] o [NSE][GH#1322] Fix a few false-positive conditions in  ssl-ccs-injection. TLS implementations that responded with fatal alerts  other than "unexpected message" had been falsely marked as  vulnerable. [Daniel Miller] o Emergency fix to Nmap's birthday announcement so Nmap wishes itself a  "Happy 21st Birthday" rather than "Happy 21th" in verbose mode (-v) on  September 1, 2018. [Daniel Miller] o [GH#1150] Start host timeout clocks when the first probe is sent to a  host, not when the hostgroup is started. Sometimes a host doesn't get  probes until late in the hostgroup, increasing the chance it will time  out. [jsiembida] o [NSE] Support for edns-client-subnet (ECS) in dns.lua has been improved by:  - [GH#1271] Using ECS code compliant with RFC 7871 [John Bond]  - Properly trimming ECS address, as mandated by RFC 7871 [nnposter]  - Fixing a bug that prevented using the same ECS option table more than    once [nnposter] o [Ncat][GH#1267] Fixed communication with commands launched with -e or -c  on Windows, especially when --ssl is used. [Daniel Miller] o [NSE] Script http-default-accounts can now select more than one  fingerprint category. It now also possible to select fingerprints by name  to support very specific scanning. [nnposter] o [NSE] Script http-default-accounts was not able to run against more than  one target host/port. [nnposter] o [NSE][GH#1251] New script-arg `http.host` allows users to force a  particular value for the Host header in all HTTP requests. o [NSE][GH#1258] Use smtp.domain script arg or target's domain name instead  of "example.com" in EHLO command used for STARTTLS. [gwire] o [NSE][GH#1233] Fix brute.lua's BruteSocket wrapper, which was crashing  Nmap with an assertion failure due to socket mixup [Daniel Miller]: nmap:  nse_nsock.cc:672: int receive_buf(lua_State*, int, lua_KContext):  Assertion `lua_gettop(L) == 7' failed. o [NSE][GH#1254] Handle an error condition in smb-vuln-ms17-010 caused by  IPS closing the connection. [Clément Notin] o [Ncat][GH#1237] Fixed literal IPv6 URL format for connecting through HTTP  proxies. [Phil Dibowitz] o [NSE][GH#1212] Updates vendors from ODVA list for enip-info. 
[NothinRandom] o [NSE][GH#1191] Add two common error strings that improve MySQL detection  by the script http-sql-injection. [Robert Taylor, Paulino Calderon] o [NSE][GH#1220] Fix bug in http-vuln-cve2006-3392 that prevented the script  to generate the vulnerability report correctly. [rewardone] o [NSE][GH#1218] Fix bug related to screen rendering in NSE library  tn3270. This patch also improves the brute force script  tso-brute. [mainframed] o [NSE][GH#1209] Fix SIP, SASL, and HTTP Digest authentication when the  algorithm contains lowercase characters. [Jeswin Mathai] o [GH#1204] Nmap could be fooled into ignoring TCP response packets if they  used an unknown TCP Option, which would misalign the validation, causing  it to fail. [Clément Notin, Daniel Miller] o [NSE]The HTTP response parser now tolerates status lines without a reason  phrase, which improves compatibility with some HTTP servers. [nnposter] o [NSE][GH#1169][GH#1170][GH#1171]][GH#1198] Parser for HTTP Set-Cookie header  is now more compliant with RFC 6265:  - empty attributes are tolerated  - double quotes in cookie and/or attribute values are treated literally  - attributes with empty values and value-less attributes are parsed equally  - attributes named "name" or "value" are ignored  [nnposter] o [NSE][GH#1158] Fix parsing http-grep.match script-arg. [Hans van den  Bogert] o [Zenmap][GH#1177] Avoid a crash when recent_scans.txt cannot be written  to.  [Daniel Miller] o Fixed --resume when the path to Nmap contains spaces. Reported on Windows  by Adriel Desautels. [Daniel Miller] o New service probe and match lines for adb, the Android Debug Bridge, which  allows remote code execution and is left enabled by default on many  devices. [Daniel Miller] Enjoy this new release and please do let us know if you find any problems! Download link: https://nmap.org/download.html Cheers, Fyodor
Source code: https://seclists.org/nmap-announce/2019/0
When more information is available our blog will be updated.
Read more cyber news: visit our Facebook page at https://www.facebook.com/pages/Cyber-crew/780504721973461
Read more cyber news: visit our Twitter page at https://twitter.com/Cyber0Crew
~R@@T @CCE$$~
1 note · View note
joelby · 6 years ago
Text
Using Splunk SmartStore with MinIO
Update - MinIO now provide a guide for setting up Splunk SmartStore with MinIO
Your Splunk server is gobbling up disk space and you don’t want to upgrade your disks. What to do?
My initial solution was to use frozen buckets to limit data retention by setting the following in ~splunk/etc/system/local/indexes.conf:
[default]
coldToFrozenDir = /opt/frozen-archives
frozenTimePeriodInSecs = 39000000
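(For reference, 39,000,000 seconds is roughly 451 days, so with this setting buckets roll to frozen after about 15 months.)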
Any buckets older than the frozen time period would be moved into this frozen-archives directory, from where I would periodically archive them to another machine. They were completely inaccessible there, but I felt a little better knowing they hadn't simply been deleted.
Eventually, due to increasing data volumes, I found that I was still running low on disk space. The next option was to split the indexes up and apply different retention policies to each. Some log data is only useful for short-term investigations and doesn't need much retention, so those indexes were set to simply erase data after the frozen time period. Eventually I started running out of disk space again!
A while back, Splunk added support for Amazon S3 storage using something called SmartStore, which copies warm buckets to remote storage. It can then evict them from the local index at its leisure, and then bring them back when they are needed. This sounds like just what I want! I had a heck of a time getting it working though.
The first problem was that I wanted to use MinIO instead of Amazon S3. S3 is pretty inexpensive, but in this case I had another 'storage' VPS at the same hosting provider with plenty of disk space and free gigabit intra-VPS transfer. Here's how I got it working!
On the storage server
Download minio
Create configuration for it:
MINIO_VOLUMES="/home/minio/data"
MINIO_OPTS="--address 127.0.0.1:9000 --compat"
MINIO_ACCESS_KEY=???
MINIO_SECRET_KEY=???
Set up a systemd unit file for it
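If you haven't written one before, a minimal unit file along these lines should do the trick (this is just a sketch; the user, binary path and EnvironmentFile location are assumptions, so adjust them to wherever you put things):

[Unit]
Description=MinIO object storage
After=network-online.target
Wants=network-online.target

[Service]
User=minio
Group=minio
# Points at the configuration created in the previous step
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
Restart=on-failure

[Install]
WantedBy=multi-user.target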
Set up nginx as a reverse proxy. This is mainly so that I can use certbot and Let's Encrypt:
server {
    listen 9001 ssl http2;
    listen [::]:9001 ssl http2;
    server_name minio.example.com;

    ssl_certificate /etc/letsencrypt/live/minio.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/minio.example.com/privkey.pem; # managed by Certbot

    ignore_invalid_headers off;

    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;

    # To disable buffering
    proxy_buffering off;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:9000;
        # health_check uri=/minio/health/ready;
    }
}
On the Splunk server
In indexes.conf, configure a remote store:
[volume:remote_store]
storageType = remote
path = s3://splunk/  # Replace splunk with whatever you want to call it
remote.s3.access_key =  # from minio's configuration
remote.s3.secret_key =  # from minio's configuration
remote.s3.endpoint = https://minio.example.com:9001/
remote.s3.auth_region = us-east-1
Add a file to MinIO (I think you can just copy something into its data directory manually on the filesystem) and then confirm that Splunk can see the file:
~splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
Then you can add the following for a specific index
[fooindex]
remotePath = volume:remote_store/fooindex
# The following two lines are apparently required, but ignored
coldPath = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
Restart Splunk
Monitor splunkd.log for activity by S3Client, BucketMover, and CacheManager
The configuration settings are a bit opaque and confusing. As far as I can tell, it will just start uploading warm buckets as it feels like it, and then evict them on a least recently used basis when you get close to your minimum free disk space.
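For what it's worth, there are a couple of knobs for this in server.conf on the indexer. As far as I can tell the cache manager stanza looks something like the following (the numbers are made-up examples; check the Splunk docs for your version before relying on them):

[cachemanager]
# Maximum local disk (in MB) the SmartStore cache may use; 0 means unlimited
max_cache_size = 100000
# Extra space (in MB) to keep free on top of minFreeSpace before evicting buckets
eviction_padding = 5120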
Gotchas
Do not apply a remotePath to the [default] index. I did this (by mistake) and while data started uploading happily, it seemed like the frozen time retention policies of different indexes started to stomp over each other, so files were frozen (actually - deleted) from remote storage, probably according to the retention policy of the index with the smallest retention time. This was a bit of a disaster. It might work if you do remotePath=volume:remote_store/$_index_name.
It seems like Splunk requires MinIO to be run with the --compat parameter. I had a lot of trouble getting it to work without that.
MinIO isn't quite as full-featured as S3 - if you want to have different access keys for different users/purposes, I guess you're meant to run another instance of the servers. I wasted a lot of time trying to set up different keys and access levels, all of which kind of look like they are supported but don't really work the way you expect.
0 notes