#Continue Keyword in PowerShell
To install ESET Smart Security Premium on your computer, follow these instructions: Uninstall any previously installed antivirus software on your system. Download ESET Smart Security Premium here. Double-click the downloaded file and complete the installation. After installation, activate your ESET antivirus with your license key.
Continuous Bernoulli distribution---simulator and test statistic
Kuan-Sian Wang; Mei-Yu Lee | Computer Sciences Rating: Rated: 0 times Format: PDF
This is an advanced and completely descriptive book for the continuous Bernoulli distribution that is very important to deep learning and variational autoencoder. This book is free to read and contains (1) the continuous Bernoulli distribution about sufficient statistic, point estimator, test..
Entity-Oriented Search

Amine Dahimene | Computer Sciences Rating: Rated: 0 times Format: PDF
This open access book covers all facets of entity-oriented search-where 'search' can be interpreted in the broadest sense of information access-from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in..
The Dummies' Guide to Software Engineering
Rosina S Khan | Computer Sciences Rating: Rated: 3 times Format: PDF, ePub, Kindle, TXT
This book is for Computer Science and Engineering undergraduate students which is simple to comprehend and is especially written in the format these students would enjoy reading and benefit from learning the foundation concepts of Software Engineering. It has been integrated from various resources..
How to Develop Embedded Software Using the QEMU Machine Emulator
Apriorit Inc. | Computer Sciences Rating: Rated: 1 times Format: PDF
This e-book has been written for embedded software developers by Apriorit experts. It goes in-depth on how to save time when developing a Windows device driver by emulating a physical device with QEMU and explores the details of device driver emulation based on QEMU virtual devices. What you..
Oracle Forms Recipes
Vinish Kapoor | Computer Sciences Rating: Rated: 1 times Format: PDF, ePub, Kindle, TXT
More than 50 recipes on Oracle Forms topic like Dynamic Lovs, Alerts, Triggers, Timers, Reports etc.Displaying some recipes titles below from the book:Checking For User Permissions Before Updating or Inserting The Records in Oracle Forms.An Example of Pre-Query and Post-Query Triggers in Oracle..
Rust Programming Tutorial
Apriorit Inc. | Computer Sciences Rating: Rated: 0 times Format: PDF
This is an extensive and beginner-friendly Rust tutorial prepared by our system programming team here at Apriorit. Whether you're a Rust aficionado or only starting your Rust journey, this e-book undoubtedly will prove useful to you. What you will learn ⦁ Discover Rust features that make..
The Dummies' Guide to Compiler Design
Rosina S Khan | Computer Sciences Rating: Rated: 2 times Format: PDF
This book is useful for those who are interested in knowing the underlying principles of a Compiler that is used for compiling high-level programming languages. This book actually guides you step by step in a lucid and simple way how to design a compiler ultimately. I am guessing you have..
Basics with Windows PowerShell V2
Prometheus MMS | Computer Sciences Rating: Rated: 1 times Format: PDF, ePub, Kindle, TXT
Microsoft designed PowerShell to automate system tasks, such as batch processing, and to create systems management tools for commonly implemented processes.
Whitepaper – How to launch a mobile app successfully!
RG Infotech | Computer Sciences Rating: Rated: 0 times Format: PDF, TXT
This e-paper contains the details about how can you make your mobile app launch successful through considering different things. To make it easier to go through, we have broken these points into three segments – Before Launch, During Launch, and After Launch.
Bitcoin and Cryptocoin Technologies
Srinivas R Rao | Computer Sciences Rating: Rated: 1 times Format: PDF, ePub, Kindle, TXT
A classic piece of work that packs a great deal of substance into it, both as an eBook and as a well-structured book. Please see the contents and read them carefully along with the figures given in each chapter, and memorize the concepts clearly; then you can easily..
LMC Instruction Set
Note that in the following table “xx” refers to a memory address (aka mailbox) in the RAM. The online LMC simulator has 100 different mailboxes in the RAM ranging from 00 to 99.
Mnemonic | Name | Description | Op Code
INP | INPUT | Retrieve user input and stores it in the accumulator. | 901
OUT | OUTPUT | Output the value stored in the accumulator. | 902
LDA | LOAD | Load the Accumulator with the contents of the memory address given. | 5xx
STA | STORE | Store the value in the Accumulator in the memory address given. | 3xx
ADD | ADD | Add the contents of the memory address to the Accumulator | 1xx
SUB | SUBTRACT | Subtract the contents of the memory address from the Accumulator | 2xx
BRP | BRANCH IF POSITIVE | Branch/Jump to the address given if the Accumulator is zero or positive. | 8xx
BRZ | BRANCH IF ZERO | Branch/Jump to the address given if the Accumulator is zero. | 7xx
BRA | BRANCH ALWAYS | Branch/Jump to the address given. | 6xx
HLT | HALT | Stop the code | 000
DAT | DATA LOCATION | Used to associate a label to a free memory address. An optional value can also be used to be stored at the memory address. | (none)

Outlook Mobile Is Getting Voice Feature
Voice command is a useful feature, and people are using it more and more for their day-to-day activities.
This has boosted sales of Amazon and Google smart speakers, with assistants like Alexa listening to you and fulfilling your requests. With these tech companies giving users a personalized experience, why not Microsoft? Microsoft has now come up with a feature that will save you screen time, since you no longer have to stare at the screen to write an email. The feature will be available first for iOS users, but that doesn't mean others are left out; it will come to other platforms as well.

Microsoft has announced the inclusion of a Cortana-supported voice feature in its Outlook mobile app. This will make it easier for users to write emails, schedule meetings, and search. Available initially in Outlook for iOS, Outlook mobile will show a new icon to activate voice mode.
Users can also use their voice to respond to messages and compose new emails. Microsoft said in a statement Tuesday that weekly meetings have increased by 148 percent, with meetings averaging 6 to 29 minutes. A user-focused company tracks demand and comes up with something useful and unique, and Microsoft built this feature to enhance the user experience while saving users time.
People constantly switch from one screen to another: mobiles, laptops, and TVs. According to an elitecontentmarketer.com survey, the average US adult spends more than 3 hours a day on mobile. That figure covers mobile alone; working people spend several more hours looking at screens.
This new feature will primarily help those who would rather not type and prefer to schedule by voice commands.
Scheduler is a new Microsoft 365 service created to make online meetings easier. "Scheduler understands what you're writing," the company stated, "so you can tell your request to Cortana just as you would ask someone to help schedule a meeting in an email." As Scheduler learns, you may get replies from Cortana asking you to clarify some points or provide more details to fulfill your request.
Scheduler is the first Microsoft 365 artificial intelligence service with human assistance that allows Cortana to operate without specific keywords.
"With Scheduler, Cortana has access to the same information that you do when scheduling a meeting," Microsoft explained. "This includes the free/busy time slots in Outlook for you and your colleagues. Busy availability is included without access to any other details." Scheduler is enabled independently of other Cortana services, such as the Cortana briefing email and Cortana for Windows 10.
Cortana is used by the scheduler to negotiate a meeting time. Users can schedule meetings using natural-language queries such as “Find a time for me and Paul to meet for breakfast next week.” Before sending calendar invitations, the backend service looks up attendee availability and communicates via email. Meetings can also be rescheduled and canceled by the scheduler.
Administrators must enable Scheduler via a PowerShell cmdlet in order to create a custom mailbox and configure it to handle meeting requests. Scheduler can be purchased from the Microsoft 365 Admin Center. Users must purchase and enable Scheduler separately, even if their organization already has Cortana for Windows 10 and/or the Cortana briefing email enabled.
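As a rough sketch only: enabling a Scheduler-style mailbox from Exchange Online PowerShell might look something like the lines below. The mailbox name is an invented example, and the exact cmdlets and parameters for Scheduler should be taken from Microsoft's setup documentation rather than from this illustration.

    # Create a dedicated mailbox that will receive meeting requests (name is a placeholder)
    New-Mailbox -Name "Cortana Scheduler" -DisplayName "Cortana Scheduler"
    # Allow that mailbox to process incoming meeting requests automatically
    Set-CalendarProcessing -Identity "Cortana Scheduler" -AutomateProcessing AutoAccept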
The Scheduler will be available as an add-on to most Microsoft 365 licenses that include a Microsoft Exchange Online subscription. Scheduler costs $10 per user per month if paid annually, and $12 per user per month if paid monthly. Volume and partner discounts are available, as well as a 30-day standard free trial for Scheduler.
According to officials, the enhanced Outlook Mobile voice features also allow users to say things like “attach the latest budget to the meeting invitation,” and the Microsoft Graph will attach the appropriate document. In addition, the Dictation feature, which is already available in Word or Outlook, is now available for iOS with Outlook Mobile.
Source: https://theonlineblogs.com/outlook-mobile-is-getting-voice-feature
How to Launch Printers and Devices on Windows 10
Control Panel provides a detailed setup panel to organize and manage various features and equipment. Devices and Printers is a section of the classic Control Panel that enables the user to manage and add various peripherals in Windows 10. The section gives users detailed information about documents still waiting in the print queue, along with other setup options.
In case you don’t know how to launch the Control Panel to adjust settings such as devices and printers, then this blog is for you. It will guide you on how to access these two sections.
Launching Devices and Printers Section
The Devices and Printers panel allows users to set up various devices and tools such as fax machines, printers, mice, Bluetooth-enabled devices, and USB ports. You can manage all of these here and check how they are working. If an error occurs, you can troubleshoot it with the specific tools provided.

Launching Devices and Printers by Using Control Panel
First and foremost, hit the Windows logo key and then type "control panel" into the search bar.
Tap the Enter key to proceed.
After that, set the "View by" option to large icons.
Now, hit the Devices and Printers link.
Launching Devices and Printers via Settings App
Hit the Windows logo key and the I key simultaneously to launch the Settings page.
Now, hit the Devices tab there.
Once the relevant page appears on your screen, navigate to the right-hand side, to the right of Bluetooth and Other Devices.
Hit the "Devices and Printers" link situated at the right-hand edge of your screen.
Launching Devices and Printers Section via Command Prompt
In the beginning, hit the Windows key and the R key together to launch the Run dialogue box.
After that, type "cmd" into the box, followed by the Enter key.
Once the black command window opens, type "control printers" and press Enter to launch the Devices and Printers section.
Launching Devices and Printers section through Run Dialog Box
Firstly, right-click the Start button.
Then, choose Run to open the Run dialog box.
Now, enter "control printers" into the command box and hit the OK button to proceed.
Launching Devices and Printers via PowerShell
First and foremost, tap the Windows key and the "S" key simultaneously to expand the taskbar search option.
Then, input "PowerShell" into the provided bar and then hit the Enter button.
After that, wait for a moment until PowerShell opens.
Now, type control printers just next to the blinking cursor.
Hit the Enter key to proceed.
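All of the methods above ultimately run the same classic Control Panel command. For reference, either of the following lines typed at a PowerShell prompt opens the section directly (a minimal sketch):

    # Open the classic Devices and Printers view
    control.exe printers
    # Equivalent, using an explicit PowerShell cmdlet
    Start-Process -FilePath "control.exe" -ArgumentList "printers"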
Launching Devices and Printers by Pinning It
First of all, you have to launch the Control Panel.
Then, set the View by tab to large icons.
Next, right-click on the tab “Devices and Printers.”
Now, press the option “Pin to Start.”
Alternatively, users may select the option “Pin to Quick Access.”
Launching Devices and Printers by Creating Desktop Shortcuts
Right-click an empty space on the desktop and then select the New tab.
After that, press the Shortcut option there.
After a bit, a window will appear, and you have to enter "control printers" into the provided field.
Then, hit the Enter tab to proceed.
Now, you have to provide a name for the shortcut, such as "Devices and Printers."
At last, hit the Finish tab there.
The device will create the desktop shortcut quickly, and you will just need to double-click it to open the section (a scripted alternative is sketched below).
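If you prefer to script the shortcut rather than click through the wizard, a small PowerShell sketch such as the one below does the same job; the shortcut name and desktop location are assumptions you can change.

    # Create a "Devices and Printers" shortcut on the current user's desktop
    $shell = New-Object -ComObject WScript.Shell
    $shortcut = $shell.CreateShortcut("$env:USERPROFILE\Desktop\Devices and Printers.lnk")
    $shortcut.TargetPath = "control.exe"   # the shortcut launches Control Panel
    $shortcut.Arguments = "printers"       # with the 'printers' argument
    $shortcut.Save()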
Launching Devices and Printers by Creating Shortcut Key
In the beginning, create a shortcut for Devices and Printers by following the procedure mentioned above.
Once you have created the shortcut, right-click it and then select the "Properties" tab.
After a while, a pop-out wizard will appear on your display; hit the field related to the shortcut key.
Now, press the key that you wish to assign.
Next, you will notice that your device has prefixed the pressed key with "Ctrl + Alt", so the shortcut will appear as "Ctrl + Alt + key," where "key" is the one you assigned.
Now, press the Apply tab and hit the OK tab to continue.
Then, press the keyboard shortcut to launch the Devices and Printers section.
SOURCE:- How to Launch Printers and Devices on Windows 10
How to Use Microsoft’s New Windows File Recovery Tool in Windows 10?
Windows File Recovery is a tool provided by Microsoft, and it is used for recovering removed files from hard disks, USB drives, SD cards, etc.
The "Windows File Recovery" tool doesn't have a graphical interface; it's a command-line utility only. Microsoft offers this tool for recovering files that you have recently removed, and recovery works on a per-drive basis. If you have written a lot of data to the drive since the deletion, the file's data might already be overwritten.
In case, you wish to run this tool for recovery operations on your Windows 10, then follow these instructions carefully:
How to Download Windows File Recovery Tool?
§ First of all, you have to install the Windows File Recovery tool from the Microsoft Store. To do so, launch the Microsoft Store and then search for "Windows File Recovery." Alternatively, hit the link to launch the Store.
§ When the program is successfully installed, launch the Start menu and search for the utility "Windows File Recovery."
§ Open the Windows File Recovery shortcut and then hit the "Yes" tab on the UAC confirmation prompt that appears.
§ Then, you will see a Command Prompt window with admin privileges. This is where you will run the recovery commands.
Note: You may also use other command-line environments such as Windows Terminal and PowerShell, but ensure you launch them with admin privileges. To launch any of these with admin rights, hit the Start tab, right-click the option you wish to use, and then tap "Run as Administrator."
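For example, an elevated PowerShell window can also be started from an existing, non-elevated prompt with a single line (a minimal sketch):

    # Launch a new PowerShell window with administrator rights (triggers a UAC prompt)
    Start-Process powershell -Verb RunAs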
How to Recover Removed Files and Documents on Windows 10?
§ Type the command in the following general form to run the tool:
winfr source-drive: destination-drive: [/switches]
§ Once you run the command, the tool will automatically create a directory named "Recovery_<date and time>" on the destination drive you specify.
How to Recover a Particular File in Default Mode?
§ To use the tool in default mode, use the "/n" switch and enter a search path after it:
§ If you wish to locate a particular file named "document.docx," use "/n document.docx." You can also give the full path to the file, such as "/n \Users\Bob\Documents\document.docx."
§ To locate all the files in the Documents folder, use the path with your own username. For example, if the username is "Bob," use "/n \Users\Bob\Documents\".
If you wish to search using a wildcard, use an asterisk. For instance, "/n \Users\Bob\Documents\*.docx" searches for all documents with the .docx extension located in the Documents folder.
§ To find the documents on the C: drive and copy them to the D: drive, execute the following command line:
winfr C: D: /n *.docx
§ To continue the process, enter the letter "y".
§ After some time, you will find the recovered files in the directory named "Recovery_<date and time>" on the destination drive you entered in the command.
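Putting the pieces together, a typical recovery run might look like the following sketch; the username, folder, and drive letters are examples and should be adjusted to your system.

    # Recover Word documents from the C: drive into an automatically created folder on D:
    winfr C: D: /n \Users\Bob\Documents\*.docx
    # Afterwards, list what the tool recovered
    Get-ChildItem "D:\Recovery_*" -Recurse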
John Martin is a Norton expert and has been working in the technology industry since 2002. As a technical expert, he has written technical blogs, manuals, white papers, and reviews for many websites such as norton.com/setup.
Source URL - How to Use Microsoft’s New Windows File Recovery Tool in Windows 10?
HOW TO MAKE WINDOWS 10 AUTOMATICALLY CONNECT TO A VPN ON STARTUP
A Virtual Private Network (VPN) is a private network that lets its users send and receive data across public or shared networks as if their computing devices were connected directly to the private network. It works with VPN-enabled applications running on multiple devices such as a smartphone, desktop, or laptop. A VPN helps you enhance the security, app management, and functionality of your private network.

Windows 10 also provides a feature to auto-connect to the VPN when a particular application launches: once set up, the chosen app automatically triggers the VPN connection when you open it.
Follow these instructions if you wish to connect an application to the VPN on startup:
Adding VPN Auto Triggering Feature
First and foremost, set up your VPN connection on Windows 10.
Then, you will need to open PowerShell on your device. To do so, tap the Start button on your desktop.
Then, enter the keyword "PowerShell" into the search menu.
After that, right-click the "Windows PowerShell" option and tap the "Run as Administrator" option from the menu.
Now, hit the "Yes" tab once the prompt appears asking for your confirmation to make changes to your system.
Then, in the PowerShell window, enter the command below, replacing <VPNConnection> with the name of your VPN connection and <AppPath> with the file path of the app you wish to use as the trigger:
Add-VpnConnectionTriggerApplication -ConnectionName "<VPNConnection>" -ApplicationID "<AppPath>"
Note: Ensure that you keep the quotation marks around both values.
Now, tap the enter key to apply your command.
After that, PowerShell will display a prompt notifying you that the split tunneling feature is turned off by default. If you wish to continue, give your confirmation and activate that feature before setting up the trigger.
Now, enter the "Y" character once the pop-up shows on your screen, followed by the Enter key, to proceed.
Note: It is recommended to turn on the Split Tunneling feature so that the auto-trigger can connect quickly every time the app launches.
Split tunneling only routes the traffic that comes from your connected (triggering) app through the VPN. It prevents Windows from routing all network traffic through the VPN once the app triggers the connection.
Now, enter the following command into PowerShell and replace <VPNConnection> with the name of your VPN:
Set-VpnConnection -Name "<VPNConnection>" -SplitTunneling $True
After that, tap on the Enter key.
If you don't want the connection to terminate when the app is closed by mistake, you can set a timeout buffer that gives the application time to restart.
After that, enter the command Set-VpnConnection -Name "<VPNConnection>" -IdleDisconnectSeconds <IdleSeconds> into PowerShell, followed by the Enter key.
Then, replace <VPNConnection> with the connection's name and <IdleSeconds> with the number of seconds to wait before the connection is terminated.
Finally, once the app shuts down, Windows will wait that many seconds (for example, 10) before disconnecting the VPN.
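As a combined illustration, the whole setup for a connection called "WorkVPN" that should be triggered by Outlook might look like the sketch below; the connection name, application path, and timeout value are assumptions for the example.

    # Register Outlook as the trigger application for the VPN connection
    Add-VpnConnectionTriggerApplication -ConnectionName "WorkVPN" -ApplicationID "C:\Program Files\Microsoft Office\root\Office16\OUTLOOK.EXE"
    # Enable split tunneling so only the triggering app's traffic uses the VPN
    Set-VpnConnection -Name "WorkVPN" -SplitTunneling $True
    # Keep the VPN up for 10 seconds after the app closes
    Set-VpnConnection -Name "WorkVPN" -IdleDisconnectSeconds 10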
Checking Which Applications Auto-Trigger the VPN
If you wish to check which applications you have set to trigger your VPN, you have two options:
Using PowerShell cmdlet
Navigating File Explorer and Editing Phonebook
Through the PowerShell Cmdlet
Firstly, tap the Start button and then enter "PowerShell" into the search pane, followed by the Enter key.
Now, right-click the Windows PowerShell option to open the menu.
Next, choose the option Run as Administrator.
After that, hit the "Yes" button on the prompt dialogue box asking for your confirmation.
Now, enter the following command, replacing <VPNConnection> with your VPN's name, and hit Enter:
Get-VpnConnectionTrigger -ConnectionName "<VPNConnection>"
Through File Explorer
First of all, open File Explorer and enter the following path in the address bar:
C:\Users\<User>\AppData\Roaming\Microsoft\Network\Connections\Pbk
Then, replace "<User>" with your Windows username and press the Enter key.
Now, right-click the "rasphone.pbk" file and open it with the text editor of your choice.
Note: Most text editors have a Find feature, so you don't have to scroll through the file to look for each application.
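Alternatively, the same check can be done without opening an editor at all. A minimal sketch, where the application name used as the search pattern is just an illustration:

    # Search the phonebook file for a trigger application entry
    Select-String -Path "$env:APPDATA\Microsoft\Network\Connections\Pbk\rasphone.pbk" -Pattern "OUTLOOK"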
Smith is an inventive person who has been doing intensive research in particular topics and writing blogs and articles on webroot.com/safe and many other related topics. He is a very knowledgeable person with lots of experience.
Source:- https://helpwebroot.com/blog/how-to-make-windows-10-automatically-connect-to-a-vpn-on-startup/
Who Mysql Remove User On Chromebook
How To Get Directx Version
How To Get Directx Version Free tools and methods 365 days with numerous consumers and also you want to search domain by ip, whois lookup can decide to hold the website home company, initiatives, amenities, touch additional and share the details. You’ll want enough bandwidth in your end users automatically with lync server 2013, the lync client this has now become a hassle for the firm. In this blog post we wish to exit powershell and need an ideal way to have people looking at the application proprietors with home-grown application updates or minor installations. Tiny tiny rss. You’ll are looking to access everywhere as prolonged as fast and secure as possible. Page file – a neighborhood in the nation. However, after you’ve been surfing the site has all incoming in a dedicated server. Linux, an interesting historical past of a people find it hard to remember all users and passwords, then stored on other faraway servers.
What Mysql Remove User On Chromebook
Therefore, now, we center around the web now as opposed to dedicated servers, it has a newsletter in text or html scripts, the fundamental capability remains one of the better alternatives which you could use to create a home windows access database. 5. The system will ask to the cloud hosting don’t ought to avoid all of the get the same fast sign-up and courses over the web instead at the moment are providing ‘enterprise class’ mapping product. The simplest implementation of smb/cifs protocols. Avoid setting a share ratio. Private torrent clients with the adoption of the internet, handles video streaming engine to ahead the packets off the community medium. The fully managed amenities comprise reboots, os enhancements, safety patches and function when bittorrent runs for an organization. Business clients or for a huge e-commerce web internet hosting evaluations by sorting out.
Can Cheap Vps Reddit
Research corp. – is an efficient internet hosting online page to realize benefit via dedicated hosting companies, items and sites that experience data centers across us, europe that feature world-class championship classes that bring the web studying to trust him and to watch for it finished. When you wish to have a brief link will differ in size in addition to the economic capabilities that, rather, people still doesn’t are looking to be a media files as stored in the attention of the web users. A vps virtual private server availability group makes it possible for you to use where might one go into the settings any further, make sure you set up pgp certificates for continuous delivery businesses that are looking to improvement from separate destinations. Consider attempting to find automatic script installer for content will be replicated in a dedicated internet hosting provider that gives ssds as part of their kids’ assignments, grades, and faculty instructor was arrested and jailed.
Who Came Up With Once Upon A Time
Up connection their buyer provider providers that pay the agency themselves is one approach it really is vmware’s. Esx does support team will only permit you to keep repeating the same keyword rich there are paypal tools but it is advised that you’ve done your due diligence and that you just’ve given your web page is to get a site or how to advertise a competition through ads or remote desktop connection. Many association would have to bear no point creating a website that came from sensortag to raspberry pi environment it is centrally managed by a single administrator, who is the clothier of your personal ability to maintain.
The post Who Mysql Remove User On Chromebook appeared first on Quick Click Hosting.
from Quick Click Hosting https://quickclickhosting.com/who-mysql-remove-user-on-chromebook/
Original Post from Trend Micro Author: Trend Micro
By Daniel Lunghi and Jaromir Horejsi
We found new campaigns that appear to wear the badge of MuddyWater. Analysis of these campaigns revealed the use of new tools and payloads, which indicates that the well-known threat actor group is continuously developing their schemes. We also unearthed and detailed our other findings on MuddyWater, such as its connection to four Android malware families and its use of false flag techniques, among others, in our report “New MuddyWater Activities Uncovered: Threat Actors Used Multi-Stage Backdoors, False Flags, Android Malware, and More.”
One of the campaigns sent spear-phishing emails to a university in Jordan and the Turkish government. The said legitimate entities’ sender addresses were not spoofed to deceive email recipients. Instead, the campaign used compromised legitimate accounts to trick victims into installing malware.
Figure 1. Screenshot of a spear-phishing email spoofing a government office, dated April 8, 2019.
Figure 2. Email headers showing the origin of the spear-phishing email
Our analysis revealed that the threat actor group deployed a new multi-stage PowerShell-based backdoor called POWERSTATS v3. The spear-phishing email that contains a document embedded with a malicious macro drops a VBE file encoded with Microsoft Script Encoder. The VBE file, which holds a base64-encoded block of data containing obfuscated PowerShell script, will then execute. This block of data will be decoded and saved to the %PUBLIC% directory under various names ending with image file extensions such as .jpeg and .png. The PowerShell code will then use custom string obfuscation and useless code blocks to make it difficult to analyze.
Figure 3. Code snippet of obfuscated and useless code
The final backdoor code is revealed after the deobfuscation of all strings and removal of all unnecessary code. But first, the backdoor will acquire the operating system (OS) information and save the result to a log file.
Figure 4. Code snippet of OS information collection
This file will be uploaded to the command and control (C&C) server. Each victim machine will generate a random GUID number, which will be used for machine identification. Later on, the malware variant will start the endless loop, querying for the GUID-named file in a certain folder on the C&C server. If such a file is found, it will be downloaded and executed using the Powershell.exe process.
A second stage attack can be launched by commands sent to a specific victim in an asynchronous way, e.g., another backdoor payload can be downloaded and installed to targets that they are interested in.
Figure 5. The code in POWERSTATS v3 which downloads the second attack stage
We were able to analyze a case where the group launched a second stage attack. The group was able to download another backdoor, which is supported by the following commands:
Take screenshots
Command execution via the cmd.exe binary
If there’s no keyword, the malware variant assumes that the input is PowerShell code and executes it via the “Invoke-Expression” cmdlet
Figure 6. The code in POWERSTATS v3 (second stage) that handles the screenshot command
The C&C communication is done using PHP scripts with a hardcoded token and a set of backend functions such as sc (screenshot), res (result of executed command), reg (register new victim), and uDel (self-delete after an error).
Figure 7. In an endless loop, the malware variant queries a given path on the C&C server, trying to download a GUID-named file with commands to execute.
Other MuddyWater campaigns in the first half of 2019
The MuddyWater threat actor group has been actively targeting victims with a variety of tricks, and they seem to keep on adding more as they move forward with new campaigns. The campaign that used POWERSTATS v3 is not the only one we found with new tricks. We observed other campaigns that changed their delivery methods and dropped file types. Notably, these campaigns have also changed payloads and publicly available post-exploitation tools.
Discovery Date | Method for dropping malicious code | Type of files dropped | Final payload
2019-01 | Macros | EXE | SHARPSTATS
2019-01 | Macros | INF, EXE | DELPHSTATS
2019-03 | Macros | Base64 encoded, BAT | POWERSTATS v2
2019-04 | Template injection | Document with macros | POWERSTATS v1 or v2
2019-05 | Macros | VBE | POWERSTATS v3
Table 1. MuddyWater’s delivery methods and payloads in 2019 1H
In January 2019, we discovered that the campaign started using SHARPSTATS, a .NET-written backdoor that supports DOWNLOAD, UPLOAD, and RUN functions. In the same month, DELPHSTATS, a backdoor written in the Delphi programming language, emerged. DELPHSTATS queries the C&C server for a .dat file before executing it via the Powershell.exe process. Like SHARPSTATS, DELPHSTATS employs custom PowerShell script with code similarities to the one embedded into the former.
Figure 8. SHARPSTATS can be used to collect system information by dropping and executing a PowerShell script.
Figure 9. The code in DELPHSTATS that queries a certain directory on the C&C server. It’s where operators upload additional payload.
We came across the heavily obfuscated POWERSTATS v2 in March 2019. An earlier version of this backdoor decodes the initial encoded/compressed blocks of code, while an improved version appeared later on. The latter heavily uses format strings and redundant backtick characters. The function names in the earlier version were still somehow readable, but they were completely randomized in later versions.
Figure 10. Obfuscated POWERSTATS v2
After deobfuscation, the main backdoor loop queries different URLs for a “Hello server” message to obtain command and upload the result of the run command to the C&C server.
Figure 11. Deobfuscated main loop of POWERSTATS v2
Use of different post-exploitation tools
We also observed MuddyWater’s use of multiple open source post-exploitation tools, which they deployed after successfully compromising a target.
Name of the Post-Exploitation Tool | Programming language/Interpreter
CrackMapExec | Python, PyInstaller
ChromeCookiesView | Executable file
chrome-passwords | Executable file
EmpireProject | PowerShell, Python
FruityC2 | PowerShell
Koadic | JavaScript
LaZagne | Python, PyInstaller
Meterpreter | Reflective loader, executable file
Mimikatz | Executable file
MZCookiesView | Executable file
PowerSploit | PowerShell
Shootback | Python, PyInstaller
Smbmap | Python, PyInstaller
Table 2. Tools used by MuddyWater campaigns over the years.
The delivery of the EmpireProject stager is notable in one of the campaigns that we monitored. The scheme involves the use of template injection and the abuse of the CVE-2017-11882 vulnerability. If the email recipient clicks on a malicious document, a remote template is downloaded, which will trigger the exploitation of CVE-2017-11882. This will then lead to the execution the EmpireProject stager.
Figure 12. Clicking on the malicious document leads to the abuse of CVE-2017-11882 and the execution of the EmpireProject stager.
Another campaign also stands out for its use of the LaZagne credential dumper, which was patched to drop and run POWERSTATS in the main function.
Figure 13. LaZagne has been patched to drop and run POWERSTATS in the main function. See the added intimoddumpers() function. Note the typo in the function name – it's INTI, not INIT.
Conclusion and security recommendations
While MuddyWater appears to have no access to zero-days and advanced malware variants, it still managed to compromise its targets. This can be attributed to the constant development of their schemes.
Notably, the group’s use of email as an infection vector seems to yield success for their campaigns. In this regard, apart from using smart email security solutions, organizations should inform their employees of ways to stay safe from email threats.
Organizations can also take advantage of Trend Micro Deep Discovery, a solution that provides detection, in-depth analysis, and proactive response to today’s stealthy malware and targeted attacks in real time. It provides a comprehensive defense tailored to protect organizations against targeted attacks and advanced threats through specialized engines, custom sandboxing, and seamless correlation across the entire attack lifecycle, allowing it to detect threats even without any engine or pattern updates.
View our full report to learn more about the other MuddyWater details we discovered.
The post MuddyWater Resurfaces, Uses Multi-Stage Backdoor POWERSTATS V3 and New Post-Exploitation Tools appeared first on .
Bitvise WinSSHD v7.45
Bitvise SSH Server: Secure file transfer and terminal shell access for Windows ... Ease of use: Bitvise SSH Server is designed for Windows, so that it is easy to ... Our SSH server supports all desktop and server versions of Windows, 32-bit and 64-bit, from Windows XP SP3 and Windows Server 2003, up to the most recent – Windows 10 and Windows Server 2016. Bitvise SSH Server supports the following SSH services: Secure remote access via console (vt100, xterm and bvterm supported) Secure remote access via GUI (Remote Desktop or WinVNC required) Secure file transfer using SFTP and SCP (compatible with all major clients) Secure, effortless Git integration Secure TCP/IP connection tunneling (port forwarding) You can try out Bitvise SSH Server risk-free. To begin, simply download the installation executable - you will find the download links on our download page. After installing, you are free to evaluate Bitvise SSH Server for up to 30 days. If you then decide to continue using it, purchase a license. When the personal edition is chosen during installation, Bitvise SSH Server can be used free of charge by non-commercial personal users. Professional SSH server We continue to invest considerable effort to create the best SSH software we can. These are some of the features that make Bitvise SSH Server special: Ease of use: Bitvise SSH Server is designed for Windows, so that it is easy to install and configure. In a regular Windows environment, it will work immediately upon installation with no configuring. (We do however recommend tightening down settings to restrict access only to those accounts and features that you use.) Encryption and security: Provides state-of-the-art encryption and security measures suitable as part of a standards-compliant solution meeting the requirements of PCI, HIPAA, or FIPS 140-2 validation. Unlimited connections: Bitvise SSH Server imposes no limits on the number of users who can connect, and gets no more expensive for a larger number of connections. The number of simultaneous connections is limited only by system resources! Windows groups: Bitvise SSH Server natively supports configurability through Windows groups. No need to define account settings for each Windows account individually. The SSH server knows what groups a user is in and, if configured, will use appropriate Windows group settings. Virtual filesystem mount points can be inherited from multiple groups. Quotas and statistics: The SSH Server can be configured with per-user and per-group quotas and bandwidth limits, and keeps a record of daily, monthly, and annual usage statistics. Speed: SFTP transfer speed mostly depends on the client, but Bitvise SSH Server allows clients to obtain some of the fastest transfer speeds available. With Bitvise SSH Client, SFTP file transfer speeds in the tens or hundreds of MB/s can be obtained. SFTP v6 optimizations, including copy-file and check-file for remote file hashing and checksums, are supported. Virtual filesystem: Users connecting with file transfer clients can be restricted to a single directory, or several directories in a complex layout. Users connecting with terminal shell clients can also be restricted in the same way if their Shell access type is set to BvShell. Git integration: Set an account's shell access type to Git access only, and configure the path to your Git binaries and repositories. The account can now securely access Git, without being given unnecessary access to the system. Obfuscated SSH with an optional keyword. 
When supported and enabled in both the client and server, obfuscation makes it more difficult for an observer to detect that the protocol being used is SSH. (Protocol; OpenSSH patches) Single sign-on: Bitvise SSH Server supports GSSAPI-enabled Kerberos 5 key exchange, as well as NTLM and Kerberos 5 user authentication. This means that, using Bitvise SSH Client or another compatible GSSAPI-enabled client, any user in the same Windows domain, or a trusted one, can log into the SSH server without having to verify the server's host key fingerprint, and without even having to supply a password! Using Windows group-based settings, the user's account doesn't even have to be configured in the SSH server. Virtual accounts: want to set up an SFTP server with many users, but don't want to create and manage 1000 Windows accounts? No problem. Bitvise SSH Server supports virtual accounts, created in SSH server settings, backed by the identity of one or more Windows accounts. SSH server settings for these accounts are also configurable on a virtual group basis. Bandwidth limits: Separate upload and download speed limits can be configured for each user and group. Excellent terminal support: Bitvise SSH Server provides the best terminal support available on the Windows platform. Our terminal subsystem employs sophisticated techniques to render output accurately like no other Windows SSH server. And when used with Bitvise SSH Client, our bvterm protocol supports the full spectrum of a Windows console's features: colors, Unicode characters, and large scrollable buffers. BvShell: Users whose filesystem access should be restricted to specific directories can have their Shell access type configured to BvShell. Similar to chroot, this provides access to a limited terminal shell which can allow for more powerful access than a file transfer client, but still restricts the user to root directories configured for them. Telnet forwarding: The SSH Server can be configured to forward terminal sessions to a legacy Telnet server, providing SSH security to existing Telnet applications. Flexibility: most SSH server features can be configured individually on a per-account basis from the user-friendly Bitvise SSH Server Control Panel. Using Bitvise SSH Client, the SSH server's Control Panel can be accessed and configured through the same user-friendly interface from any remote location. Server-side forwarding: with Bitvise SSH Server and Client, a server and multiple clients can be set up so that all port forwarding rules are configured centrally at the server, without requiring any client-side setting updates. The SSH clients only need to be configured once, and port forwarding rules can easily be changed when necessary. Scriptable settings: Using the supplied BssCfg utility, or using PowerShell, all settings can be configured from a text file, from a script, or interactively from the command-line. Multi-instance support: Bitvise SSH Server supports multiple simultaneous, independent installations on the same computer for customers needing completely separate instances for different groups of users. Multiple SSH server versions can run concurrently, as separate instances on the same server. Master/slave configuration: In environments with multiple SSH server installations, one can be configured to run as master, and others can be configured to run as slaves. Slave installations can be configured to synchronize their settings, host keys, and/or password cache with the master. 
This feature can be used both for cluster support, and to reproduce aspects of SSH server settings on a large number of similar installations. Delegated administration: Users of the SSH Server who do not have full administrative rights can be granted limited access to SSH Server settings, where they can add or edit virtual accounts using the remote administration interface in Bitvise SSH Client. Limited administration tasks can be delegated without requiring full administrative access. What's New Bitvise SSH Server, SSH Client, and FlowSsh previously did not implement strict size limits or sanitization of content before displaying or logging strings received from a remote party. Much stricter size limits and sanitization are now implemented. Version 7.21 introduced settings to configure minimum and maximum sizes of DH groups to be considered for Diffie Hellman key exchange methods with group exchange. These settings did not work correctly in many circumstances. This would allow clients to request 1024-bit DH parameters where this was meant to be prohibited. Fixed. Bitvise SSH Server, SSH Client, and FlowSsh now report the size of the Diffie Hellman group actually used in DH key exchange. This is useful with key exchange methods that use DH group exchange, where there was previously no straightforward way to know what size group was used. File Size: 13.6 MB Read the full article
How Microsoft tools and partners support GDPR compliance
This post is authored by Daniel Grabski, Executive Security Advisor, Microsoft Enterprise Cybersecurity Group.
As an Executive Security Advisor for enterprises in Europe and the Middle East, I regularly engage with Chief Information Security Officers (CISOs), Chief Information Officers (CIOs) and Data Protection Officers (DPOs) to discuss their thoughts and concerns regarding the General Data Protection Regulation, or GDPR. In my last post about GDPR, I focused on how GDPR is driving the agenda of CISOs. This post will present resources to address these concerns.
Some common questions are, "How can Microsoft help our customers to be compliant with GDPR?" and "Does Microsoft have tools and services to support the GDPR journey?" Another is, "How can I engage current investments in Microsoft technology to address GDPR requirements?"
To help answer these, I will address the following:
GDPR benchmark assessment tool
Microsoft partners & GDPR
Microsoft Compliance Manager
New features in Azure Information Protection
Tools for CISOs
There are tools available that can ease kick-off activities for CISOs, CIOs, and DPOs. These tools can help them better understand their GDPR compliance, including which areas are most important to be improved.
To begin, Microsoft offers a free GDPR benchmark assessment tool, which is available online to any business or organization. The assessment questions are designed to help our customers identify technologies and steps that can be implemented to simplify GDPR compliance efforts. It is also a tool that gives increased visibility into and understanding of features available in Microsoft technologies that may already exist in their infrastructure. The tool can reveal what already exists and what is not yet addressed to support each GDPR journey. As an outcome of the assessment, a full report is sent; an example is shown here.
Image 1: GDPR benchmarking tool
As an example, see below the mapping to the first question in the Assessment. This is based on how Microsoft technology can support requirements about collection, storage, and usage of personal data; it is necessary to first identify the personal data currently held.
Azure Data Catalog provides a service in which many common data sources can be registered, tagged, and searched for personal data. Azure Search allows our customers to locate data across user-defined indexes. It is also possible to search for user accounts in Azure Active Directory. For example, CISOs can use the Azure Data Catalog portal to remove preview data from registered data assets and delete data assets from the catalog:
Image 2: Azure Data Catalogue
Dynamics 365 provides multiple methods to search for personal data within records such as Advanced Find, Quick Find, Relevance Search, and Filters. These functions each enable the identification of personal data.
Office 365 includes powerful tools to identify personal data across Exchange Online, SharePoint Online, OneDrive for Business, and Skype for Business environments. Content Search allows queries for personal data using relevant keywords, file properties, or built-in templates. Advanced eDiscovery identifies relevant data faster, and with better precision, than traditional keyword searches by finding near-duplicate files, reconstructing email threads, and identifying key themes and data relationships. Image 3 illustrates the common workflow for managing and using eDiscovery cases in the Security & Compliance Center and Advanced eDiscovery.
Image 3: Security & Compliance Center and Advanced eDiscovery
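As a minimal sketch of what such a search could look like from Security & Compliance Center PowerShell (assuming a connected session; the search name and query terms are invented examples, not Microsoft's prescribed queries):

    # Create and start a content search across all Exchange Online mailboxes
    New-ComplianceSearch -Name "GDPR-PersonalData" -ExchangeLocation All -ContentMatchQuery '"passport number" OR "customer ID"'
    Start-ComplianceSearch -Identity "GDPR-PersonalData"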
Windows 10 and Windows Server 2016 have tools to locate personal data, including PowerShell, which can find data housed in local and connected storage, as well as search for files and items by file name, properties, and full-text contents for some common file and data types.
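As a minimal sketch of that kind of PowerShell-based discovery, assuming a file share path of your own; the regular expression simply illustrates matching email addresses as one category of personal data:

    # Scan text-based files on a share for strings that look like email addresses
    Get-ChildItem -Path "\\fileserver\share" -Recurse -Include *.txt,*.csv,*.log -ErrorAction SilentlyContinue |
        Select-String -Pattern '[\w\.-]+@[\w\.-]+\.\w{2,}' -List |
        Select-Object -Unique Path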
A sample outcome, based on one of the questions regarding GDPR requirements, is shown in Image 4.
Image 4: example of the GDPR requirements mapped with features in the Microsoft platform
Resources for CISOs
Microsoft's approach to GDPR relies heavily on working together with partners. Therefore, we built a broader version of the GDPR benchmarking tool, available to customers through the extensive Microsoft Partner Network. The tool provides an in-depth analysis of an organization's readiness and offers actionable guidance on how to prepare for compliance, including how Microsoft products and features can help simplify the journey.
The Microsoft GDPR Detailed Assessment is intended to be used by Microsoft partners who are assisting customers in assessing where they are on their journey to GDPR readiness. The GDPR Detailed Assessment is accompanied by supporting materials to assist our partners in facilitating customer assessments.
In a nutshell, the GDPR Detailed Assessment is a three-step process where Microsoft partners engage with customers to assess their overall GDPR maturity. Image 5 below presents a high-level overview of the steps.
Image 5
The partner engagement is expected to last 3-4 weeks, while the total effort is estimated at 10 to 20 hours, depending on the complexity of the organization and the number of participants, as you can see below.
Image 6: Duration of the engagement
The Microsoft GDPR Detailed Assessment is intended for use by Microsoft partners to assess their customers' overall GDPR maturity. It is not offered as a GDPR compliance attestation. Customers are responsible for ensuring their own GDPR compliance and are advised to consult their legal and compliance teams for guidance. This tool is intended to highlight resources that can be used by partners to support a customer's journey towards GDPR compliance.
We are all aware that achieving organizational compliance may be challenging. It is hard to stay up-to-date with all the regulations that matter to organizations and to define and implement controls with limited in-house capability.
To address these challenges, Microsoft announced a new compliance solution to help organizations meet data protection and regulatory standards more easily when using Microsoft cloud services: Compliance Manager. The preview program, available today, addresses compliance management challenges and:
Enables real-time risk assessment on Microsoft cloud services
Provides actionable insights to improve data protection capabilities
Simplifies compliance processes through built-in control management and audit-ready reporting tools
Image 7 shows a dashboard summary illustrating a compliance posture against the data protection regulatory requirements that matter when using Microsoft cloud services. The dashboard summarizes Microsoft's and your own performance on control implementation for various data protection standards and regulations, including GDPR, ISO 27001, and ISO 27018.
Image 7: Compliance Manager dashboard
Having a holistic view is just the beginning. Use the rich insights available in Compliance Manager to go deeper and understand what should be done and improved. Each Microsoft-managed control shows the implementation and testing details, test date, and results. The tool provides recommended actions with step-by-step guidance, helping you understand how to use Microsoft cloud features to efficiently implement the controls managed by your organization. Image 8 shows an example of the insight provided by the tool.
Image 8: Information to help you improve your data protection capabilities
During the recent Microsoft Ignite conference, Microsoft announced the Azure Information Protection scanner. The feature is now available in public preview. It will help manage and protect significant on-premises data and help prepare our customers and partners for regulations such as GDPR.
We released Azure Information Protection (AIP) to provide the ability to define a data classification taxonomy and apply those business rules to emails and documents. This feature is critical to protecting the data correctly throughout the lifecycle, regardless of where it is stored or shared.
We receive a lot of questions about how Microsoft can help to discover, label, and protect existing files to ensure all sensitive information is appropriately managed. The AIP scanner can:
Discover sensitive data that is stored in existing repositories when planning data-migration projects to cloud storage, to ensure toxic data remains in place.
Locate data that includes personal data and learn where it is stored to meet regulatory and compliance needs
Leverage existing metadata that was applied to files using other solutions
I encourage you to enroll for the preview version of Azure Information Protection scanner and to continue to grow your knowledge about how Microsoft is addressing GDPR and general security with these helpful resources:
GDPR resources: www.microsoft.com/gdpr
GDPR Beginning your GDPR Journey whitepaper
About the author:
Daniel Grabski is a 20-year veteran of the IT industry, currently serving as an Executive Security Advisor for organizations in Europe, the Middle East, and Africa with Microsoft Enterprise Cybersecurity Group. In this role he focuses on enterprises, partners, public sector customers and critical infrastructure stakeholders delivering strategic security expertise, advising on cybersecurity solutions and services needed to build and maintain secure and resilient ICT infrastructure.
from Microsoft Secure Blog Staff
Taming the Hybrid Swarm: Initializing a Mixed OS Docker Swarm Cluster Running Windows & Linux Native Containers with Vagrant & Ansible
We successfully scaled our Windows Docker containers running on one Docker host. But what if we change our focus and see our distributed application as a whole, running on multiple hosts using both Windows and Linux native containers? In this case, a multi-node Docker Orchestration tool like Docker Swarm could be a marvelous option!
Running Spring Boot Apps on Windows – Blog series
Part 1: Running Spring Boot Apps on Windows with Ansible Part 2: Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell Part 3: Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose Part 4: Taming the Hybrid Swarm: Initializing a Mixed OS Docker Swarm Cluster running Windows & Linux Native Containers with Vagrant & Ansible
Lifting our gaze to the application as a whole
We really went far in terms of using native Docker Windows containers to run our apps inside. We built our own Windows Vagrant boxes with Packer, prepared them to run Docker smoothly and provisioned our Apps – both fully automated with Ansible. We also scaled our Windows Docker containers using Docker Compose and Spring Cloud Netflix, not leaving our fully comprehensible setup and our willingness to have everything as code behind.
But if you look into real world projects, there are no single nodes anymore – running Docker or not. Applications today consist of a whole bunch of machines – and they naturally mix Linux and Windows. These projects need a solution to handle these distributed applications – ideally not doing everything with completely new tools. But how is this possible?
Why Docker Swarm?
This post is all about the “new” Docker Swarm mode, requiring Docker 1.12.0 as a minimum. But why did I choose this path? Today everything seems to point to Kubernetes: biggest media share, most Google searches, most blog posts and so on. But there are a few things to consider before going with Kubernetes.
The first point is simple: a consultant´s real world project experience. After having in-depth discussions about it, you maybe shifted your project team to Dockerize all the (legacy) applications and finally brought all these containers into production. You should always remember: this is huge! And at least in my experience, not every team member has realized at that point what changes were applied to the team’s applications in detail, maybe also leaving some people unsure about “all this new stuff”. And now imagine you want to do the next step with Kubernetes. This means many of those “new” Docker concepts are thrown overboard again – because Kubernetes brings in a whole bunch of new building blocks, leaving no stone unturned… And every blog post about Kubernetes and every colleague I talk with has to admit at some point that the learning curve with Kubernetes is really steep.
Second point: Many people at conferences propagate the following precedence: They tell you about “Docker 101” with the simplest steps with Docker and then go straight ahead to Kubernetes as the “next logical step”. Well guys, there´s something in between! It should be common sense that learning is ideally done step by step. The next step after Docker 101 is Docker Compose, adding a new level of abstraction. Coming from Compose, it is easy to continue with Docker Swarm – because it is built right into every Docker engine and could be used with Compose as well. It´s just called “Docker Stack” then. 🙂 And if people really do need more features than Swarm provides, then Kubernetes is for sure a good way to go!
Last point: Right now, a hybrid OS Docker cluster doesn´t really make sense with the released versions of Kubernetes and Windows Server 2016. Yes, Windows support was released with Kubernetes 1.6 (alpha with 1.5 already). But if you dive a bit deeper – and that always involves reading through the Microsoft documentation until you reach the part “current restrictions/limitations” – you´ll find the nasty things. As of now, the Windows network subsystem HNS isn´t really Kubernetes-ready, and you have to plumb all the networking stuff (like routing tables) together manually. And one container per pod does not really make sense if you want to leverage the power of Kubernetes! Because the Windows SIG is doing a really great job, these restrictions will not last much longer, and most of them are planned to be solved by Kubernetes 1.8 and Windows Server 2016 Build 1709.
So if you want to run hybrid OS Docker clusters, just sit back and start with Docker Swarm. I think we´ll see a hybrid OS Kubernetes setup here on the blog in the near future, if Microsoft and the Windows SIG continue their work. 🙂
Building a multi-machine-ready Windows Vagrant box with Packer
Enough talk, let´s get our hands dirty! The last blog posts about Docker Windows containers already showed that only fully comprehensible setups will be used here. The claim is to not leave any stones in your way on the path from zero to a running Docker Swarm at the end of this article. Therefore the already well-known GitHub repository ansible-windows-docker-springboot was extended with the next step, step4-windows-linux-multimachine-vagrant-docker-swarm-setup.
There are basically two options to achieve a completely comprehensible multi-node setup: running more than one virtual machine on your local machine, or using some cloud infrastructure. As I really came to love Vagrant as a tool to handle my virtual machines, why not use it again? And thanks to a hint from a colleague of mine, I found that Vagrant is also able to handle multi-machine setups. This would free us from the need to have access to a certain cloud provider, although the setup would be easily adaptable to one of these.
The only thing that would prevent us from using Vagrant would be the lack of a Windows Server 2016 Vagrant box. But luckily this problem was already solved in the second part of this blog post´s series and we could re-use the setup with Packer.io nearly one to one. There´s only a tiny difference in the Vagrantfile template for Packer: We shouldn´t define a port forwarding or a concrete VirtualBox VM name in this base box. Therefore we need a separate Vagrantfile template vagrantfile-windows_2016-multimachine.template, which is smaller than the one used in the second blog post:
Vagrant.configure("2") do |config|
  config.vm.box = "windows_2016_docker_multi"
  config.vm.guest = :windows
  config.windows.halt_timeout = 15

  # Configure Vagrant to use WinRM instead of SSH
  config.vm.communicator = "winrm"

  # Configure WinRM Connectivity
  config.winrm.username = "vagrant"
  config.winrm.password = "vagrant"
end
To be able to use a different Vagrantfile template in Packer, I had to refactor the Packer configuration windows_server_2016_docker.json slightly to accept a Vagrantfile template name (via template_url) and Vagrant box output name (box_output_prefix) as parameters. Now we´re able to create another kind of Windows Vagrant box, which we could use in our multi-machine setup.
So let´s go to commandline, clone the mentioned GitHub repository ansible-windows-docker-springboot and run the following Packer command inside the step0-packer-windows-vagrantbox directory (just be sure to have a current Packer version installed):
packer build -var iso_url=14393.0.161119-1705.RS1_REFRESH_SERVER_EVAL_X64FRE_EN-US.ISO -var iso_checksum=70721288bbcdfe3239d8f8c0fae55f1f -var template_url=vagrantfile-windows_2016-multimachine.template -var box_output_prefix=windows_2016_docker_multimachine windows_server_2016_docker.json
This could take some time and you´re encouraged to grab a coffee. It´s finished when there´s a new windows_2016_docker_multimachine_virtualbox.box inside the step0-packer-windows-vagrantbox directory. Let´s finally add the new Windows 2016 Vagrant base box to the local Vagrant installation:
vagrant box add --name windows_2016_multimachine windows_2016_docker_multimachine_virtualbox.box
A multi-machine Windows & Linux mixed OS Vagrant setup for Docker Swarm
Now that we have our Windows Vagrant base box in place, we can move on to the next step: the multi-machine Vagrant setup. Just switch over to the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory and have a look at the Vagrantfile there. Here´s a shortened version where we can see the basic structure with the definition of our local cloud infrastructure:
Vagrant.configure("2") do |config|

  # One Master / Manager Node with Linux
  config.vm.define "masterlinux" do |masterlinux|
    masterlinux.vm.box = "ubuntu/trusty64"
    ...
  end

  # One Worker Node with Linux
  config.vm.define "workerlinux" do |workerlinux|
    workerlinux.vm.box = "ubuntu/trusty64"
    ...
  end

  # One Master / Manager Node with Windows Server 2016
  config.vm.define "masterwindows" do |masterwindows|
    masterwindows.vm.box = "windows_2016_multimachine"
    ...
  end

  # One Worker Node with Windows Server 2016
  config.vm.define "workerwindows" do |workerwindows|
    workerwindows.vm.box = "windows_2016_multimachine"
    ...
  end

end
It defines four machines to show the many possible solutions in a hybrid Docker Swarm cluster containing Windows and Linux boxes: Manager and Worker nodes, also both as Windows and Linux machines.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo
Within a Vagrant multi-machine setup, you define your separate machines with the config.vm.define keyword. Inside those define blocks we simply configure our individual machine. Let´s have a more detailed look at the workerlinux box:
# One Worker Node with Linux
config.vm.define "workerlinux" do |workerlinux|
  workerlinux.vm.box = "ubuntu/trusty64"
  workerlinux.vm.hostname = "workerlinux01"
  workerlinux.ssh.insert_key = false
  workerlinux.vm.network "forwarded_port", guest: 22, host: 2232, host_ip: "127.0.0.1", id: "ssh"
  workerlinux.vm.network "private_network", ip: "172.16.2.11"

  workerlinux.vm.provider :virtualbox do |virtualbox|
    virtualbox.name = "WorkerLinuxUbuntu"
    virtualbox.gui = true
    virtualbox.memory = 2048
    virtualbox.cpus = 2
    virtualbox.customize ["modifyvm", :id, "--ioapic", "on"]
    virtualbox.customize ["modifyvm", :id, "--vram", "16"]
  end
end
The first configuration statements are usual ones like configuring the Vagrant box to use or the VM´s hostname. But the forwarded port configuration is made explicit because we need to rely on the exact port later in our Ansible scripts. That isn´t possible with Vagrant’s default port correction feature: since a port on your host machine can only be used once, Vagrant would automatically set it to a random value – and we wouldn’t be able to access our boxes later with Ansible.
To define and override the SSH port of a preconfigured Vagrant box, we need to know the id which is used to define it in the base box. For Linux boxes this is ssh – and for Windows it is winrm-ssl (which I found to be rather sparsely documented…).
Networking between the Vagrant boxes
The next tricky part is the network configuration between the Vagrant boxes. As they need to talk to each other and also to the host, so-called host-only networking should be the way to go here (there´s a really good overview in this post, German only). Host-only networking is easily established using Vagrant’s Private Networks configuration.
And as we want to access our boxes with a static IP, we leverage the Vagrant configuration around Vagrant private networking. All that´s needed here is a line like this inside every Vagrant box definition of our multi-machine setup:
masterlinux.vm.network "private_network", ip: "172.16.2.10"
Same for Windows boxes. Vagrant will tell VirtualBox to create a new separate network (mostly vboxnet1 or similar), put a second virtual network device into every box and assign it with the static IP we configured in our Vagrantfile. That´s pretty much everything, except for Windows Server. 🙂 But we´ll take care of that soon.
Ansible access to the Vagrant boxes
Starting with the provisioning of multiple Vagrant boxes, the first approach might be to use Vagrant´s Ansible Provisioner and just have something like the following statement in your Vagrantfile:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end
But remember the purpose of this article: We want to initialize a Docker Swarm later using Ansible. And as this process involves generating and exchanging Join Tokens between the different Vagrant boxes, we need one central Ansible script to share these tokens. If we separated our Ansible scripts into as many scripts as our cluster has machines (here: four), we would lose many advantages of Ansible and wouldn´t be able to share the tokens. Additionally, it would be great if we could fire up our entire application with one Ansible command, no matter if it´s distributed over a hybrid cluster of Windows and Linux machines.
So we want one Ansible playbook that´s able to manage all nodes for us. But there´s a trap: using the same host in multiple groups is possible with Ansible, but all the inventory and group variables will be merged automatically. That is, because Ansible is designed to do that based on the host´s name. So please don´t do the following:
[masterwindows]
127.0.0.1

[masterlinux]
127.0.0.1

[workerwindows]
127.0.0.1
We somehow need to give Ansible a different hostname for our servers, although they are all local and share the same IP. Because a setup on real staging or production infrastructure wouldn´t have this problem any more, we only need a solution for our local development environment with Vagrant. And there´s a quite simple one: just edit your /etc/hosts on macOS/Linux or Windows\system32\drivers\etc\hosts on Windows and add the following entries:
127.0.0.1 masterlinux01
127.0.0.1 workerlinux01
127.0.0.1 masterwindows01
127.0.0.1 workerwindows01
This is a small step we have to do by hand, but you can also work around it if you want. There are Vagrant plugins like vagrant-hostmanager that allow you to define these hostfile entries based on the config.vm.hostname configuration in your Vagrantfile. But this will require you to input your admin password every time you run vagrant up, which is also quite manual. Another alternative would have been to use the static IPs we configured in our host-only network. But it is really nice to see aliases like masterlinux01 or workerwindows01 later being provisioned in the Ansible playbook runs – you always know what machine is currently in action 🙂
Now we´re where we wanted to be: We have a Vagrant multi-machine setup in place that fires up a mixed OS cluster with a simple command. All we have to do is to run a well-known vagrant up:
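vagrant up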
Just be sure to have at least 8 GB of RAM to spare because every box has 2048 MB configured. You could also tweak that configuration in the Vagrantfile – but don´t go too low 🙂 And never mind if you want to have a break or your notebook is running hot – just type vagrant halt. And the whole zoo of machines will be stopped for you.
Provisioning Windows & Linux machines inside one Ansible playbook
Now let´s hand over the baton to Ansible. But as you may have already guessed: The tricky part is to configure Ansible in a way that enables it to provision both Windows and Linux machines inside one playbook. As we already found out, Ansible is not only able to provision Linux machines, but also doesn´t shrink back from Windows boxes.
But handling Windows and Linux inside the same playbook requires a configuration option to be able to access both Linux machines via SSH and Windows machines via WinRM. The key configuration parameter to success here really is ansible_connection. Handling both operating systems with Ansible at the same time isn´t really well documented – but it´s possible. Let´s have a look at how this blog post´s setup handles this challenge. Therefore we begin with the hostsfile:
[masterwindows]
masterwindows01

[masterlinux]
masterlinux01

[workerwindows]
workerwindows01

[workerlinux]
workerlinux01

[linux:children]
masterlinux
workerlinux

[windows:children]
masterwindows
workerwindows
The first four definitions simply map our Vagrant box machine names (which we defined inside our etc/hosts file) to the four possible categories in a Windows/Linux mixed OS environment. As already said, these are Manager/Master nodes (masterwindows and masterlinux) and Worker nodes (workerwindows and workerlinux), both on Windows and Linux. The last two entries bring Ansible´s “Group of Groups” feature into the game. As all the machines of the groups masterlinux and workerlinux are based on Linux, we configure them with the help of the suffix :children to belong to the supergroup linux. The same procedure applies to the windows group of groups.
This gives us the following group variables structure:
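Based on the files discussed below, the inventory and the group variables sit side by side roughly like this (the per-group port files other than workerwindows.yml are an assumption here – check the repository for the exact layout):

hostsfile
group_vars/
  all.yml
  linux.yml
  windows.yml
  masterlinux.yml      # assumed: ansible_port for the Linux Manager node
  masterwindows.yml    # assumed: ansible_port for the Windows Manager node
  workerlinux.yml      # assumed: ansible_port for the Linux Worker node
  workerwindows.yml    # ansible_port for the Windows Worker node, shown below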
The all.yml inherits configuration that should be applied to all machines in our Cluster, regardless if they are Windows or Linux boxes. And as the user and password are always the same with Vagrant boxes, we configure them there:
ansible_user: vagrant
ansible_password: vagrant
In the windows.yml and linux.yml we finally use the mentioned ansible_connection configuration option to distinguish between both connection types. The linux.yml is simple:
ansible_connection: ssh
Besides the needed protocol definition through ansible_connection, the windows.yml adds a second configuration option for the WinRM connection to handle self-signed certificates:
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
The last thing to configure so that Ansible is able to access our Vagrant boxes is the correct port configuration. Let´s have a look into workerwindows.yml:
ansible_port: 55996
We need this configuration for every machine in the cluster. To be 100 % sure what port Vagrant uses to forward for SSH or WinRM on the specific machine, we need to configure it inside the Vagrantfile. As already mentioned in the paragraph A Multi-machine Windows- & Linux- mixed OS Vagrant setup for Docker Swarm above, this is done through a forwarded_port configuration (always remember to use the correct configuration options id: "ssh" (Linux) or id: "winrm-ssl" (Windows)):
workerwindows.vm.network "forwarded_port", guest: 5986, host: 55996, host_ip: "127.0.0.1", id: "winrm-ssl"
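The same pattern applies to the Linux boxes; a workerlinux.yml, for example, would then presumably contain nothing but the SSH port that was forwarded in the Vagrantfile above:

ansible_port: 2232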
With this configuration, we´re finally able to access both Windows and Linux boxes within one Ansible playbook. Let´s try this! Just be sure to have fired up all the machines in the cluster via vagrant up. To try the Ansible connectivity e.g. to the Windows Worker node, run the following:
ansible workerwindows -i hostsfile -m win_ping
Testing the Ansible connectivity to a Linux node, e.g. the Linux Manager node, is nearly as easy:
ansible masterlinux -i hostsfile -m ping
Only on the first run, you need to wrap the command with setting and unsetting an environment variable that enables Ansible to successfully add the new Linux host to its known hosts. So in the first run instead of just firing up one command, execute these three (as recommended here):
export ANSIBLE_HOST_KEY_CHECKING=False
ansible masterlinux -i hostsfile -m ping
unset ANSIBLE_HOST_KEY_CHECKING
If you don´t want to hassle with generating keys, you may want to install sshpass (e.g. via brew install http://ift.tt/23yg1Lz on a Mac, as there´s no brew install sshpass). In this case, you should also set and unset the environment variable as described.
And voilà: We now have Ansible configured in a way that we can control and provision our cluster with only one playbook.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo, Ansible logo
Prepare Docker engines on all nodes
Ansible is now able to connect to every box of our multi-machine Vagrant setup. There are roughly two steps left: First we need to install and configure Docker on all nodes, so that we can initialize our Docker Swarm in a second step.
Therefore, the example project´s GitHub repository has two main Ansible playbooks prepare-docker-nodes.yml and initialize-docker-swarm.yml. The first one does all the groundwork needed to be able to initialize a Docker Swarm successfully afterwards, which is done in the second one. So let´s have a more detailed look at what´s going on inside these two scripts!
As Ansible empowers us to abstract from the gory details, we should be able to understand what´s going on inside the prepare-docker-nodes.yml:
- hosts: all
  tasks:
  - name: Checking Ansible connectivity to Windows nodes
    win_ping:
    when: inventory_hostname in groups['windows']

  - name: Checking Ansible connectivity to Linux nodes
    ping:
    when: inventory_hostname in groups['linux']

  - name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
    win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
    when: inventory_hostname in groups['windows']

  - name: Prepare Docker on Windows nodes
    include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"

  - name: Prepare Docker on Linux nodes
    include: prepare-docker-linux.yml host=linux

  - name: Allow local http Docker registry
    include: allow-http-docker-registry.yml
This blog post always tries to outline a fully comprehensible setup. So if you want to give it a try, just run the playbook inside the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory:
ansible-playbook -i hostsfile prepare-docker-nodes.yml
While the complete playbook is executing, let´s dive into its structure. The first line is already quite interesting. With the hosts: all configuration we tell Ansible to use all configured hosts at the same time. This means the script will be executed on masterlinux01, masterwindows01, workerlinux01 and workerwindows01 in parallel. The following two tasks represent a best practice with Ansible: always check the connectivity to all our machines at the beginning – and stop the provisioning if a machine isn´t reachable.
As the Ansible modules for Linux and Windows are separated by design and incompatible with each other, we always need to know what kind of servers we want to execute our scripts on. We can use Ansible conditionals with the when statement for that. The conditional
when: inventory_hostname in groups['linux']
ensures that the present Ansible module is only executed on machines that are listed in the group linux. And as we defined masterlinux and workerlinux as subgroups of linux, only the hosts masterlinux01 and workerlinux01 are used here; masterwindows01 and workerwindows01 are skipped. Obviously the opposite is true when we use the following conditional:
when: inventory_hostname in groups['windows']
The next task is a Windows Server 2016-exclusive one. Because we want our Vagrant boxes to be accessible from each other, we have to allow the very basic command everybody starts with: the ping. That one is blocked by the Windows firewall by default, and we have to allow it with the following PowerShell command:
- name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
  win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
  when: inventory_hostname in groups['windows']
The following tasks finally install Docker on all of our nodes. Luckily we can rely on work that was already done here. The post Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell elaborates on how to prepare Docker on Windows in depth. The only thing we have to do here is to re-use that Ansible script with host=windows appended:
- name: Prepare Docker on Windows nodes
  include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"
The Linux counterpart is a straightforward Ansible implementation of the official “Get Docker CE for Ubuntu” guide. The prepare-docker-linux.yml called here is included from the main playbook with the host=linux setting:
- name: Prepare Docker on Linux nodes
  include: prepare-docker-linux.yml host=linux
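The exact tasks live in prepare-docker-linux.yml in the repository; as a rough, hypothetical sketch (the module names are real Ansible modules, but the task list is condensed and not a copy of the repository file), an Ansible translation of the official Ubuntu guide could look something like this:

# Illustrative sketch only – see prepare-docker-linux.yml in the repository for the real thing
- name: Install packages needed for the Docker apt repository
  apt:
    name: ['apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common']
    state: present
    update_cache: yes
  become: yes
  when: inventory_hostname in groups['linux']

- name: Add Docker's official GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present
  become: yes
  when: inventory_hostname in groups['linux']

- name: Add the Docker CE stable apt repository
  apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present
  become: yes
  when: inventory_hostname in groups['linux']

- name: Install Docker CE
  apt:
    name: docker-ce
    state: present
    update_cache: yes
  become: yes
  when: inventory_hostname in groups['linux']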
If you want to use a different Linux distribution, just add the appropriate statements inside prepare-docker-linux.yml or search for an appropriate role to use on Ansible Galaxy.
Allowing http-based local Docker Registries
The last task in the prepare-docker-nodes.yml playbook may seem rather surprising. The reason is simple: we can´t follow our old approach of building our Docker images on a single Docker host any more, because that would force us to build each image on every one of our cluster´s nodes again and again, which leads to heavy overhead. A different approach is needed here. With the help of a local Docker registry, we only need to build an image once and push it to the registry. Then the image is ready to run on all of our nodes.
How to run a Docker registry will be covered in a later step, but we have to take care of some groundwork here already. The simplest possible solution is to start with a plain http registry, which shouldn´t be a big security risk inside our isolated environment and also in many on-premises installations. Just be sure to update to https with TLS certificates if you´re going into the Cloud or if you want to provide your registry to other users outside the Docker Swarm.
Every Docker engine has to be configured to allow for interaction with a plain http registry. Therefore we have to add a daemon.json file into the appropriate folders which contains the following entry:
{ "insecure-registries" : ["172.16.2.10:5000"] }
As we want to run our Docker Swarm local registry on the Linux Manager node, we configure its IP address 172.16.2.10 here. Remember this address was itself configured inside the Vagrantfile.
But since we´re using Ansible, this step is also fully automated inside the included playbook allow-http-docker-registry.yml – including the correct daemon.json paths:
- name: Template daemon.json to /etc/docker/daemon.json on Linux nodes for later Registry access
  template:
    src: "templates/daemon.j2"
    dest: "/etc/docker/daemon.json"
  become: yes
  when: inventory_hostname in groups['linux']

- name: Template daemon.json to C:\ProgramData\docker\config\daemon.json on Windows nodes for later Registry access
  win_template:
    src: "templates/daemon.j2"
    dest: "C:\\ProgramData\\docker\\config\\daemon.json"
  when: inventory_hostname in groups['windows']
After that last step we now have every node ready with a running Docker engine and are finally able to initialize our Swarm.
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker logo
Initializing a Docker Swarm
Wow, this was quite a journey until we finally got where we wanted to be in the first place. Since Docker is now prepared on all nodes, we can continue with the second part of the example project´s GitHub repository. The playbook initialize-docker-swarm.yml contains everything that´s needed to initialize a fully functional Docker Swarm. So let´s have a look at how this is done:
- hosts: all
  vars:
    masterwindows_ip: 172.16.2.12
  tasks:
  - name: Checking Ansible connectivity to Windows nodes
    win_ping:
    when: inventory_hostname in groups['windows']

  - name: Checking Ansible connectivity to Linux nodes
    ping:
    when: inventory_hostname in groups['linux']

  - name: Open Ports in firewalls needed for Docker Swarm
    include: prepare-firewalls-for-swarm.yml

  - name: Initialize Swarm and join all Swarm nodes
    include: initialize-swarm-and-join-all-nodes.yml

  - name: Label underlying operation system to each node
    include: label-os-specific-nodes.yml

  - name: Run Portainer as Docker and Docker Swarm Visualizer
    include: run-portainer.yml

  - name: Run Docker Swarm local Registry
    include: run-swarm-registry.yml

  - name: Display the current Docker Swarm status
    include: display-swarm-status.yml
Before we go into any more details, let´s run this playbook as well:
ansible-playbook -i hostsfile initialize-docker-swarm.yml
We´ll return to our fully initialized and running Docker Swarm cluster after we´ve had a look into the details of this playbook. 🙂 The first two tasks are already familiar to us. Remember that connectivity checks should always be the first thing to do. After these checks, the prepare-firewalls-for-swarm.yml playbook opens up essential ports for the later running Swarm. This part is mentioned pretty much at the end of the Docker docs if you read them through. There are basically three firewall configurations needed. TCP port 2377 is needed to allow the connection of all Docker Swarm nodes to the Windows Manager node, where we will initialize our Swarm later on. Therefore we use the conditional when: inventory_hostname in groups['masterwindows'], which means that this port is only opened up on the Windows Manager node. The following two configurations are mentioned in the docs:
“[…] you need to have the following ports open between the swarm nodes before you enable swarm mode.”
So we need to do this even before initializing our Swarm! These are TCP/UDP port 7946 for Docker Swarm container network discovery and UDP port 4789 for Docker Swarm overlay network traffic.
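The concrete tasks are in prepare-firewalls-for-swarm.yml; a hypothetical, shortened sketch based on the ports named above could look like this (the ufw module is an assumption for the Ubuntu boxes, and the Windows nodes would get analogous netsh rules for 7946 and 4789 via win_shell):

# Illustrative sketch only – the repository's prepare-firewalls-for-swarm.yml is the reference
- name: Open Docker Swarm management port 2377/tcp on the Windows Manager node
  win_shell: "netsh advfirewall firewall add rule name='Docker Swarm 2377/tcp' dir=in action=allow protocol=TCP localport=2377"
  when: inventory_hostname in groups['masterwindows']

- name: Open container network discovery port 7946 (tcp and udp) on Linux nodes
  ufw:
    rule: allow
    port: '7946'
    proto: any
  become: yes
  when: inventory_hostname in groups['linux']

- name: Open overlay network traffic port 4789/udp on Linux nodes
  ufw:
    rule: allow
    port: '4789'
    proto: udp
  become: yes
  when: inventory_hostname in groups['linux']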
Join the Force… erm, Swarm!
The following task of our main initialize-docker-swarm.yml includes the initialize-swarm-and-join-all-nodes.yml playbook and does the heavy work needed to initialize a Docker Swarm with Ansible. Let´s go through all the steps here in detail:
- name: Leave Swarm on Windows master node, if there was a cluster before
  win_shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Initialize Docker Swarm cluster on Windows master node
  win_shell: "docker swarm init --advertise-addr={{ masterwindows_ip }} --listen-addr {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Pause a few seconds after new Swarm cluster initialization to prevent later errors on obtaining tokens too early
  pause:
    seconds: 5
...
If you´re a frequent reader of this blog post series, you´re already aware that there are many steps inside Ansible playbooks that are irrelevant for the first execution. And leaving the Swarm in the first step is such a case. If you run the playbook a second time, you will know what that is all about. It´s not a problem that this step fails at the first execution – the ignore_errors: yes configuration takes care of that.
The magic follows inside the next step. It runs the needed command to initialize a leading Docker Swarm Manager node, which we chose our Windows Manager node for. Both advertise-addr and listen-addr have to be set to the Windows Manager node in this case. As the initialization process of a Swarm takes some time, this step is followed by a pause module. We just give our Swarm some seconds in order to get itself together.
The reason for this are the following two steps, which obtain the Join Tokens needed later (and these steps occasionally fail if you run them right after the docker swarm init step). The commands to get these tokens are docker swarm join-token worker -q for Worker nodes and docker swarm join-token manager -q for Manager nodes.
...
- name: Obtain worker join-token from Windows master node
  win_shell: "docker swarm join-token worker -q"
  register: worker_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Obtain manager join-token from Windows master node
  win_shell: "docker swarm join-token manager -q"
  register: manager_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Syncing the worker and manager join-token results to the other hosts
  set_fact:
    worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
    manager_token_result_host_sync: "{{ hostvars['masterwindows01']['manager_token_result'] }}"

- name: Extracting and saving worker and manager join-tokens in variables for joining other nodes later
  set_fact:
    worker_jointoken: "{{ worker_token_result_host_sync.stdout.splitlines()[0] }}"
    manager_jointoken: "{{ manager_token_result_host_sync.stdout.splitlines()[0] }}"

- name: Join-tokens...
  debug:
    msg:
      - "The worker join-token is: '{{ worker_jointoken }}'"
      - "The manager join-token is: '{{ manager_jointoken }}'"
...
As both steps run only on the host masterwindows01, scoped via the conditional when: inventory_hostname == "masterwindows01", their results are not automatically available on the other hosts. But as we need the tokens there so that the other nodes are able to join the Swarm, we need to “synchronize” them with the help of the set_fact Ansible module and the definition of variables that are assigned the Join Tokens. To access the tokens from masterwindows01, we grab them with the following trick:
worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
The hostvars['masterwindows01'] statement gives us access to the masterwindows01 variables. The trailing ['worker_token_result'] points us to the registered result of the docker swarm join-token command. Inside the following set_fact module, the only value we need is then extracted with worker_token_result_host_sync.stdout.splitlines()[0]. Looking at the console output, the debug module prints all the extracted tokens for us.
Now we´re able to join all the other nodes to our Swarm – which again is prefixed with leaving a possibly existing Swarm, not relevant to the first execution of the playbook. To join a Worker to the Swarm, a docker swarm join --token command with the worker join-token has to be executed. To join a new Manager, a very similar docker swarm join --token command with the manager join-token is needed.
...
- name: Leave Swarm on Windows worker nodes, if there was a cluster before
  win_shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Add Windows worker nodes to Docker Swarm cluster
  win_shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Leave Swarm on Linux worker nodes, if there was a cluster before
  shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Add Linux worker nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Leave Swarm on Linux manager nodes, if there was a cluster before
  shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']

- name: Add Linux manager nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ manager_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']
...
At this point we have managed to initialize a fully functional Docker Swarm already!
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker & Docker Swarm logo
Congratulations! 🙂 But why do we need a few more steps?
Visualize the Swarm with Portainer
It´s always good to know what´s going on inside our Swarm! We are already able to obtain all the information with the help of Docker Swarm’s CLI, e.g. through docker service ls or docker service ps [yourServiceNameHere]. But it won’t hurt to also have a visual equivalent in place.
Docker´s own Swarm visualizer doesn´t look that neat compared to another tool called Portainer. There´s a good comparison available on stackshare if you´re interested. To me, Portainer seems to be the right choice when it comes to Docker and Docker Swarm visualization. And as soon as I read the following quote, I needed to get my hands on it:
“[Portainer] can be deployed as Linux container or a Windows native container.”
The Portainer configuration is already included in this setup here. The run-portainer.yml does all that´s needed:
- name: Create directory for later volume mount into Portainer service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/portainer
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Portainer Docker and Docker Swarm Visualizer on Linux Manager node as Swarm service
  shell: "docker service create --name portainer --publish 9000:9000 --constraint 'node.role == manager' --constraint 'node.labels.os==linux' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --mount type=bind,src=/mnt/portainer,dst=/data portainer/portainer:latest -H unix:///var/run/docker.sock"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
This will deploy a Portainer instance onto our Linux Manager node and connect it directly to the Swarm. For more details, see the Portainer docs. But there´s one thing that could lead to frustration: Use a current browser to access Portainer UI inside your Windows boxes! It doesn´t work inside the pre-installed Internet Explorer! Just head to http://172.16.2.10:9000 if you want to access Portainer from within the cluster.
But as we have the port forwarding configuration masterlinux.vm.network "forwarded_port", guest: 9000, host: 49000, host_ip: "127.0.0.1", id: "portainer" inside our Vagrantfile, we can also access the Portainer UI from our Vagrant host by simply pointing our browser to http://localhost:49000/:
Run a local Registry as Docker Swarm service
As already stated in the paragraph “Allowing http-based local Docker Registries”, we configured every Docker engine on every Swarm node to access http-based Docker registries. Although a local registry is only relevant for later application deployments, it´s something like a basic step when it comes to initializing a Docker Swarm cluster. So let´s start our Docker Swarm Registry Service as mentioned in the docs. There were some errors in those docs that should be fixed by now (http://ift.tt/2fvdJgp, http://ift.tt/2fvk77s & http://ift.tt/2fv6Xan). Everything needed is done inside the run-swarm-registry.yml:
- name: Specify to run Docker Registry on Linux Manager node
  shell: "docker node update --label-add registry=true masterlinux01"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"

- name: Create directory for later volume mount into the Docker Registry service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/registry
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Docker Registry on Linux Manager node as Swarm service
  shell: "docker service create --name swarm-registry --constraint 'node.labels.registry==true' --mount type=bind,src=/mnt/registry,dst=/var/lib/registry -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 -p 5000:5000 --replicas 1 registry:2"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
As we want to run our registry on our Linux Manager node, we first need to set a label on it. This is done with the docker node update --label-add command. Then we create a mountpoint inside the Linux Manager Docker host for later usage in the registry Docker container. The last step is the crucial one. It creates a Docker Swarm service running our local registry, configured to run on port 5000 and only on the Linux Manager node with the help of --constraint 'node.labels.registry==true'.
If you manually check the Swarm´s services after this command, you´ll notice a running Swarm service called swarm-registry. Or we could simply go to our Portainer UI on http://localhost:49000/ and have a look:
The Swarm is ready for action!
We´ve reached the last step in our playbook initialize-docker-swarm.yml. The last task includes the playbook display-swarm-status.yml, which doesn´t really do anything on our machines – but outputs the current Swarm status to the console that executes our playbook:
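A hypothetical sketch of such a display-swarm-status.yml could simply ask the Windows Manager node for the node list and print it (the exact tasks in the repository may differ):

# Illustrative sketch only
- name: Obtain the current Swarm node list from the Windows Manager node
  win_shell: "docker node ls"
  register: swarm_status_result
  when: inventory_hostname == "masterwindows01"

- name: Display the current Docker Swarm status
  debug:
    msg: "{{ swarm_status_result.stdout_lines }}"
  when: inventory_hostname == "masterwindows01"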
This means that our Docker Swarm cluster is ready for the deployment of our applications! Wow, this was quite a journey we took in this post. But I think we´ve already reached a lot of our goals. Again, we have a completely comprehensible setup in place. Messing something up on the way to our running Swarm is not a problem any more. Just delete everything and start fresh! As we use the “Infrastructure as code” paradigm here, everything is automated and 100 % transparent. Just have a look into the GitHub repository or the commandline output. So no “I have played around with Swarm and everything worked out on my machine” speech. This setup works. And if not, fix it with a pull request. 🙂 I can´t emphasize this enough in the context of the fast-moving development of Docker Windows Containers right now.
This brings us to the second goal: We have a fully functional mixed OS hybrid Docker Swarm cluster in place which provides every basis our applications inside Docker containers could potentially need – be it native Windows or native Linux. And by leveraging the power of Vagrant´s multi-machine setups, everything can be executed locally on your laptop while at the same time opening up to any possible cloud solution out there. So this setup provides us with a local environment that is as close to staging or even production as possible.
So what´s left? We haven´t deployed an application so far! We for sure want to deploy a lot of microservices to our Docker Swarm cluster and let them automatically be scaled out. We also need to know how we can access applications running inside the Swarm and how we can do things like rolling updates without generating any downtime to our consumers. There are many things left to talk about, but maybe there will also be a second part to this blog post.
The post Taming the Hybrid Swarm: Initializing a Mixed OS Docker Swarm Cluster Running Windows & Linux Native Containers with Vagrant & Ansible appeared first on codecentric AG Blog.
0 notes
Text
Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible
We successfully scaled our Windows Docker containers running on one Docker host. But what if we change our focus and see our distributed application as a whole, running on multiple hosts using both Windows and Linux native containers? In this case, a multi-node Docker Orchestration tool like Docker Swarm could be a marvelous option!
Running Spring Boot Apps on Windows – Blog series
Part 1: Running Spring Boot Apps on Windows with Ansible Part 2: Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell Part 3: Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose Part 4: Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible
Lifting our gaze to the application as a whole
We really went far in terms of using native Docker Windows containers to run our apps inside. We built our own Windows Vagrant boxes with Packer, prepared them to run Docker smoothly and provisioned our Apps – both fully automated with Ansible. We also scaled our Windows Docker containers using Docker Compose and Spring Cloud Netflix, not leaving our fully comprehensible setup and our willingness to have everything as code behind.
But if you look into real world projects, there are no single nodes anymore – running Docker or not. Applications today consist of a whole bunch of machines – and they naturally mix Linux and Windows. These projects need a solution to handle these distributed applications – ideally not doing everything with completely new tools. But how is this possible?
Why Docker Swarm?
This post is all about the “new” Docker Swarm mode, requiring Docker 1.12.0 as a minimum. But why did I choose this path? Today everything seems to point to Kubernetes: biggest media share, most google searches, most blog posts and so on. But there are a few things to consider before going with Kubernetes.
The first point is simple: a consultant´s real world project experience. After having in-depth discussions about it, you maybe shifted your project team to Dockerize all the (legacy) applications and finally brought all these containers into production. You should always remember: This is huge! And at least according to my experience not every team member already realized at that point what changes were applied to the team’s applications in detail, maybe also leaving some people unsure about “all this new stuff”. And now imagine you want to do the next step with Kubernetes. This means many of those “new” Docker concepts are again thrown over the pile – because Kubernetes brings in a whole bunch of new building blocks, leaving no stone unturned… And every blog post about Kubernetes and every colleague I talk with has to admit at some point that the learning curve with Kubernetes is really steep.
Second point: Many people at conferences propagate the following precedence: They tell you about “Docker 101” with the simplest steps with Docker and then go straight ahead to Kubernetes as the “next logical step”. Well guys, there´s something in between! It should be common sense that learning is ideally done step by step. The next step after Docker 101 is Docker Compose, adding a new level of abstraction. Coming from Compose, it is easy to continue with Docker Swarm – because it is built right into every Docker engine and could be used with Compose as well. It´s just called “Docker Stack” then. 🙂 And if people really do need more features than Swarm provides, then Kubernetes is for sure a good way to go!
Last point: Right now, a hybrid OS Docker cluster doesn´t really make sense with the released versions of Kubernetes and Windows Server 2016. Yes, Windows support was released on Kubernetes 1.6 (alpha with 1.5 already). But if you dive a bit deeper – and that always involves reading through the Microsoft documentation until you reach the part “current restrictions/limitations” – you´ll find the nasty things. As for now, the Windows network subsystem HNS isn´t really Kubernetes-ready, you have to plumb all the networking stuff (like routing tables) manually together. And one container per pod does not really make sense if you want to leverage the power of Kubernetes! Because the Windows SIG ist doing a really great job, these restrictions will not last for long any more and it is planned to have most of them solved by Kubernetes 1.8 and Windows Server 2016 Build 1709.
So if you want to run hybrid OS Docker clusters, just sit back and start with Docker Swarm. I think we´ll see a hybrid OS Kubernetes setup here on the blog in the near future, if Microsoft and the Windows SIG continue their work. 🙂
Building a multi-machine-ready Windows Vagrant box with Packer
Enough talk, let´s get our hands dirty! The last blog posts about Docker Windows containers already showed that only fully comprehensible setups will be used here. The claim is to not leave any stones in your way to get from zero to a running Docker Swarm at the end of this article. Therefore the already well-known GitHub repository ansible-windows-docker-springboot was extended by the next step step4-windows-linux-multimachine-vagrant-docker-swarm-setup.
There are basically two options to achieve a completely comprehensible multi-node setup: running more than one virtual machine on your local machine or using some cloud infrastructure. As I really came to love Vagrant as a tool to handle my virtual machines, why not use it again? And thanks to a colleague of mine´s hint, I found that Vagrant is also able to handle multi-machine setups. This would free us from the need to have access to a certain cloud provider, although the setup would be easily adaptable to one of these.
The only thing that would prevent us from using Vagrant would be the lack of a Windows Server 2016 Vagrant box. But luckily this problem was already solved in the second part of this blog post´s series and we could re-use the setup with Packer.io nearly one to one. There´s only a tiny difference in the Vagrantfile template for Packer: We shouldn´t define a port forwarding or a concrete VirtualBox VM name in this base box. Therefore we need a separate Vagrantfile template vagrantfile-windows_2016-multimachine.template, which is smaller than the one used in the second blog post:
Vagrant.configure("2") do |config| config.vm.box = "windows_2016_docker_multi" config.vm.guest = :windows config.windows.halt_timeout = 15 # Configure Vagrant to use WinRM instead of SSH config.vm.communicator = "winrm" # Configure WinRM Connectivity config.winrm.username = "vagrant" config.winrm.password = "vagrant" end
To be able to use a different Vagrantfile template in Packer, I had to refactor the Packer configuration windows_server_2016_docker.json slightly to accept a Vagrantfile template name (via template_url) and Vagrant box output name (box_output_prefix) as parameters. Now we´re able to create another kind of Windows Vagrant box, which we could use in our multi-machine setup.
So let´s go to commandline, clone the mentioned GitHub repository ansible-windows-docker-springboot and run the following Packer command inside the step0-packer-windows-vagrantbox directory (just be sure to have a current Packer version installed):
packer build -var iso_url=14393.0.161119-1705.RS1_REFRESH_SERVER_EVAL_X64FRE_EN-US.ISO -var iso_checksum=70721288bbcdfe3239d8f8c0fae55f1f -var template_url=vagrantfile-windows_2016-multimachine.template -var box_output_prefix=windows_2016_docker_multimachine windows_server_2016_docker.json
This could take some time and you´re encouraged to grab a coffee. It´s finished when there´s a new windows_2016_docker_multimachine_virtualbox.box inside the step0-packer-windows-vagrantbox directory. Let´s finally add the new Windows 2016 Vagrant base box to the local Vagrant installation:
vagrant box add --name windows_2016_multimachine windows_2016_docker_multimachine_virtualbox.box
A multi-machine Windows & Linux mixed OS Vagrant setup for Docker Swarm
Now that we have our Windows Vagrant base box in place, we can move on to the next step: the multi-machine Vagrant setup. Just switch over to the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory and have a look at the Vagrantfile there. Here´s a shortened version where we can see the basic structure with the defintion of our local Cloud infrastructure:
Vagrant.configure("2") do |config| # One Master / Manager Node with Linux config.vm.define "masterlinux" do |masterlinux| masterlinux.vm.box = "ubuntu/trusty64" ... end # One Worker Node with Linux config.vm.define "workerlinux" do |workerlinux| workerlinux.vm.box = "ubuntu/trusty64" ... end # One Master / Manager Node with Windows Server 2016 config.vm.define "masterwindows" do |masterwindows| masterwindows.vm.box = "windows_2016_multimachine" ... end # One Worker Node with Windows Server 2016 config.vm.define "workerwindows" do |workerwindows| workerwindows.vm.box = "windows_2016_multimachine" ... end end
It defines four machines to show the many possible solutions in a hybrid Docker Swarm cluster containing Windows and Linux boxes: Manager and Worker nodes, also both as Windows and Linux machines.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo
Within a Vagrant multi-machine setup, you define your separate machines with the config.vm.define keyword. Inside those define blocks we simply configure our individual machine. Let´s have a more detailed look at the workerlinux box:
# One Worker Node with Linux config.vm.define "workerlinux" do |workerlinux| workerlinux.vm.box = "ubuntu/trusty64" workerlinux.vm.hostname = "workerlinux01" workerlinux.ssh.insert_key = false workerlinux.vm.network "forwarded_port", guest: 22, host: 2232, host_ip: "127.0.0.1", id: "ssh" workerlinux.vm.network "private_network", ip: "172.16.2.11" workerlinux.vm.provider :virtualbox do |virtualbox| virtualbox.name = "WorkerLinuxUbuntu" virtualbox.gui = true virtualbox.memory = 2048 virtualbox.cpus = 2 virtualbox.customize ["modifyvm", :id, "--ioapic", "on"] virtualbox.customize ["modifyvm", :id, "--vram", "16"] end end
The first configuration statements are usual ones like configuring the Vagrant box to use or the VM´s hostname. But the forwarded port configuration is made explicit because we need to rely on the exact port later in our Ansible scripts. This isn´t possible with Vagrant’s default Port Correction feature. Since you won´t be able to use a port on your host machine more than once, Vagrant would automatically set it to a random value – and we wouldn’t be able to access our boxes later with Ansible.
To define and override the SSH port of a preconfigured Vagrant box, we need to know the id which is used to define it in the base box. Using Linux boxes, this is ssh – and with Windows this is winrm-ssl (which I found slightly un-documented…).
Networking between the Vagrant boxes
The next tricky part is the network configuration between the Vagrant boxes. As they need to talk to each other and also to the host, the so-called Host-only networking should be the way to go here (there´s a really good overview in this post, german only). Host-only networking is easily established using Vagrant’s Private Networks configuration.
And as we want to access our boxes with a static IP, we leverage the Vagrant configuration around Vagrant private networking. All that´s needed here is a line like this inside every Vagrant box definition of our multi-machine setup:
masterlinux.vm.network "private_network", ip: "172.16.2.10"
Same for Windows boxes. Vagrant will tell VirtualBox to create a new separate network (mostly vboxnet1 or similar), put a second virtual network device into every box and assign it with the static IP we configured in our Vagrantfile. That´s pretty much everything, except for Windows Server. 🙂 But we´ll take care of that soon.
Ansible access to the Vagrant boxes
Starting with the provisioning of multiple Vagrant boxes, the first approach might be to use Vagrant´s Ansible Provisioner and just have something like the following statement in your Vagrantfile:
config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" end
But remember the purpose of this article: We want to initialize a Docker Swarm later using Ansible. And as this process involves generating and exchanging Join Tokens between the different Vagrant boxes, we need one central Ansible script to share these tokens. If we separated our Ansible scripts into as many as machines as our Cluster has (here: four), we would lose many advantages of Ansible and wouldn´t be able to share the tokens. Additionally, it would be great if we could fire up our entire application with one Ansible command, no matter if it´s distributed over a hybrid Cluster of Windows and Linux machines.
So we want one Ansible playbook that´s able to manage all nodes for us. But there´s a trap: using the same host in multiple groups is possible with Ansible, but all the inventory and group variables will be merged automatically. That is, because Ansible is designed to do that based on the host´s name. So please don´t do the following:
[masterwindows] 127.0.0.1 [masterlinux] 127.0.0.1 [workerwindows] 127.0.0.1
We somehow need to give Ansible a different hostname for our servers, although they are all local and share the same IP. Because a later-stage-based setup wouldn´t have this problem any more, we only need a solution for our local development environment with Vagrant. And there´s a quite simple one: just edit your etc/hosts on MacOS/Linux or Windows\system32\drivers\etc\hosts on Windows and add the following entries:
127.0.0.1 masterlinux01 127.0.0.1 workerlinux01 127.0.0.1 masterwindows01 127.0.0.1 workerwindows01
This is a small step we have to do by hand, but you can also work around it if you want. There are Vagrant plugins like vagrant-hostmanager that allow you to define these hostfile entries based on the config.vm.hostname configuration in your Vagrantfile. But this will require you to input your admin password every time you run vagrant up, which is also quite manually. Another alternative would have been to use the static IPs we configured in our host-only network. But it is really nice to see those aliases like masterlinux01 or workerwindows01 later beeing provisioned in the Ansible playbooks runs – you always know what machine is currently in action 🙂
Now we´re where we wanted to be: We have a Vagrant multi-machine setup in place that fires up a mixed OS cluster with a simple command. All we have to do is to run a well-known vagrant up:
Just be sure to have at least 8 GB of RAM to spare because every box has 2048 MB configured. You could also tweak that configuration in the Vagrantfile – but don´t go too low 🙂 And never mind, if you want to have a break or your notebook is running hot – just type vagrant halt. And the whole zoo of machines will be stopped for you.
Provisioning Windows & Linux machines inside one Ansible playbook
Now let´s hand over the baton to Ansible. But as you may have already guessed: The tricky part is to configure Ansible in a way that enables it to provision both Windows and Linux machines inside one playbook. As we already found out, Ansible is not only able to provision Linux machines, but also doesn´t shrink back from Windows boxes.
But handling Windows and Linux inside the same playbook requires a configuration option to be able to access both Linux machines via SSH and Windows machines via WinRM. The key configuration parameter to success here really is ansible_connection. Handling both operating systems with Ansible at the same time isn´t really well documented – but it´s possible. Let´s have a look at how this blog post´s setup handles this challenge. Therefore we begin with the hostsfile:
[masterwindows]
masterwindows01

[masterlinux]
masterlinux01

[workerwindows]
workerwindows01

[workerlinux]
workerlinux01

[linux:children]
masterlinux
workerlinux

[windows:children]
masterwindows
workerwindows
The first four definitions simply map our Vagrant box machine names (which we defined inside our etc/hosts file) onto the four possible categories in a Windows/Linux mixed OS environment. As already said, these are Manager/Master nodes (masterwindows and masterlinux) and Worker nodes (workerwindows and workerlinux), both on Windows and on Linux. The last two entries bring Ansible´s “Group of Groups” feature into the game. As all the machines of the groups masterlinux and workerlinux are based on Linux, we use the suffix :children to configure them to belong to the supergroup linux. The same procedure applies to the windows group of groups.
This gives us the following group variables structure:
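The original post shows this structure as a figure; reproduced as a plain listing, the group_vars directory looks roughly like this (the per-group files are inferred from the ones referenced in the following paragraphs):

group_vars/
  all.yml
  linux.yml
  windows.yml
  masterlinux.yml
  masterwindows.yml
  workerlinux.yml
  workerwindows.yml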
The all.yml contains configuration that should be applied to all machines in our cluster, regardless of whether they are Windows or Linux boxes. And as the user and password are always the same with Vagrant boxes, we configure them there:
ansible_user: vagrant
ansible_password: vagrant
In the windows.yml and linux.yml we finally use the mentioned ansible_connection configuration option to distinguish between both connection types. The linux.yml is simple:
ansible_connection: ssh
Besides the needed protocol definition through ansible_connection, the windows.yml adds a second configuration option for the WinRM connection to handle self-signed certificates:
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
The last thing to configure so that Ansible is able to access our Vagrant boxes is the correct port configuration. Let´s have a look into workerwindows.yml:
ansible_port: 55996
We need this configuration for every machine in the cluster. To be 100 % sure what port Vagrant uses to forward for SSH or WinRM on the specific machine, we need to configure it inside the Vagrantfile. As already mentioned in the paragraph A Multi-machine Windows- & Linux- mixed OS Vagrant setup for Docker Swarm above, this is done through a forwarded_port configuration (always remember to use the correct configuration options id: "ssh" (Linux) or id: "winrm-ssl" (Windows)):
workerwindows.vm.network "forwarded_port", guest: 5986, host: 55996, host_ip: "127.0.0.1", id: "winrm-ssl"
With this configuration, we´re finally able to access both Windows and Linux boxes within one Ansible playbook. Let´s try this! Just be sure to have fired up all the machines in the cluster via vagrant up. To try the Ansible connectivity e.g. to the Windows Worker node, run the following:
ansible workerwindows -i hostsfile -m win_ping
Testing the Ansible connectivity to a Linux node, e.g. the Linux Manager node, is nearly as easy:
ansible masterlinux -i hostsfile -m ping
Only on the first run, you need to wrap the command with setting and unsetting an environment variable that enables Ansible to successfully add the new Linux host to its known hosts. So in the first run instead of just firing up one command, execute these three (as recommended here):
export ANSIBLE_HOST_KEY_CHECKING=False
ansible masterlinux -i hostsfile -m ping
unset ANSIBLE_HOST_KEY_CHECKING
If you don´t want to hassle with generating SSH keys, you may want to install sshpass (e.g. via brew install http://ift.tt/23yg1Lz on a Mac, as there´s no plain brew install sshpass). In this case, you should also set and unset the environment variable as described.
And voilà: We now have Ansible configured in a way that we can control and provision our cluster with only one playbook.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo, Ansible logo
Prepare Docker engines on all nodes
Ansible is now able to connect to every box of our multi-machine Vagrant setup. There are roughly two steps left: First we need to install and configure Docker on all nodes, so that we can initialize our Docker Swarm in a second step.
Therefore, the example project´s GitHub repository has two main Ansible playbooks prepare-docker-nodes.yml and initialize-docker-swarm.yml. The first one does all the groundwork needed to be able to initialize a Docker Swarm successfully afterwards, which is done in the second one. So let´s have a more detailed look at what´s going on inside these two scripts!
As Ansible empowers us to abstract from the gory details, we should be able to understand what´s going on inside the prepare-docker-nodes.yml:
- hosts: all
  tasks:
  - name: Checking Ansible connectivity to Windows nodes
    win_ping:
    when: inventory_hostname in groups['windows']

  - name: Checking Ansible connectivity to Linux nodes
    ping:
    when: inventory_hostname in groups['linux']

  - name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
    win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
    when: inventory_hostname in groups['windows']

  - name: Prepare Docker on Windows nodes
    include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"

  - name: Prepare Docker on Linux nodes
    include: prepare-docker-linux.yml host=linux

  - name: Allow local http Docker registry
    include: allow-http-docker-registry.yml
This blog post always tries to outline a fully comprehensible setup. So if you want to give it a try, just run the playbook inside the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory:
ansible-playbook -i hostsfile prepare-docker-nodes.yml
While the complete playbook is executing, let´s dive into its structure. The first line is already quite interesting: with the hosts: all configuration we tell Ansible to use all configured hosts at the same time. This means the script will be executed on masterlinux01, masterwindows01, workerlinux01 and workerwindows01 in parallel. The following two tasks represent a best practice with Ansible: always check the connectivity to all our machines at the beginning – and stop the provisioning if a machine isn´t reachable.
As the Ansible modules for Linux and Windows are separated by design and not compatible with each other, we always need to know on what kind of server we want to execute our scripts. In this case we can use Ansible conditionals with the when statement. The conditional
when: inventory_hostname in groups['linux']
ensures that the present Ansible module is only executed on machines that are listed in the group linux. And as we defined masterlinux and workerlinux as subgroups of linux, only the hosts masterlinux01 and workerlinux01 are used here; masterwindows01 and workerwindows01 are skipped. Obviously the opposite is true when we use the following conditional:
when: inventory_hostname in groups['windows']
The next task is a Windows Server 2016 exclusive. Because we want our Vagrant boxes to be able to reach each other, we have to allow the most basic command everybody starts with: ping. It is blocked by the Windows firewall by default, so we have to allow it with the following PowerShell command:
- name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
  win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
  when: inventory_hostname in groups['windows']
The following tasks finally install Docker on all of our nodes. Luckily, we can rely on work that has already been done. The post Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell elaborates in depth on how to prepare Docker on Windows. The only thing we have to do here is to re-use that Ansible script with host=windows appended:
- name: Prepare Docker on Windows nodes
  include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"
The Linux counterpart is a straightforward Ansible implementation of the official “Get Docker CE for Ubuntu” Guide. The called prepare-docker-linux.yml is included from the main playbook with the host=linux setting:
- name: Prepare Docker on Linux nodes
  include: prepare-docker-linux.yml host=linux
If you want to use a different Linux distribution, just add the appropriate statements inside prepare-docker-linux.yml or search for an appropriate role to use on Ansible Galaxy.
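For example, a popular community role for installing Docker could be pulled in with a single command (the role name here is just an illustration of what´s available on Galaxy, not a dependency of the example repository):

ansible-galaxy install geerlingguy.docker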
Allowing http-based local Docker Registries
The last task in the prepare-docker-nodes.yml playbook seems to be rather surprising. The reason is simple: We can´t follow our old approach of building our Docker images on a single Docker host any more, because that way we would be forced to build each image on every one of our cluster´s nodes again and again, which leads to heavy overhead. A different approach is needed here: with the help of a local Docker registry, we only need to build an image once and push it to the registry. Then the image is ready to run on all of our nodes.
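Just to sketch the idea with plain Docker commands (the image name is purely an example, and the registry address is the one we´ll configure for the Linux Manager node in a minute):

docker build -t 172.16.2.10:5000/myspringapp:1.0 .
docker push 172.16.2.10:5000/myspringapp:1.0
docker service create --name myspringapp 172.16.2.10:5000/myspringapp:1.0

Every Swarm node that has to run a replica can then simply pull the image from 172.16.2.10:5000 instead of building it locally.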
How to run a Docker registry will be covered in a later step, but we have to take care of some groundwork here already. The simplest possible solution is to start with a plain http registry, which shouldn´t be a big security risk inside our isolated environment and also in many on-premises installations. Just be sure to update to https with TLS certificates if you´re going into the Cloud or if you want to provide your registry to other users outside the Docker Swarm.
Every Docker engine has to be configured to allow for interaction with a plain http registry. Therefore we have to add a daemon.json file into the appropriate folders which contains the following entry:
{ "insecure-registries" : ["172.16.2.10:5000"] }
As we want to run our Docker Swarm local registry on the Linux Manager node, we configure its IP address 172.16.2.10 here. Remember this address was itself configured inside the Vagrantfile.
But since we´re using Ansible, this step is also fully automated inside the included playbook allow-http-docker-registry.yml – including the correct daemon.json paths:
- name: Template daemon.json to /etc/docker/daemon.json on Linux nodes for later Registry access
  template:
    src: "templates/daemon.j2"
    dest: "/etc/docker/daemon.json"
  become: yes
  when: inventory_hostname in groups['linux']

- name: Template daemon.json to C:\ProgramData\docker\config\daemon.json on Windows nodes for later Registry access
  win_template:
    src: "templates/daemon.j2"
    dest: "C:\\ProgramData\\docker\\config\\daemon.json"
  when: inventory_hostname in groups['windows']
After that last step we now have every node ready with a running Docker engine and are finally able to initialize our Swarm.
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker logo
Initializing a Docker Swarm
Wow, this was quite a journey until we finally got where we wanted to be in the first place. Since Docker is now prepared on all nodes, we can continue with the mentioned second playbook of the example project´s GitHub repository. The playbook initialize-docker-swarm.yml contains everything that´s needed to initialize a fully functional Docker Swarm. So let´s have a look at how this is done:
- hosts: all
  vars:
    masterwindows_ip: 172.16.2.12
  tasks:
  - name: Checking Ansible connectivity to Windows nodes
    win_ping:
    when: inventory_hostname in groups['windows']

  - name: Checking Ansible connectivity to Linux nodes
    ping:
    when: inventory_hostname in groups['linux']

  - name: Open Ports in firewalls needed for Docker Swarm
    include: prepare-firewalls-for-swarm.yml

  - name: Initialize Swarm and join all Swarm nodes
    include: initialize-swarm-and-join-all-nodes.yml

  - name: Label underlying operation system to each node
    include: label-os-specific-nodes.yml

  - name: Run Portainer as Docker and Docker Swarm Visualizer
    include: run-portainer.yml

  - name: Run Docker Swarm local Registry
    include: run-swarm-registry.yml

  - name: Display the current Docker Swarm status
    include: display-swarm-status.yml
Before we go into any more details, let´s run this playbook also:
ansible-playbook -i hostsfile initialize-docker-swarm.yml
We´ll return to your fully initialized and running Docker Swarm cluster after we have had a look into the details of this playbook. 🙂 The first two tasks are already familiar to us. Remember that connectivity checks should always be the first thing to do. After these checks, the prepare-firewalls-for-swarm.yml playbook opens up essential ports for the later running Swarm. This part is mentioned pretty much at the end of the Docker docs if you read them through. There are basically three firewall configurations needed. TCP port 2377 is needed to allow the connection of all Docker Swarm nodes to the Windows Manager node, where we will initialize our Swarm later on. Therefore we use the conditional when: inventory_hostname in groups['masterwindows'], which means that this port is only opened up on the Windows Manager node. The following two configurations are mentioned in the docs:
“[…] you need to have the following ports open between the swarm nodes before you enable swarm mode.”
So we need to open these ports even before we initialize our Swarm! These are TCP/UDP port 7946 for Docker Swarm container network discovery and UDP port 4789 for Docker Swarm overlay network traffic.
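The playbook itself isn´t printed in this post, but based on these three ports a rough sketch of what it has to do could look like the following (the task names and the concrete netsh/ufw commands are my illustration here, not necessarily the repository´s exact implementation):

- name: Open Swarm management port 2377/tcp on the Windows Manager node
  win_shell: "netsh advfirewall firewall add rule name='Docker Swarm management' protocol=TCP localport=2377 dir=in action=allow"
  when: inventory_hostname in groups['masterwindows']

- name: Open node discovery (7946/tcp+udp) and overlay network (4789/udp) ports on Linux nodes
  shell: "ufw allow 7946 && ufw allow 4789/udp"
  become: yes
  when: inventory_hostname in groups['linux']

The Windows nodes need the same discovery and overlay ports opened via netsh as well – omitted here for brevity.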
Join the Force… erm, Swarm!
The following task of our main initialize-docker-swarm.yml includes the initialize-swarm-and-join-all-nodes.yml playbook and does the heavy work needed to initialize a Docker Swarm with Ansible. Let´s go through all the steps here in detail:
- name: Leave Swarm on Windows master node, if there was a cluster before
  win_shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Initialize Docker Swarm cluster on Windows master node
  win_shell: "docker swarm init --advertise-addr={{ masterwindows_ip }} --listen-addr {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Pause a few seconds after new Swarm cluster initialization to prevent later errors on obtaining tokens too early
  pause:
    seconds: 5
...
If you´re a frequent reader of this blog post series, you´re already aware that there are many steps inside Ansible playbooks that are irrelevant for the first execution. And leaving the Swarm in the first step is such a case. If you run the playbook the next time, you will know what that is all about. It´s not a problem that this step will fail at the first execution. The ignore_errors: yes configuration takes care of that.
The magic follows inside the next step. It runs the needed command to initialize a leading Docker Swarm Manager node, which we chose our Windows Manager node for. Both advertise-addr and listen-addr have to be set to the Windows Manager node in this case. As the initialization process of a Swarm takes some time, this step is followed by a pause module. We just give our Swarm some seconds in order to get itself together.
The reason for this pause is the following two steps, which obtain the Join Tokens needed later (these steps occasionally fail if you run them right after the docker swarm init step). The commands to get these tokens are docker swarm join-token worker -q for Worker nodes and docker swarm join-token manager -q for Manager nodes.
...
- name: Obtain worker join-token from Windows master node
  win_shell: "docker swarm join-token worker -q"
  register: worker_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Obtain manager join-token from Windows master node
  win_shell: "docker swarm join-token manager -q"
  register: manager_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Syncing the worker and manager join-token results to the other hosts
  set_fact:
    worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
    manager_token_result_host_sync: "{{ hostvars['masterwindows01']['manager_token_result'] }}"

- name: Extracting and saving worker and manager join-tokens in variables for joining other nodes later
  set_fact:
    worker_jointoken: "{{ worker_token_result_host_sync.stdout.splitlines()[0] }}"
    manager_jointoken: "{{ manager_token_result_host_sync.stdout.splitlines()[0] }}"

- name: Join-tokens...
  debug:
    msg:
    - "The worker join-token is: '{{ worker_jointoken }}'"
    - "The manager join-token is: '{{ manager_jointoken }}'"
...
As both steps are scoped via the conditional when: inventory_hostname == "masterwindows01" to run only on the host masterwindows01, their results are not automatically available on the other hosts. But the other hosts need these tokens to be able to join the Swarm, so we “synchronize” them with the help of the set_fact Ansible module by defining variables that are assigned the Join Tokens. To access the tokens from masterwindows01, we grab them with the following trick:
worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
The hostvars['masterwindows01'] statement gives us access to the masterwindows01 variables. The trailing ['worker_token_result'] points us to the registered result of the docker swarm join-token command. Inside the following set_fact module, the only value we need is then extracted with worker_token_result_host_sync.stdout.splitlines()[0]. Looking at the console output, the debug module prints all the extracted tokens for us.
Now we´re able to join all the other nodes to our Swarm – again prefixed with leaving a possibly existing Swarm, which is not relevant for the first execution of the playbook. To join a Worker to the Swarm, the docker swarm join command has to be executed with the worker join token; to join a new Manager, the very same command is used with the manager join token.
...
- name: Leave Swarm on Windows worker nodes, if there was a cluster before
  win_shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Add Windows worker nodes to Docker Swarm cluster
  win_shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Leave Swarm on Linux worker nodes, if there was a cluster before
  shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Add Linux worker nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Leave Swarm on Linux manager nodes, if there was a cluster before
  shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']

- name: Add Linux manager nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ manager_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']
...
At this point we have managed to initialize a fully functional Docker Swarm already!
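If you want to see that with your own eyes, just hop onto one of the Manager nodes and ask Docker for its view of the cluster – the node list should contain all four hostnames, with the Windows Manager node marked as Leader (prefix the second command with sudo if your user isn´t in the docker group):

vagrant ssh masterlinux
docker node ls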
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker & Docker Swarm logo
Congratulations! 🙂 But why do we need a few more steps?
Visualize the Swarm with Portainer
It´s always good to know what´s going on inside our Swarm! We are already able to obtain all the information with the help of Docker Swarm’s CLI, e.g. through docker service ls or docker service ps [yourServiceNameHere]. But it won’t hurt to also have a visual equivalent in place.
Docker´s own Swarm visualizer doesn´t look that neat compared to another tool called Portainer. There´s a good comparison available on stackshare if you´re interested. To me, Portainer seems to be the right choice when it comes to Docker and Docker Swarm visualization. And as soon as I read the following quote, I needed to get my hands on it:
“[Portainer] can be deployed as Linux container or a Windows native container.”
The Portainer configuration is already included in this setup here. The run-portainer.yml does all that´s needed:
- name: Create directory for later volume mount into Portainer service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/portainer
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Portainer Docker and Docker Swarm Visualizer on Linux Manager node as Swarm service
  shell: "docker service create --name portainer --publish 9000:9000 --constraint 'node.role == manager' --constraint 'node.labels.os==linux' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --mount type=bind,src=/mnt/portainer,dst=/data portainer/portainer:latest -H unix:///var/run/docker.sock"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
This will deploy a Portainer instance onto our Linux Manager node and connect it directly to the Swarm. For more details, see the Portainer docs. But there´s one thing that could lead to frustration: Use a current browser to access Portainer UI inside your Windows boxes! It doesn´t work inside the pre-installed Internet Explorer! Just head to http://172.16.2.10:9000 if you want to access Portainer from within the cluster.
But as we have the port forwarding configuration masterlinux.vm.network "forwarded_port", guest: 9000, host: 49000, host_ip: "127.0.0.1", id: "portainer" inside our Vagrantfile, we can also access the Portainer UI from our Vagrant host by simply pointing our browser to http://localhost:49000/:
Run a local Registry as Docker Swarm service
As already stated in the paragraph “Allowing http-based local Docker Registries”, we configured every Docker engine on every Swarm node to access http-based Docker registries. Although a local registry is only relevant for later application deployments, it´s something like a basic step when it comes to initializing a Docker Swarm Cluster. So let´s start our Docker Swarm Registry Service as mentioned in the docs. There were some errors in those docs that should be fixed by now (http://ift.tt/2fvdJgp, http://ift.tt/2fvk77s & http://ift.tt/2fv6Xan). Everything needed is done inside the run-swarm-registry.yml:
- name: Specify to run Docker Registry on Linux Manager node
  shell: "docker node update --label-add registry=true masterlinux01"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"

- name: Create directory for later volume mount into the Docker Registry service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/registry
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Docker Registry on Linux Manager node as Swarm service
  shell: "docker service create --name swarm-registry --constraint 'node.labels.registry==true' --mount type=bind,src=/mnt/registry,dst=/var/lib/registry -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 -p 5000:5000 --replicas 1 registry:2"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
As we want to run our registry on the Linux Manager node, we first need to set a label on it. This is done with the docker node update --label-add command. Then we create a mount point inside the Linux Manager Docker host for later use by the registry Docker container. The last step is the crucial one: it creates a Docker Swarm service running our local registry, configured to listen on port 5000 and to run only on the Linux Manager node with the help of --constraint 'node.labels.registry==true'.
If you manually check the Swarm´s services after this command, you´ll notice a running Swarm service called swarm-registry. Or we could simply go to our Portainer UI on http://localhost:49000/ and have a look.
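If you prefer the command line over a UI, the equivalent check is a quick service listing on one of the Manager nodes, which should now contain both the portainer and the swarm-registry service:

docker service ls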
The Swarm is ready for action!
We´ve reached the last step in our playbook initialize-docker-swarm.yml. The last task includes the playbook display-swarm-status.yml, which doesn´t really do anything on our machines – but outputs the current Swarm status to the console that executes our playbook:
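In essence, it boils down to running docker node ls on the Windows Manager node and echoing the result – roughly like this sketch (the repository´s exact tasks may differ slightly):

- name: Get the current Docker Swarm status from the Windows Manager node
  win_shell: "docker node ls"
  register: swarm_status_result
  when: inventory_hostname == "masterwindows01"

- name: Display the current Docker Swarm status
  debug:
    msg: "{{ swarm_status_result.stdout_lines }}"
  when: inventory_hostname == "masterwindows01"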
This means that our Docker Swarm cluster is ready for the deployment of our applications! Wow, this was quite a journey we did in this post. But I think we´ve already reached a lot of our goals. Again, we have a completely comprehensible setup in place. Messing up something on the way to our running Swarm is not a problem any more. Just delete everything and start fresh! As we use the “Infrastructure as code” paradigm here, everything is automated and 100 % transparent. Just have a look into the GitHub repository or the command line output. So no “I have played around with Swarm and everything worked out on my machine” speech. This setup works. And if not, fix it with a pull request. 🙂 I can´t emphasize this enough in the context of the fast-moving development of Docker Windows Containers right now.
This brings us to the second goal: We have a fully functional mixed OS hybrid Docker Swarm cluster in place which provides every potentially needed basis for our applications inside Docker containers – be it native Windows or native Linux. And by leveraging the power of Vagrant´s multi-machine setups, everything can be executed locally on your laptop while at the same time opening up to any possible cloud solution out there. So this setup will provide us with a local environment which is as near to staging or even production as possible.
So what´s left? We haven´t deployed an application so far! We for sure want to deploy a lot of microservices to our Docker Swarm cluster and let them automatically be scaled out. We also need to know how we can access applications running inside the Swarm and how we can do things like rolling updates without generating any downtime to our consumers. There are many things left to talk about, but maybe there will also be a second part to this blog post.
The post Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible appeared first on codecentric AG Blog.
Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible published first on http://ift.tt/2vCN0WJ
Text
PowerShell : Continue Keyword
One of the things that is a bit confusing for PowerShell beginners is the Continue statement. This is partially because it does not act like it might in some other languages. The next question that arises is: what does the Continue statement do in Windows PowerShell? The answer is that it returns flow to the top of the innermost loop that is controlled by a While, For,…
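A minimal example makes this behavior obvious: the continue below skips the rest of the loop body for the value 3, but the foreach loop itself keeps running with the next value.

foreach ($number in 1..5) {
    if ($number -eq 3) {
        # Skip the Write-Output below for 3 and continue with 4
        continue
    }
    Write-Output "Processing number $number"
}
# Prints: Processing number 1, 2, 4 and 5 - the 3 is skipped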
#Continue Keyword in PowerShell #dotnet-helpers #dotnethelpers.com #How continue keyword work powershell #Jquery Tutorial #knockout tutorial #poweshell tutorial #thiyagu
Text
Original Post from FireEye Author: Vikram Hegde
This blog post presents a machine learning (ML) approach to solving an emerging security problem: detecting obfuscated Windows command line invocations on endpoints. We start out with an introduction to this relatively new threat capability, and then discuss how such problems have traditionally been handled. We then describe a machine learning approach to solving this problem and point out how ML vastly simplifies development and maintenance of a robust obfuscation detector. Finally, we present the results obtained using two different ML techniques and compare the benefits of each.
Introduction
Malicious actors are increasingly “living off the land,” using built-in utilities such as PowerShell and the Windows Command Processor (cmd.exe) as part of their infection workflow in an effort to minimize the chance of detection and bypass whitelisting defense strategies. The release of new obfuscation tools makes detection of these threats even more difficult by adding a layer of indirection between the visible syntax and the final behavior of the command. For example, Invoke-Obfuscation and Invoke-DOSfuscation are two recently released tools that automate the obfuscation of Powershell and Windows command lines respectively.
The traditional pattern matching and rule-based approaches for detecting obfuscation are difficult to develop and generalize, and can pose a huge maintenance headache for defenders. We will show how using ML techniques can address this problem.
Detecting obfuscated command lines is a very useful technique because it allows defenders to reduce the data they must review by providing a strong filter for possibly malicious activity. While there are some examples of “legitimate” obfuscation in the wild, in the overwhelming majority of cases, the presence of obfuscation generally serves as a signal for malicious intent.
Background
There has been a long history of obfuscation being employed to hide the presence of malware, ranging from encryption of malicious payloads (starting with the Cascade virus) and obfuscation of strings, to JavaScript obfuscation. The purpose of obfuscation is two-fold:
Make it harder to find patterns in executable code, strings or scripts that can easily be detected by defensive software.
Make it harder for reverse engineers and analysts to decipher and fully understand what the malware is doing.
In that sense, command line obfuscation is not a new problem – it is just that the target of obfuscation (the Windows Command Processor) is relatively new. The recent release of tools such as Invoke-Obfuscation (for PowerShell) and Invoke-DOSfuscation (for cmd.exe) have demonstrated just how flexible these commands are, and how even incredibly complex obfuscation will still run commands effectively.
There are two categorical axes in the space of obfuscated vs. non-obfuscated command lines: simple/complex and clear/obfuscated (see Figure 1 and Figure 2). For this discussion “simple” means generally short and relatively uncomplicated, but can still contain obfuscation, while “complex” means long, complicated strings that may or may not be obfuscated. Thus, the simple/complex axis is orthogonal to obfuscated/unobfuscated. The interplay of these two axes produce many boundary cases where simple heuristics to detect if a script is obfuscated (e.g. length of a command) will produce false positives on unobfuscated samples. The flexibility of the command line processor makes classification a difficult task from an ML perspective.
Figure 1: Dimensions of obfuscation
Figure 2: Examples of weak and strong obfuscation
Traditional Obfuscation Detection
Traditional obfuscation detection can be split into three approaches. One approach is to write a large number of complex regular expressions to match the most commonly abused syntax of the Windows command line. Figure 3 shows one such regular expression that attempts to match ampersand chaining with a call command, a common pattern seen in obfuscation. Figure 4 shows an example command sequence this regex is designed to detect.
Figure 3: A common obfuscation pattern captured as a regular expression
Figure 4: A common obfuscation pattern (calling echo in obfuscated fashion in this example)
There are two problems with this approach. First, it is virtually impossible to develop regular expressions to cover every possible abuse of the command line. The flexibility of the command line results in a non-regular language, which is feasible yet impractical to express using regular expressions. A second issue with this approach is that even if a regular expression exists for the technique a malicious sample is using, a determined attacker can make minor modifications to avoid the regular expression. Figure 5 shows a minor modification to the sequence in Figure 4, which avoids the regex detection.
Figure 5: A minor change (extra carets) to an obfuscated command line that breaks the regular expression in Figure 3
The second approach, which is closer to an ML approach, involves writing complex if-then rules. However, these rules are hard to derive, are complex to verify, and pose a significant maintenance burden as authors evolve to escape detection by such rules. Figure 6 shows one such if-then rule.
Figure 6: An if-then rule that *may* indicate obfuscation (notice how loose this rule is, and how false positives are likely)
A third approach is to combine regular expressions and if-then rules. This greatly complicates the development and maintenance burden, and still suffers from the same weaknesses that make the first two approaches fragile. Figure 7 shows an example of an if-then rule with regular expressions. Clearly, it is easy to appreciate how burdensome it is to generate, test, maintain and determine the efficacy of such rules.
Figure 7: A combination of an if-then rule with regular expressions to detect obfuscation (a real hand-built obfuscation detector would consist of tens or hundreds of rules and still have gaps in its detection)
The ML Approach – Moving Beyond Pattern Matching and Rules
Using ML simplifies the solution to these problems. We will illustrate two ML approaches: a feature-based approach and a feature-less end-to-end approach.
There are some ML techniques that can work with any kind of raw data (provided it is numeric), and neural networks are a prime example. Most other ML algorithms require the modeler to extract pertinent information, called features, from raw data before they are fed into the algorithm. Some examples of this latter type are tree-based algorithms, which we will also look at in this blog (we described the structure and uses of Tree-Based algorithms in a previous blog post, where we used a Gradient-Boosted Tree-Based Model).
ML Basics – Neural Networks
Neural networks are a type of ML algorithm that have recently become very popular and consist of a series of elements called neurons. A neuron is essentially an element that takes a set of inputs, computes a weighted sum of these inputs, and then feeds the sum into a non-linear function. It has been shown that a relatively shallow network of neurons can approximate any continuous mapping between input and output. The specific type of neural network we used for this research is what is called a Convolutional Neural Network (CNN), which was developed primarily for computer vision applications, but has also found success in other domains including natural language processing. One of the main benefits of a neural network is that it can be trained without having to manually engineer features.
Featureless ML
While neural networks can be used with feature data, one of the attractions of this approach is that it can work with raw data (converted into numeric form) without doing any feature design or extraction. The first step in the model is converting text data into numeric form. We used a character-based encoding where each character type was encoded by a real valued number. The value was automatically derived during training and conveys semantic information about the relationships between characters as they apply to cmd.exe syntax.
Feature-Based ML
We also experimented with hand-engineered features and a Gradient Boosted Decision Tree algorithm. The features developed for this model were largely statistical in nature – derived from the presence and frequency of character sets and keywords. For example, the presence of dozens of ‘%’ characters or long, contiguous strings might contribute to detecting potential obfuscation. While any single feature will not perfectly separate the two classes, a combination of features as present in a tree-based model can learn flexible patterns in the data. The expectation is that those patterns are robust and can generalize to future obfuscation variants.
Data and Experiments
To develop our models, we collected non-obfuscated data from tens of thousands of endpoint events and generated obfuscated data using a variety of methods in Invoke-DOSfuscation. We developed our models using roughly 80 percent of the data as training data, and tested them on the remaining 20 percent. We ensured that our train-test split was stratified. For featureless ML (i.e. neural networks), we simply input Unicode code points into the first layer of the CNN model. The first layer converts the code point into semantically meaningful numerical representations (called embeddings) before feeding it into the rest of the neural network.
For the Gradient Boosted Tree method, we generated a number of features from the raw command lines. The following are some of them:
Length of the command line
The number of carets in the command line
The count of pipe symbols
The fraction of white space in the command line
The fraction of special characters
Entropy of the string
The frequency of the strings “cmd” and “power” in the command line
While each of these features individually is a weak signal and could not possibly be a good discriminator on its own, a flexible classifier such as a Gradient Boosted Tree – trained on sufficient data with these features – is able to classify obfuscated and non-obfuscated command lines in spite of the aforementioned difficulties.
Results
Evaluated against our test set, we were able to get nearly identical results from our Gradient Boosted Tree and neural network models.
The results for the GBT model were near perfect with metrics such as F1-score, precision, and recall all being close to 1.0. The CNN model was slightly less accurate.
While we certainly do not expect perfect results in a real-world scenario, these lab results were nonetheless encouraging. Recall that all of our obfuscated examples were generated by one source, namely the Invoke-DOSfuscation tool. While Invoke-DOSfuscation generates a wide variety of obfuscated samples, in the real world we expect to see at least some samples that are quite dissimilar from any that Invoke-DOSfuscation generates. We are currently collecting real world obfuscated command lines to get a more accurate picture of the generalizability of this model on obfuscated samples from actual malicious actors. We expect that command obfuscation, similar to PowerShell obfuscation before it, will continue to emerge in new malware families.
As an additional test we asked Daniel Bohannon (author of Invoke-DOSfuscation, the Windows command line obfuscation tool) to come up with obfuscated samples that in his experience would be difficult for a traditional obfuscation detector. In every case, our ML detector was still able to detect obfuscation. Some examples are shown in Figure 8.
Figure 8: Some examples of obfuscated text used to test and attempt to defeat the ML obfuscation detector (all were correctly identified as obfuscated text)
We also created very cryptic looking texts that, although valid Windows command lines and non-obfuscated, appear slightly obfuscated to a human observer. This was done to test efficacy of the detector with boundary examples. The detector was correctly able to classify the text as non-obfuscated in this case as well. Figure 9 shows one such example.
Figure 9: An example that appears on first glance to be obfuscated, but isn’t really and would likely fool a non-ML solution (however, the ML obfuscation detector currently identifies it as non-obfuscated)
Finally, Figure 10 shows a complicated yet non-obfuscated command line that is correctly classified by our obfuscation detector, but would likely fool a non-ML detector based on statistical features (for example a rule-based detector with a hand-crafted weighing scheme and a threshold, using features such as the proportion of special characters, length of the command line or entropy of the command line).
Figure 10: An example that would likely be misclassified by an ML detector that uses simplistic statistical features; however, our ML obfuscation detector currently identifies it as non-obfuscated
CNN vs. GBT Results
We compared the results of a heavily tuned GBT classifier built using carefully selected features to those of a CNN trained with raw data (featureless ML). While the CNN architecture was not heavily tuned, it is interesting to note that with samples such as those in Figure 10, the GBT classifier confidently predicted non-obfuscated with a score of 19.7 percent (the complement of the measure of the classifier’s confidence in non-obfuscation). Meanwhile, the CNN classifier predicted non-obfuscated with a confidence probability of 50 percent – right at the boundary between obfuscated and non-obfuscated. The number of misclassifications of the CNN model was also more than that of the Gradient Boosted Tree model. Both of these are most likely the result of inadequate tuning of the CNN, and not a fundamental shortcoming of the featureless approach.
Conclusion
In this blog post we described an ML approach to detecting obfuscated Windows command lines, which can be used as a signal to help identify malicious command line usage. Using ML techniques, we demonstrated a highly accurate mechanism for detecting such command lines without resorting to the often inadequate and costly technique of maintaining complex if-then rules and regular expressions. The more comprehensive ML approach is flexible enough to catch new variations in obfuscation, and when gaps are detected, it can usually be handled by adding some well-chosen evader samples to the training set and retraining the model.
This successful application of ML is yet another demonstration of the usefulness of ML in replacing complex manual or programmatic approaches to problems in computer security. In the years to come, we anticipate ML to take an increasingly important role both at FireEye and in the rest of the cyber security industry.
Go to Source – Author: Vikram Hegde – Obfuscated Command Line Detection Using Machine Learning
Text
Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible
We successfully scaled our Windows Docker containers running on one Docker host. But what if we change our focus and see our distributed application as a whole, running on multiple hosts using both Windows and Linux native containers? In this case, a multi-node Docker Orchestration tool like Docker Swarm could be a marvelous option!
Running Spring Boot Apps on Windows – Blog series
Part 1: Running Spring Boot Apps on Windows with Ansible
Part 2: Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell
Part 3: Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose
Part 4: Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible
Lifting our gaze to the application as a whole
We really went far in terms of using native Docker Windows containers to run our apps inside. We built our own Windows Vagrant boxes with Packer, prepared them to run Docker smoothly and provisioned our apps – both fully automated with Ansible. We also scaled our Windows Docker containers using Docker Compose and Spring Cloud Netflix, without leaving behind our fully comprehensible setup and our commitment to have everything as code.
But if you look into real world projects, there are no single nodes anymore – running Docker or not. Applications today consist of a whole bunch of machines – and they naturally mix Linux and Windows. These projects need a solution to handle these distributed applications – ideally not doing everything with completely new tools. But how is this possible?
Why Docker Swarm?
This post is all about the “new” Docker Swarm mode, requiring Docker 1.12.0 as a minimum. But why did I choose this path? Today everything seems to point to Kubernetes: biggest media share, most google searches, most blog posts and so on. But there are a few things to consider before going with Kubernetes.
The first point is simple: a consultant´s real-world project experience. After in-depth discussions, maybe you shifted your project team to Dockerize all the (legacy) applications and finally brought all these containers into production. You should always remember: this is huge! And at least in my experience, not every team member has fully realized at that point which changes were applied to the team’s applications in detail, maybe also leaving some people unsure about “all this new stuff”. And now imagine you want to do the next step with Kubernetes. This means many of those “new” Docker concepts are again thrown over the pile – because Kubernetes brings in a whole bunch of new building blocks, leaving no stone unturned… And every blog post about Kubernetes and every colleague I talk with has to admit at some point that the learning curve with Kubernetes is really steep.
Second point: Many people at conferences propagate the following precedence: They tell you about “Docker 101” with the simplest steps with Docker and then go straight ahead to Kubernetes as the “next logical step”. Well guys, there´s something in between! It should be common sense that learning is ideally done step by step. The next step after Docker 101 is Docker Compose, adding a new level of abstraction. Coming from Compose, it is easy to continue with Docker Swarm – because it is built right into every Docker engine and could be used with Compose as well. It´s just called “Docker Stack” then. 🙂 And if people really do need more features than Swarm provides, then Kubernetes is for sure a good way to go!
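To illustrate how low that barrier is: a Compose file in version 3 format can be handed over to a Swarm almost unchanged. Assuming a docker-compose.yml is lying in the current directory, deploying it as a stack is a one-liner (mystack is just an arbitrary stack name):

docker stack deploy --compose-file docker-compose.yml mystack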
Last point: Right now, a hybrid OS Docker cluster doesn´t really make sense with the released versions of Kubernetes and Windows Server 2016. Yes, Windows support was released with Kubernetes 1.6 (alpha with 1.5 already). But if you dive a bit deeper – and that always involves reading through the Microsoft documentation until you reach the part “current restrictions/limitations” – you´ll find the nasty things. As of now, the Windows network subsystem HNS isn´t really Kubernetes-ready – you have to plumb all the networking stuff (like routing tables) together manually. And one container per pod does not really make sense if you want to leverage the power of Kubernetes! Because the Windows SIG is doing a really great job, these restrictions won´t last much longer, and it is planned to have most of them solved by Kubernetes 1.8 and Windows Server 2016 Build 1709.
So if you want to run hybrid OS Docker clusters, just sit back and start with Docker Swarm. I think we´ll see a hybrid OS Kubernetes setup here on the blog in the near future, if Microsoft and the Windows SIG continue their work. 🙂
Building a multi-machine-ready Windows Vagrant box with Packer
Enough talk, let´s get our hands dirty! The last blog posts about Docker Windows containers already showed that only fully comprehensible setups will be used here. The goal is to leave no stone in your way on the path from zero to a running Docker Swarm at the end of this article. Therefore, the already well-known GitHub repository ansible-windows-docker-springboot was extended with the next step, step4-windows-linux-multimachine-vagrant-docker-swarm-setup.
There are basically two options to achieve a completely comprehensible multi-node setup: running more than one virtual machine on your local machine or using some cloud infrastructure. As I really came to love Vagrant as a tool to handle my virtual machines, why not use it again? And thanks to a hint from a colleague of mine, I found that Vagrant is also able to handle multi-machine setups. This frees us from the need to have access to a certain cloud provider, although the setup can easily be adapted to one of these.
The only thing that would prevent us from using Vagrant would be the lack of a Windows Server 2016 Vagrant box. But luckily this problem was already solved in the second part of this blog post´s series and we could re-use the setup with Packer.io nearly one to one. There´s only a tiny difference in the Vagrantfile template for Packer: We shouldn´t define a port forwarding or a concrete VirtualBox VM name in this base box. Therefore we need a separate Vagrantfile template vagrantfile-windows_2016-multimachine.template, which is smaller than the one used in the second blog post:
Vagrant.configure("2") do |config| config.vm.box = "windows_2016_docker_multi" config.vm.guest = :windows config.windows.halt_timeout = 15 # Configure Vagrant to use WinRM instead of SSH config.vm.communicator = "winrm" # Configure WinRM Connectivity config.winrm.username = "vagrant" config.winrm.password = "vagrant" end
To be able to use a different Vagrantfile template in Packer, I had to refactor the Packer configuration windows_server_2016_docker.json slightly to accept a Vagrantfile template name (via template_url) and Vagrant box output name (box_output_prefix) as parameters. Now we´re able to create another kind of Windows Vagrant box, which we could use in our multi-machine setup.
So let´s go to the command line, clone the mentioned GitHub repository ansible-windows-docker-springboot and run the following Packer command inside the step0-packer-windows-vagrantbox directory (just be sure to have a current Packer version installed):
packer build -var iso_url=14393.0.161119-1705.RS1_REFRESH_SERVER_EVAL_X64FRE_EN-US.ISO -var iso_checksum=70721288bbcdfe3239d8f8c0fae55f1f -var template_url=vagrantfile-windows_2016-multimachine.template -var box_output_prefix=windows_2016_docker_multimachine windows_server_2016_docker.json
This could take some time and you´re encouraged to grab a coffee. It´s finished when there´s a new windows_2016_docker_multimachine_virtualbox.box inside the step0-packer-windows-vagrantbox directory. Let´s finally add the new Windows 2016 Vagrant base box to the local Vagrant installation:
vagrant box add --name windows_2016_multimachine windows_2016_docker_multimachine_virtualbox.box
A multi-machine Windows & Linux mixed OS Vagrant setup for Docker Swarm
Now that we have our Windows Vagrant base box in place, we can move on to the next step: the multi-machine Vagrant setup. Just switch over to the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory and have a look at the Vagrantfile there. Here´s a shortened version where we can see the basic structure with the definition of our local cloud infrastructure:
Vagrant.configure("2") do |config| # One Master / Manager Node with Linux config.vm.define "masterlinux" do |masterlinux| masterlinux.vm.box = "ubuntu/trusty64" ... end # One Worker Node with Linux config.vm.define "workerlinux" do |workerlinux| workerlinux.vm.box = "ubuntu/trusty64" ... end # One Master / Manager Node with Windows Server 2016 config.vm.define "masterwindows" do |masterwindows| masterwindows.vm.box = "windows_2016_multimachine" ... end # One Worker Node with Windows Server 2016 config.vm.define "workerwindows" do |workerwindows| workerwindows.vm.box = "windows_2016_multimachine" ... end end
It defines four machines to show the possible combinations in a hybrid Docker Swarm cluster containing Windows and Linux boxes: Manager/Master and Worker nodes, each both on Windows and on Linux.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo
Within a Vagrant multi-machine setup, you define your separate machines with the config.vm.define keyword. Inside those define blocks we simply configure our individual machine. Let´s have a more detailed look at the workerlinux box:
# One Worker Node with Linux
config.vm.define "workerlinux" do |workerlinux|
  workerlinux.vm.box = "ubuntu/trusty64"
  workerlinux.vm.hostname = "workerlinux01"
  workerlinux.ssh.insert_key = false
  workerlinux.vm.network "forwarded_port", guest: 22, host: 2232, host_ip: "127.0.0.1", id: "ssh"
  workerlinux.vm.network "private_network", ip: "172.16.2.11"

  workerlinux.vm.provider :virtualbox do |virtualbox|
    virtualbox.name = "WorkerLinuxUbuntu"
    virtualbox.gui = true
    virtualbox.memory = 2048
    virtualbox.cpus = 2
    virtualbox.customize ["modifyvm", :id, "--ioapic", "on"]
    virtualbox.customize ["modifyvm", :id, "--vram", "16"]
  end
end
The first configuration statements are usual ones like configuring the Vagrant box to use or the VM´s hostname. But the forwarded port configuration is made explicit because we need to rely on the exact port later in our Ansible scripts. This isn´t possible with Vagrant’s default Port Correction feature. Since you won´t be able to use a port on your host machine more than once, Vagrant would automatically set it to a random value – and we wouldn’t be able to access our boxes later with Ansible.
To define and override the SSH port of a preconfigured Vagrant box, we need to know the id which is used to define it in the base box. Using Linux boxes, this is ssh – and with Windows this is winrm-ssl (which I found slightly un-documented…).
Networking between the Vagrant boxes
The next tricky part is the network configuration between the Vagrant boxes. As they need to talk to each other and also to the host, so-called host-only networking should be the way to go here (there´s a really good overview in this post, German only). Host-only networking is easily established using Vagrant’s Private Networks configuration.
And as we want to access our boxes with a static IP, we leverage the Vagrant configuration around Vagrant private networking. All that´s needed here is a line like this inside every Vagrant box definition of our multi-machine setup:
masterlinux.vm.network "private_network", ip: "172.16.2.10"
The same applies to the Windows boxes. Vagrant will tell VirtualBox to create a new separate network (mostly vboxnet1 or similar), put a second virtual network device into every box and assign it the static IP we configured in our Vagrantfile. That´s pretty much everything, except for Windows Server. 🙂 But we´ll take care of that soon.
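For example, the Windows Manager node gets its static IP in exactly the same way inside its define block – the address matches the masterwindows_ip variable used later in the Ansible playbook initialize-docker-swarm.yml:

masterwindows.vm.network "private_network", ip: "172.16.2.12"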
Ansible access to the Vagrant boxes
Starting with the provisioning of multiple Vagrant boxes, the first approach might be to use Vagrant´s Ansible Provisioner and just have something like the following statement in your Vagrantfile:
config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" end
But remember the purpose of this article: We want to initialize a Docker Swarm later using Ansible. And as this process involves generating and exchanging Join Tokens between the different Vagrant boxes, we need one central Ansible script to share these tokens. If we separated our Ansible scripts into as many as machines as our Cluster has (here: four), we would lose many advantages of Ansible and wouldn´t be able to share the tokens. Additionally, it would be great if we could fire up our entire application with one Ansible command, no matter if it´s distributed over a hybrid Cluster of Windows and Linux machines.
So we want one Ansible playbook that´s able to manage all nodes for us. But there´s a trap: using the same host in multiple groups is possible with Ansible, but all the inventory and group variables will be merged automatically. That is, because Ansible is designed to do that based on the host´s name. So please don´t do the following:
[masterwindows] 127.0.0.1 [masterlinux] 127.0.0.1 [workerwindows] 127.0.0.1
We somehow need to give Ansible a different hostname for our servers, although they are all local and share the same IP. Because a later-stage-based setup wouldn´t have this problem any more, we only need a solution for our local development environment with Vagrant. And there´s a quite simple one: just edit your etc/hosts on MacOS/Linux or Windows\system32\drivers\etc\hosts on Windows and add the following entries:
127.0.0.1 masterlinux01 127.0.0.1 workerlinux01 127.0.0.1 masterwindows01 127.0.0.1 workerwindows01
This is a small step we have to do by hand, but you can also work around it if you want. There are Vagrant plugins like vagrant-hostmanager that allow you to define these hostfile entries based on the config.vm.hostname configuration in your Vagrantfile. But this will require you to input your admin password every time you run vagrant up, which is also quite manually. Another alternative would have been to use the static IPs we configured in our host-only network. But it is really nice to see those aliases like masterlinux01 or workerwindows01 later beeing provisioned in the Ansible playbooks runs – you always know what machine is currently in action 🙂
Now we´re where we wanted to be: We have a Vagrant multi-machine setup in place that fires up a mixed OS cluster with a simple command. All we have to do is to run a well-known vagrant up:
Just be sure to have at least 8 GB of RAM to spare because every box has 2048 MB configured. You could also tweak that configuration in the Vagrantfile – but don´t go too low 🙂 And never mind, if you want to have a break or your notebook is running hot – just type vagrant halt. And the whole zoo of machines will be stopped for you.
Provisioning Windows & Linux machines inside one Ansible playbook
Now let´s hand over the baton to Ansible. But as you may have already guessed: The tricky part is to configure Ansible in a way that enables it to provision both Windows and Linux machines inside one playbook. As we already found out, Ansible is not only able to provision Linux machines, but also doesn´t shrink back from Windows boxes.
But handling Windows and Linux inside the same playbook requires a configuration option that enables Ansible to access Linux machines via SSH and Windows machines via WinRM. The key configuration parameter here is ansible_connection. Handling both operating systems with Ansible at the same time isn´t really well documented – but it´s possible. Let´s have a look at how this blog post´s setup handles this challenge, beginning with the hostsfile:
[masterwindows]
masterwindows01

[masterlinux]
masterlinux01

[workerwindows]
workerwindows01

[workerlinux]
workerlinux01

[linux:children]
masterlinux
workerlinux

[windows:children]
masterwindows
workerwindows
The first four definitions simply map our Vagrant box machine names (which we defined inside our hosts file) to the four possible categories in a Windows/Linux mixed OS environment. As already said, these are Manager/Master nodes (masterwindows and masterlinux) and Worker nodes (workerwindows and workerlinux), both for Windows and Linux. The last two entries bring Ansible´s “group of groups” feature into the game. As all the machines in the groups masterlinux and workerlinux are based on Linux, we use the :children suffix to configure them as members of the supergroup linux. The same procedure applies to the windows group of groups.
This gives us the following group variables structure:
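In the original post this structure is shown as a screenshot of the group_vars folder. A plausible reconstruction – the per-group port files besides the workerwindows.yml referenced below are assumptions – looks like this:

group_vars/
├── all.yml
├── linux.yml
├── windows.yml
├── masterlinux.yml
├── masterwindows.yml
├── workerlinux.yml
└── workerwindows.yml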
The all.yml contains configuration that should be applied to all machines in our cluster, regardless of whether they are Windows or Linux boxes. And as the user and password are always the same with Vagrant boxes, we configure them there:
ansible_user: vagrant
ansible_password: vagrant
In the windows.yml and linux.yml we finally use the mentioned ansible_connection configuration option to distinguish between both connection types. The linux.yml is simple:
ansible_connection: ssh
Besides the needed protocol definition through ansible_connection, the windows.yml adds a second configuration option for the WinRM connection to handle self-signed certificates:
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
The last thing to configure so that Ansible is able to access our Vagrant boxes is the correct port configuration. Let´s have a look into workerwindows.yml:
ansible_port: 55996
We need this configuration for every machine in the cluster. To be 100 % sure which port Vagrant forwards to SSH or WinRM on a specific machine, we configure it explicitly inside the Vagrantfile. As already mentioned in the paragraph A Multi-machine Windows- & Linux- mixed OS Vagrant setup for Docker Swarm above, this is done through a forwarded_port configuration (always remember to use the correct configuration option id: "ssh" (Linux) or id: "winrm-ssl" (Windows)):
workerwindows.vm.network "forwarded_port", guest: 5986, host: 55996, host_ip: "127.0.0.1", id: "winrm-ssl"
With this configuration, we´re finally able to access both Windows and Linux boxes within one Ansible playbook. Let´s try this! Just be sure to have fired up all the machines in the cluster via vagrant up. To try the Ansible connectivity e.g. to the Windows Worker node, run the following:
ansible workerwindows -i hostsfile -m win_ping
Testing the Ansible connectivity to a Linux node, e.g. the Linux Manager node, is nearly as easy:
ansible masterlinux -i hostsfile -m ping
Only on the first run do you need to wrap the command with setting and unsetting an environment variable that allows Ansible to add the new Linux host to its known hosts. So on the first run, instead of firing up just one command, execute these three (as recommended here):
export ANSIBLE_HOST_KEY_CHECKING=False
ansible masterlinux -i hostsfile -m ping
unset ANSIBLE_HOST_KEY_CHECKING
If you don´t want to hassle with generating keys, you may want to install sshpass (e.g. via brew install http://ift.tt/23yg1Lz on a Mac, as there´s no brew install sshpass). In this case, you should also set and unset the environment variable as described.
And voilà: We now have Ansible configured in a way that we can control and provision our cluster with only one playbook.
logo sources: Windows icon, Linux logo, Packer logo, Vagrant logo, VirtualBox logo, Ansible logo
Prepare Docker engines on all nodes
Ansible is now able to connect to every box of our multi-machine Vagrant setup. There are roughly two steps left: First we need to install and configure Docker on all nodes, so that we can initialize our Docker Swarm in a second step.
Therefore, the example project´s GitHub repository has two main Ansible playbooks prepare-docker-nodes.yml and initialize-docker-swarm.yml. The first one does all the groundwork needed to be able to initialize a Docker Swarm successfully afterwards, which is done in the second one. So let´s have a more detailed look at what´s going on inside these two scripts!
As Ansible empowers us to abstract from the gory details, we should be able to understand what´s going on inside the prepare-docker-nodes.yml:
- hosts: all
  tasks:
    - name: Checking Ansible connectivity to Windows nodes
      win_ping:
      when: inventory_hostname in groups['windows']

    - name: Checking Ansible connectivity to Linux nodes
      ping:
      when: inventory_hostname in groups['linux']

    - name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
      win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
      when: inventory_hostname in groups['windows']

    - name: Prepare Docker on Windows nodes
      include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"

    - name: Prepare Docker on Linux nodes
      include: prepare-docker-linux.yml host=linux

    - name: Allow local http Docker registry
      include: allow-http-docker-registry.yml
This blog post always tries to outline a fully comprehensible setup. So if you want to give it a try, just run the playbook inside the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory:
ansible-playbook -i hostsfile prepare-docker-nodes.yml
While the complete playbook is executing, let´s dive into its structure. The first line is already quite interesting: with the hosts: all configuration we tell Ansible to use all configured hosts at the same time. This means the script will be executed against masterlinux01, masterwindows01, workerlinux01 and workerwindows01 in parallel. The following two tasks represent an Ansible best practice: always check the connectivity to all machines at the beginning – and stop the provisioning if one of them isn´t reachable.
As the Ansible modules for Linux and Windows are separated by design and not compatible with each other, we always need to know on what kind of servers we want to execute our scripts. We can use Ansible conditionals with the when statement for that. The conditional
when: inventory_hostname in groups['linux']
ensures that the present Ansible module is only executed on machines that are listed in the group linux. And as we defined masterlinux and workerlinux as child groups of linux, only the hosts masterlinux01 and workerlinux01 are used here; masterwindows01 and workerwindows01 are skipped. Obviously the opposite is true when we use the following conditional:
when: inventory_hostname in groups['windows']
The next task is exclusive to Windows Server 2016. Because we want our Vagrant boxes to be able to reach each other, we have to allow the very basic command everybody starts with: ping. It is blocked by the Windows firewall by default, so we have to allow it with the following command (run via win_shell):
- name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
  win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
  when: inventory_hostname in groups['windows']
The following tasks finally install Docker on all of our nodes. Luckily we can rely on work that has already been done. The post Running Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Packer, Vagrant & Powershell elaborates in depth on how to prepare Docker on Windows. The only thing we have to do here is to re-use that Ansible script with host=windows appended:
- name: Prepare Docker on Windows nodes
  include: "../step1-prepare-docker-windows/prepare-docker-windows.yml host=windows"
The Linux counterpart is a straightforward Ansible implementation of the official “Get Docker CE for Ubuntu” Guide. The called prepare-docker-linux.yml is included from the main playbook with the host=linux setting:
- name: Prepare Docker on Linux nodes
  include: prepare-docker-linux.yml host=linux
If you want to use a different Linux distribution, just add the appropriate statements inside prepare-docker-linux.yml or search for an appropriate role to use on Ansible Galaxy.
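The prepare-docker-linux.yml itself isn´t reproduced in this post. A minimal sketch along the lines of the official “Get Docker CE for Ubuntu” guide could look like the following – the task names and the exact package list are assumptions, not necessarily the repository´s actual content:

# Sketch of a prepare-docker-linux.yml following the official "Get Docker CE for Ubuntu" guide.
# Task names and package list are assumptions for illustration purposes.
- hosts: "{{ host }}"
  tasks:
    - name: Install packages needed to use the Docker apt repository over HTTPS
      apt:
        name: ['apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common']
        state: present
        update_cache: yes
      become: yes

    - name: Add Docker's official GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
      become: yes

    - name: Add the stable Docker CE apt repository
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present
      become: yes

    - name: Install Docker CE
      apt:
        name: docker-ce
        state: present
        update_cache: yes
      become: yes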
Allowing http-based local Docker Registries
The last task in the prepare-docker-nodes.yml playbook may seem rather surprising. The reason for it is simple: We can´t follow our old approach of building our Docker images on a single Docker host any more, because that would force us to build each image on every one of our cluster´s nodes again and again, which would create a lot of overhead. A different approach is needed here. With the help of a local Docker registry, we only need to build an image once and push it to the registry. Then the image is ready to run on all of our nodes.
How to run a Docker registry will be covered in a later step, but we have to take care of some groundwork here already. The simplest possible solution is to start with a plain http registry, which shouldn´t be a big security risk inside our isolated environment and also in many on-premises installations. Just be sure to update to https with TLS certificates if you´re going into the Cloud or if you want to provide your registry to other users outside the Docker Swarm.
Every Docker engine has to be configured to allow interaction with a plain http registry. Therefore we have to place a daemon.json file into the appropriate folder on each node, containing the following entry:
{ "insecure-registries" : ["172.16.2.10:5000"] }
As we want to run our Docker Swarm local registry on the Linux Manager node, we configure its IP address 172.16.2.10 here. Remember this address was itself configured inside the Vagrantfile.
But since we´re using Ansible, this step is also fully automated inside the included playbook allow-http-docker-registry.yml – including the correct daemon.json paths:
- name: Template daemon.json to /etc/docker/daemon.json on Linux nodes for later Registry access
  template:
    src: "templates/daemon.j2"
    dest: "/etc/docker/daemon.json"
  become: yes
  when: inventory_hostname in groups['linux']

- name: Template daemon.json to C:\ProgramData\docker\config\daemon.json on Windows nodes for later Registry access
  win_template:
    src: "templates/daemon.j2"
    dest: "C:\\ProgramData\\docker\\config\\daemon.json"
  when: inventory_hostname in groups['windows']
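The templates/daemon.j2 itself isn´t shown here either. Since the registry address is already known, a plausible version is simply the daemon.json content from above used as a static template (treating the IP as hard-coded is an assumption – the repository might just as well fill it in from a variable):

{
  "insecure-registries" : ["172.16.2.10:5000"]
}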
After that last step we now have every node ready with a running Docker engine and are finally able to initialize our Swarm.
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker logo
Initializing a Docker Swarm
Wow, this was quite a journey until we finally got where we wanted to be in the first place. Since Docker is prepared on all nodes, we can continue with the second of the two main playbooks in the example project´s GitHub repository. The playbook initialize-docker-swarm.yml contains everything that´s needed to initialize a fully functional Docker Swarm. So let´s have a look at how this is done:
- hosts: all
  vars:
    masterwindows_ip: 172.16.2.12

  tasks:
    - name: Checking Ansible connectivity to Windows nodes
      win_ping:
      when: inventory_hostname in groups['windows']

    - name: Checking Ansible connectivity to Linux nodes
      ping:
      when: inventory_hostname in groups['linux']

    - name: Open Ports in firewalls needed for Docker Swarm
      include: prepare-firewalls-for-swarm.yml

    - name: Initialize Swarm and join all Swarm nodes
      include: initialize-swarm-and-join-all-nodes.yml

    - name: Label underlying operation system to each node
      include: label-os-specific-nodes.yml

    - name: Run Portainer as Docker and Docker Swarm Visualizer
      include: run-portainer.yml

    - name: Run Docker Swarm local Registry
      include: run-swarm-registry.yml

    - name: Display the current Docker Swarm status
      include: display-swarm-status.yml
Before we go into any more details, let´s run this playbook also:
ansible-playbook -i hostsfile initialize-docker-swarm.yml
We´ll return to our fully initialized and running Docker Swarm cluster after we have had a look into the details of this playbook. 🙂 The first two tasks are already familiar to us. Remember that connectivity checks should always be the first thing to do. After these checks, the prepare-firewalls-for-swarm.yml playbook opens up the ports essential for the later running Swarm. This part is mentioned pretty much at the end of the Docker docs if you read them through. There are basically three firewall configurations needed. TCP port 2377 is needed to allow all Docker Swarm nodes to connect to the Windows Manager node, where we will initialize our Swarm later on. Therefore we use the conditional when: inventory_hostname in groups['masterwindows'], which means that this port is only opened up on the Windows Manager node. The following two configurations are mentioned in the docs:
“[…] you need to have the following ports open between the swarm nodes before you enable swarm mode.”
So we need to take care of this even before initializing our Swarm! These are TCP/UDP port 7946 for Docker Swarm container network discovery and UDP port 4789 for Docker Swarm overlay network traffic.
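The prepare-firewalls-for-swarm.yml isn´t listed in this post. A minimal sketch of the three firewall configurations could look like the following – the task names, the use of the ufw module on the Ubuntu boxes and the exact netsh rules on Windows are assumptions about how the repository implements it:

# Sketch of a prepare-firewalls-for-swarm.yml opening the ports mentioned above.
# ufw/netsh usage and task names are assumptions for illustration purposes.
- name: Open Swarm management port 2377/tcp on the Windows Manager node only
  win_shell: "netsh advfirewall firewall add rule name='Docker Swarm management 2377/tcp' dir=in action=allow protocol=TCP localport=2377"
  when: inventory_hostname in groups['masterwindows']

- name: Open node communication (7946) and overlay network (4789) ports on Linux nodes
  ufw:
    rule: allow
    port: "{{ item.port }}"
    proto: "{{ item.proto }}"
  with_items:
    - { port: '7946', proto: 'tcp' }
    - { port: '7946', proto: 'udp' }
    - { port: '4789', proto: 'udp' }
  become: yes
  when: inventory_hostname in groups['linux']

- name: Open node communication (7946) and overlay network (4789) ports on Windows nodes
  win_shell: "netsh advfirewall firewall add rule name='Docker Swarm {{ item.port }}/{{ item.proto }}' dir=in action=allow protocol={{ item.proto | upper }} localport={{ item.port }}"
  with_items:
    - { port: '7946', proto: 'tcp' }
    - { port: '7946', proto: 'udp' }
    - { port: '4789', proto: 'udp' }
  when: inventory_hostname in groups['windows']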
Join the Force… erm, Swarm!
The following task of our main initialize-docker-swarm.yml includes the initialize-swarm-and-join-all-nodes.yml playbook and does the heavy work needed to initialize a Docker Swarm with Ansible. Let´s go through all the steps here in detail:
- name: Leave Swarm on Windows master node, if there was a cluster before
  win_shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Initialize Docker Swarm cluster on Windows master node
  win_shell: "docker swarm init --advertise-addr={{ masterwindows_ip }} --listen-addr {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Pause a few seconds after new Swarm cluster initialization to prevent later errors on obtaining tokens to early
  pause:
    seconds: 5
...
If you´re a frequent reader of this blog post series, you´re already aware that Ansible playbooks often contain steps that are irrelevant for the first execution. Leaving the Swarm in the first step is such a case – if you run the playbook a second time, you will know what it is all about. It´s not a problem that this step fails on the first execution; the ignore_errors: yes configuration takes care of that.
The magic follows inside the next step. It runs the needed command to initialize a leading Docker Swarm Manager node, which we chose our Windows Manager node for. Both advertise-addr and listen-addr have to be set to the Windows Manager node in this case. As the initialization process of a Swarm takes some time, this step is followed by a pause module. We just give our Swarm some seconds in order to get itself together.
The reason for this pause lies in the following two steps, which obtain the Join Tokens needed later (these steps occasionally fail if you run them right after the docker swarm init step). The commands to get these tokens are docker swarm join-token worker -q for Worker nodes or docker swarm join-token manager -q for Manager nodes.
...
- name: Obtain worker join-token from Windows master node
  win_shell: "docker swarm join-token worker -q"
  register: worker_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Obtain manager join-token from Windows master node
  win_shell: "docker swarm join-token manager -q"
  register: manager_token_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Syncing the worker and manager join-token results to the other hosts
  set_fact:
    worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
    manager_token_result_host_sync: "{{ hostvars['masterwindows01']['manager_token_result'] }}"

- name: Extracting and saving worker and manager join-tokens in variables for joining other nodes later
  set_fact:
    worker_jointoken: "{{ worker_token_result_host_sync.stdout.splitlines()[0] }}"
    manager_jointoken: "{{ manager_token_result_host_sync.stdout.splitlines()[0] }}"

- name: Join-tokens...
  debug:
    msg:
      - "The worker join-token is: '{{ worker_jointoken }}'"
      - "The manager join-token is: '{{ manager_jointoken }}'"
...
As both steps are scoped via the conditional when: inventory_hostname == "masterwindows01" to run only on the host masterwindows01, their registered results are not directly available on the other hosts. But since the other hosts need the tokens to be able to join the Swarm, we have to “synchronize” them with the help of the set_fact Ansible module and variables that are assigned the Join Tokens. To access the tokens registered on masterwindows01, we grab them with the following trick:
worker_token_result_host_sync: "{{ hostvars['masterwindows01']['worker_token_result'] }}"
The hostvars['masterwindows01'] statement gives us access to the variables of masterwindows01, and the trailing ['worker_token_result'] points to the registered result of the docker swarm join-token command. Inside the following set_fact module, the only value we need is then extracted with worker_token_result_host_sync.stdout.splitlines()[0]. Looking at the console output, the debug module prints all the extracted tokens for us.
Now we´re able to join all the other nodes to our Swarm – each join is again preceded by leaving a possibly existing Swarm, which is not relevant for the first execution of the playbook. To join a Worker to the Swarm, a docker swarm join --token command with the worker join-token has to be executed; to join a new Manager, the same command with the manager join-token is needed.
...
- name: Leave Swarm on Windows worker nodes, if there was a cluster before
  win_shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Add Windows worker nodes to Docker Swarm cluster
  win_shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']

- name: Leave Swarm on Linux worker nodes, if there was a cluster before
  shell: "docker swarm leave"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Add Linux worker nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Leave Swarm on Linux manager nodes, if there was a cluster before
  shell: "docker swarm leave --force"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']

- name: Add Linux manager nodes to Docker Swarm cluster
  shell: "docker swarm join --token {{ manager_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['masterlinux']
...
At this point we have managed to initialize a fully functional Docker Swarm already!
logo sources: Windows icon, Linux logo, Vagrant logo, VirtualBox logo, Ansible logo, Docker & Docker Swarm logo
Congratulations! 🙂 But why do we need a few more steps?
Visualize the Swarm with Portainer
It´s always good to know what´s going on inside our Swarm! We are already able to obtain all the information with the help of Docker Swarm’s CLI, e.g. through docker service ls or docker service ps [yourServiceNameHere]. But it won’t hurt to also have a visual equivalent in place.
Docker´s own Swarm visualizer doesn´t look that neat compared to another tool called Portainer. There´s a good comparison available on stackshare if you´re interested. To me, Portainer seems to be the right choice when it comes to Docker and Docker Swarm visualization. And as soon as I read the following quote, I needed to get my hands on it:
“[Portainer] can be deployed as Linux container or a Windows native container.”
The Portainer configuration is already included in this setup here. The run-portainer.yml does all that´s needed:
- name: Create directory for later volume mount into Portainer service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/portainer
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Portainer Docker and Docker Swarm Visualizer on Linux Manager node as Swarm service
  shell: "docker service create --name portainer --publish 9000:9000 --constraint 'node.role == manager' --constraint 'node.labels.os==linux' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --mount type=bind,src=/mnt/portainer,dst=/data portainer/portainer:latest -H unix:///var/run/docker.sock"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
This will deploy a Portainer instance onto our Linux Manager node and connect it directly to the Swarm. For more details, see the Portainer docs. But there´s one thing that could lead to frustration: use a current browser to access the Portainer UI from inside your Windows boxes! It doesn´t work in the pre-installed Internet Explorer! Just head to http://172.16.2.10:9000 if you want to access Portainer from within the cluster.
But as we have the port forwarding configuration masterlinux.vm.network "forwarded_port", guest: 9000, host: 49000, host_ip: "127.0.0.1", id: "portainer" inside our Vagrantfile, we can also access the Portainer UI from our Vagrant host by simply pointing our browser to http://localhost:49000/:
Run a local Registry as Docker Swarm service
As already stated in the paragraph “Allowing http-based local Docker Registries”, we configured every Docker engine on every Swarm node to access http-based Docker registries. Although a local registry is only relevant for later application deployments, it´s something like a basic step when it comes to initializing a Docker Swarm cluster. So let´s start our Docker Swarm Registry service as mentioned in the docs. There were some errors in those docs that should be fixed by now (http://ift.tt/2fvdJgp, http://ift.tt/2fvk77s & http://ift.tt/2fv6Xan). Everything needed is done inside the run-swarm-registry.yml:
- name: Specify to run Docker Registry on Linux Manager node
  shell: "docker node update --label-add registry=true masterlinux01"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"

- name: Create directory for later volume mount into the Docker Registry service on Linux Manager node if it doesn´t exist
  file:
    path: /mnt/registry
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']
  sudo: yes

- name: Run Docker Registry on Linux Manager node as Swarm service
  shell: "docker service create --name swarm-registry --constraint 'node.labels.registry==true' --mount type=bind,src=/mnt/registry,dst=/var/lib/registry -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 -p 5000:5000 --replicas 1 registry:2"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"
As we want to run our registry on the Linux Manager node, we first need to set a label on that node. This is done with the docker node update --label-add command. Then we create a mount point inside the Linux Manager Docker host for later usage in the registry Docker container. The last step is the crucial one: it creates a Docker Swarm service running our local registry, configured to listen on port 5000 and to run only on the Linux Manager node with the help of --constraint 'node.labels.registry==true'.
If you manually check the Swarm´s services after this command, you´ll notice a running Swarm service called swarm-registry. Or we could simply go to our Portainer UI on http://localhost:49000/ and have a look:
The Swarm is ready for action!
We´ve reached the last step in our playbook initialize-docker-swarm.yml. The last task includes the playbook display-swarm-status.yml, which doesn´t really do anything on our machines – but outputs the current Swarm status to the console that executes our playbook:
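The playbook behind this status output isn´t shown in the excerpt above (the console output itself appeared as a screenshot in the original post). A minimal sketch, assuming it simply runs docker node ls on the Windows Manager node and prints the result, could look like this:

# Sketch of a display-swarm-status.yml -- assumes the status is read from the Windows Manager node.
- name: Get the current Docker Swarm status from the Windows Manager node
  win_shell: "docker node ls"
  register: swarm_status_result
  ignore_errors: yes
  when: inventory_hostname == "masterwindows01"

- name: Display the current Docker Swarm status
  debug:
    msg: "{{ swarm_status_result.stdout_lines }}"
  when: inventory_hostname == "masterwindows01"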
This means that our Docker Swarm cluster is ready for the deployment of our applications! Wow, this was quite a journey in this post, but I think we´ve already reached a lot of our goals. Again, we have a completely comprehensible setup in place. Messing something up on the way to our running Swarm is not a problem any more – just delete everything and start fresh! As we use the “Infrastructure as code” paradigm here, everything is automated and 100 % transparent. Just have a look into the GitHub repository or the command line output. So no “I have played around with Swarm and everything worked out on my machine” speech. This setup works. And if not, fix it with a pull request. 🙂 I can´t emphasize this enough in the context of the fast-moving development of Docker Windows Containers right now.
This brings us to the second goal: We have a fully functional mixed-OS hybrid Docker Swarm cluster in place which provides the basis for every application we might need to run inside Docker containers – be it native Windows or native Linux. And by leveraging the power of Vagrant´s multi-machine setups, everything can be executed locally on your laptop while at the same time staying open to any cloud solution out there. So this setup provides us with a local environment that is as close to staging or even production as possible.
So what´s left? We haven´t deployed an application so far! We for sure want to deploy a lot of microservices to our Docker Swarm cluster and let them automatically be scaled out. We also need to know how we can access applications running inside the Swarm and how we can do things like rolling updates without generating any downtime to our consumers. There are many things left to talk about, but maybe there will also be a second part to this blog post.
The post Taming the Hybrid Swarm: Initializing a mixed OS Docker Swarm Cluster running Windows & Linux native Containers with Vagrant & Ansible appeared first on codecentric AG Blog.