#many have pointed this out but WHY is it windows subsystem for linux and not... linux subsystem for windows
Explore tagged Tumblr posts
exeggcute · 2 years ago
Text
how about my old setup that was a mac running a windows virtual machine running WSL
the command line feels so personal... like how some people say when they pray they're talking directly to god but instead I'm talking directly to the computer
4K notes · View notes
racaletka1979-blog · 6 years ago
Text
I got the NYX in "Stockholm" which was the color I thought I wanted the least since it looked like a pale nude beige online. The tube didn't look great either, but I tried it on and it's a surprisingly wearable corally nude. Before reading your comments I probably would have opted for the Ibiza if given the choice because I do like brighter lip colors sometimes, but after hearing it's a neon clown shade I'm glad I got something that's at least wearable.
You're missing my point. I didn't say SNL is intellectual comedy. I said the two shows have different DNA. There are people on the app that take it really seriously and make fools out of themselves. The rest of what I see is 13-18 year old boys making fun of the people that take it so seriously. It's a really weird concept to explain, but I've gotten so into it in an attempt to relate with my 14 year old brother's generation.
After that I would look at the other chunks floating by and noticed there was so much sealife just hanging around the seaweed. I even saw a tiny black drum that was maybe a centimeter long. Then I understood why the seagulls crowd the shore when seaweed washes up.
Maybe giving a large persistent posse a fort? The first 6 months of GTA Online had far less going for it. Online has a lot of potential. Looking forward to seeing what R* has planned for it.
Thank you so much. I am using MyFitnessPal to keep track of everything. I am also struggling a little because I had stomach surgery 4 years ago and the amount of food I can eat, especially fat and protein, is less than the average person.
Fairly self explanatory, but for more detail, check out our Posting Guidelines. When directing someone to the sidebar, provide specific links. For someone asking about Allergies or Beginner Guides, link them to the specific subsections within AB University, rather than saying it's in the sidebar. Finally, a reminder that no one is obligated to answer your questions.
2) If you notice your urge to pick happens at certain times or in certain situations, help yourself by limiting your exposure to those risky scenarios. My weakest time is when I'm getting ready for bed, so I try to do my routine as quickly as possible. That way I don't catch myself staring at my pores and closed comedones and suddenly become overwhelmed with the need to pick, and I avoid mirrors whenever I can help it.
However, if you have broken pores or oily skin, avoid using oils as they may worsen the skin. Afterwards, using a soft cloth is important to clean the skin properly, as cotton balls are not sufficient. The second step involves using a cleanser that contains glycolic acid; this will give the skin gentle exfoliation.
Everything you want to do, you can do on Windows in some form. If you intend to program a lot, I do recommend dual booting into Linux or looking into the Windows Subsystem for Linux. While many IDEs and especially the Microsoft stack are supported perfectly well on Windows, POSIX-compliant environments are usually preferred by developers.
I answered. You questioned my answer, so I elaborated. The only concept of intuitive eating that makes any sense in my experience is when you crave something healthy. Like, if you suddenly crave seafood when it's not normally your favorite, you might need vitamin D or omega-3s. That said, 80% of the time cravings are my animal brain telling me how good a bacon cheeseburger would be, or that pizza would be the best right now, or that a chocolate croissant would be just amazing.
1 note · View note
globebusinesscenter · 4 years ago
Text
How to root Android phones and tablets and unroot them
Android rooting is the perfect way to gain more control over your smartphone, and open up a world of unknown, yet important, possibilities. 
Root Android phones and tablets and unroot them
Rooting isn't without its risks - and if something goes wrong, it could void the warranty, leave you with a broken smartphone or tablet, or worse.
Before continuing, it is important to understand that rooting is not always a straightforward process, and you may experience hiccups along the way. If you decide that you should root your Android device, then continue below, but know that this is not for the faint of heart or the technically inexperienced.
Manufacturers and carriers will discourage you from rooting, and they aren't just scaremongering. If you don't follow the instructions properly, the worst-case scenario is damaging your device irreparably, but many people find the potential benefits worth it. With a rooted phone, you can remove bloatware, speed up your processor, and customize every element of your phone's software appearance.
This guide walks you through the steps to root your Android device. Some phones can be rooted within minutes, while others will require more research. But one thing is clear: rooting your phone is one of the best ways to harness the true potential of your Android device.
What is rooting?
Rooting an Android phone or tablet is like jailbreaking an iPhone - it essentially lets you dive deeper into the phone's subsystem. Once the rooting process is complete, you have access to the entire operating system to customize almost everything on your Android device, and you can bypass any restrictions that your manufacturer or carrier might have imposed.
Rooting is best done with caution. You must back up your phone's software before installing - or "flashing," in rooting terms - a custom ROM (a modified version of Android).
Why should you root?
One of the biggest incentives for rooting your Android phone is that it allows you to remove bloatware that cannot be uninstalled otherwise (although you can sometimes disable it - see our guide on disabling bloatware). On some devices, rooting will enable settings that were previously disabled, such as wireless tethering. Additional benefits include the ability to install specialized tools and flash custom ROMs, each of which can add extra features and improve the performance of your phone or tablet.
There aren't many must-have root apps, but there are enough to make rooting worthwhile. Some apps will let you automatically back up all of your apps and data to the cloud, block web and in-app ads, create secure tunnels to the internet, overclock your processor, or turn your device into a wireless hotspot. Take a look at the best apps for rooted devices to get a better idea of what is possible.
Why shouldn’t you root?
There are essentially four main drawbacks to rooting your Android device.
Voiding Your Warranty: Some manufacturers or carriers will void your warranty if you root your device, so it should be borne in mind that you can always unroot. If you need to return the device for repair, all you need to do is flash the software backup you made and it will be like new.
Bricking your phone: If something goes wrong during the rooting process, you risk bricking - i.e., corrupting - your device. The easiest way to prevent that from happening is to follow the instructions carefully. Make sure that the guide you are following is up to date and that the custom ROM you are flashing is specifically for your phone. If you do your research, you won't have to worry about bricking your smartphone.
Security risks: Rooting introduces some security risks. Depending on the services or apps you use on your device, it could create a security hole, and certain malware takes advantage of rooted status to steal data, install additional malware, or target other devices with harmful web traffic.
Disabled apps: A few security-conscious apps and services don't work on rooted devices - financial platforms like Google Pay and Barclays Mobile Banking don't support them. Popular apps for copy-protected TV shows and movies, like Sky Go and Virgin TV Anywhere, won't start on rooted devices either - and neither will Netflix.
How to prepare your Android device for rooting
One of the easiest ways to root an Android device is through an app, and several rooting apps have received attention over the years - Framaroot, Firmware.mobi, Kingo Root, BaiduRoot, One Click Root, SuperSU, and Root Master are among the most reliable. 
Typically, these services will root your device in the time you spend brushing your teeth. But some of them only support devices running older versions of Android, so you might need to do some research to find an app that works with your device. If you are looking to root an older device, you may need to check out Firmware.mobi.
Rooting versions of Android from 7.0 Nougat onward was initially more difficult. The Verified Boot service checks the integrity of the device's cryptographic signatures to detect whether your device's system files have been tampered with, which blocked legitimate rooting apps. Thankfully, rooting apps have since caught up, and rooting newer versions of Android is now much easier than it used to be.
If your phone is not compatible with the one-click rooting app, then you need to spend some time looking for alternatives in Android forums. A great place to start is the XDA Developers Forum. Look for a thread on your phone or tablet and you'll likely find a way.
Preparing for rooting
Back up anything you can't live without before you start. You should always back up your current ROM to your phone before flashing a new one. You'll also need to make sure your device is fully charged before you begin.
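If you already have ADB working (the platform tools are covered below), one quick, hedged way to pull a backup from the command line is shown here - the filename is just an example, and adb backup is deprecated on newer Android versions, so treat it as a convenience rather than a complete safety net:
adb backup -apk -shared -all -f full-backup.ab
The phone will show an on-screen confirmation before anything is written, and the resulting .ab file stays on your computer.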
You will need to turn on USB Debugging and OEM Unlocking. Do this by opening Settings on your device. If you do not see Developer Options toward the bottom of the Settings screen, follow these steps to activate it.
Tap on About Phone and find the Build Number. The exact path depends on your phone, but it’ll usually be found with other software information.
Tap on the Build Number seven times, and the Developer Options will appear on the Settings main page. You may need to confirm your security passcode to enable this.
Tap on the Back key to see your new developer options.
Tap Developer Options.
Check to enable USB Debugging.
Check to enable OEM Unlocking.
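With USB Debugging and OEM Unlocking turned on, it's worth a quick sanity check that your computer actually sees the phone once the platform tools (next section) are installed. A minimal check looks like this; the first time you run it, accept the authorization prompt that pops up on the phone:
adb devices
Your device should appear in the list as "device" rather than "unauthorized".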
Installing the Android SDK Platform Tools
Previously, rooting involved downloading the entire Android SDK from Google. Thankfully, this is no longer the case, and all you need is the Android SDK Platform Tools.
Download and install the Android SDK Platform Tools from Google's developer site. There are options for Windows, Mac, and Linux systems; these instructions are for Windows devices. Extract the zip file. When asked to select the directory where you want to install the tools, we recommend setting it to C:\android-sdk. If you choose a different location, be sure to remember it.
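To confirm the tools are usable, a quick check from the folder you extracted them to (assuming C:\android-sdk\platform-tools here) would be:
cd C:\android-sdk\platform-tools
adb version
fastboot --version
Both commands should print a version number; if they don't, the extraction path or your PATH setting is the first thing to recheck.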
Installing device drivers
To ensure that your computer can properly communicate with your smartphone or tablet, you will need to install the correct USB driver.
Devices from some manufacturers come with drivers included in the phone software, so all you need to do to install the correct USB driver is to connect your phone to your computer with a USB cable. OnePlus is an example, but it's worth connecting your phone first to see if the USB drivers will be installed automatically.
Other than that, here is a list of the most popular manufacturers' drivers:
Asus
Acer
Alcatel
Coolpad
Google / Nexus / Pixel
HTC
Huawei / Honor
Lenovo / Motorola
LG
Samsung
Sony
Xiaomi
Follow the installer’s instructions. Once the drivers are installed, proceed to the next step.
Unlock your bootloader
Before you begin, you need to unlock your device's bootloader. The bootloader, in simple terms, is the program that loads the device's operating system. It determines which software runs during the boot process of your phone or tablet.
Some manufacturers require you to have a key to unlock the bootloader. Motorola, HTC, LG, and Sony provide step-by-step instructions on how to do this, but a word of caution: it requires you to sign up for a developer account.
Unfortunately for Huawei and Honor device owners, it is no longer possible to unlock the bootloaders on these phones. Huawei revoked the ability to request unlock codes in July 2018. If you still want to root a Huawei or Honor device, you need to use a third-party service like DC-Unlocker.
Once you follow these steps, you can start the unlocking process. You will need to put your device in fast boot mode. It's different for each phone, but on most devices, restarting the device and holding the Power and Volume Down buttons for 10 seconds does the trick (HTC phones require you to press the Volume Down button and press the Power button to select it).
Once Fastboot starts up, head to the folder where you previously unzipped the Android SDK files. Next, open a command prompt on your computer by Shift + right-clicking inside that folder and choosing Open Command Prompt here. If your device requires an unlock code, you'll get a long string of characters. Paste it into the box on the manufacturer's website for your device, then submit the form and wait for an email with a key, file, and further instructions.
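As a concrete illustration only - the exact command differs by manufacturer, and the last one shown here is Motorola's - retrieving that unlock string from the platform-tools folder looks roughly like this:
adb reboot bootloader
fastboot devices
fastboot oem get_unlock_data
The first command reboots the phone into the bootloader, the second confirms fastboot can see it, and the third (on Motorola devices) prints the data string you paste into the manufacturer's unlock page.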
Unlock the bootloader of your device by connecting it to your computer and returning it to Fastboot Mode. Open a command prompt by typing cmd into the start menu.
For Google Nexus and Pixel devices, the commands are easy:
Nexus phones: Type “fastboot oem unlock” (without quotes) and hit Enter.
Pixel phones: Type “fastboot flashing unlock” (without quotes) and hit Enter.
It’s the same for Samsung devices:
Samsung phones: Type “fastboot flashing unlock” (without quotes) and hit Enter.
Motorola’s command is a little different:
Type “fastboot oem unlock UNIQUE_KEY” (without quotes), replacing “UNIQUE_KEY” with the code you received.
So is HTC’s:
Type “fastboot flash unlocktoken Unlock_code.bin” (without quotes), replacing “Unlock_code.bin” with the file you received.
Confirm the unlock, and you’re one step closer to rooting your Android device.
Some manufacturers and carriers don’t sanction bootloader unlocking, but that doesn’t mean it can’t be done. Try searching the XDA Developers forum for workarounds and unofficial solutions.
0 notes
suzanneshannon · 6 years ago
Text
Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first
I updated one of my websites from ASP.NET Core 2.2 to the latest LTS (Long Term Support) version of ASP.NET Core 3.1 this week. Now I want to do the same with my podcast site AND move it to Linux at the same time. Azure App Service for Linux has some very good pricing and allowed me to move over to a Premium v2 plan from Standard which gives me double the memory at 35% off.
My podcast has historically run on ASP.NET Core on Azure App Service for Windows. How do I know if it'll run on Linux? Well, I'll try it and see!
I use WSL (Windows Subsystem for Linux) and so should you. It's very likely that you have WSL ready to go on your machine and you just haven't turned it on. Combine WSL (or the new WSL2) with the Windows Terminal and you're in a lovely spot on Windows with the ability to develop anything for anywhere.
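If WSL isn't turned on yet, a hedged sketch of enabling it from an elevated command prompt looks like this - the second feature and the version switch only matter if you want WSL2 (and need a Windows build new enough to support it), a reboot is required afterwards, and the Ubuntu 18.04 distro itself comes from the Microsoft Store:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2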
First, let's see if I can run my existing ASP.NET Core podcast site (now updated to .NET Core 3.1) on Linux. I'll start up Ubuntu 18.04 on Windows and run dotnet --version to see if I have anything installed already. You may have nothing. I have 3.0 it seems:
$ dotnet --version 3.0.100
Ok, I'll want to install .NET Core 3.1 on WSL's Ubuntu instance. Remember, just because I have .NET 3.1 installed in Windows doesn't mean it's installed in my Linux/WSL instance(s). I need to maintain those on my own. Another way to think about it is that I've got the win-x64 install of .NET 3.1 and now I need the linux-x64 one.
NOTE: It is true that I could "dotnet publish -r linux-x64" and then scp the resulting complete published files over to Linux/WSL. It depends on how I want to divide responsibility. Do I want to build on Windows and run on Linux/WSL? Or do I want to build and run from Linux? Both are valid, it just depends on your choices, patience, and familiarity.
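For reference, that publish-and-copy route would look roughly like the following - the output path depends on the project's target framework, and the host name is just a placeholder:
dotnet publish -c Release -r linux-x64 --self-contained
scp -r ./bin/Release/netcoreapp3.1/linux-x64/publish user@myserver:~/podcast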
GOTCHA: Also if you're accessing Windows files at /mnt/c under WSL that were git cloned from Windows, be aware that there are subtleties if Git for Windows and Git for Ubuntu are accessing the index/files at the same time. It's easier and safer and faster to just git clone another copy within the WSL/Linux filesystem.
I'll head over to https://dotnet.microsoft.com/download and get .NET Core 3.1 for Ubuntu. If you use apt, and I assume you do, there's some preliminary setup and then it's a simple
sudo apt-get install dotnet-sdk-3.1
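That "preliminary setup" is registering the Microsoft package feed with apt. For Ubuntu 18.04 it goes roughly like this - the .deb URL is release-specific, so double-check it against the instructions on the download page:
wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update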
No sweat. Let's "dotnet build" and hope for the best!
It might be surprising but if you aren't doing anything tricky or Windows-specific, your .NET Core app should just build the same on Windows as it does on Linux. If you ARE doing something interesting or OS-specific you can #ifdef your way to glory if you insist.
Bonus points if you have Unit Tests - and I do - so next I'll run my unit tests and see how it goes.
OPTION: I write things like build.ps1 and test.ps1 that use PowerShell as PowerShell is on Windows already. Then I install PowerShell (just for the scripting, not the shelling) on Linux so I can use my .ps1 scripts everywhere. The same test.ps1 and build.ps1 and dockertest.ps1, etc just works on all platforms. Make sure you have a shebang #!/usr/bin/pwsh at the top of your ps1 files so you can just run them (chmod +x) on Linux.
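Since the Microsoft package feed is already registered from the SDK install above, getting pwsh onto the WSL/Linux side and making the scripts directly runnable is roughly this (package name assumed from the same feed):
sudo apt-get install -y powershell
chmod +x ./test.ps1
./test.ps1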
I run test.ps1 which runs this command
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests
with coverlet for code coverage and...it works! Again, this might be surprising but if you don't have any hard coded paths, make any assumptions about a C:\ drive existing, and avoid the registry and other Windows-specific things, things work.
Test Run Successful. Total tests: 23 Passed: 23 Total time: 9.6340 Seconds Calculating coverage result... Generating report './lcov.info' +--------------------------+--------+--------+--------+ | Module | Line | Branch | Method | +--------------------------+--------+--------+--------+ | hanselminutes.core.Views | 60.71% | 59.03% | 41.17% | +--------------------------+--------+--------+--------+ | hanselminutes.core | 82.51% | 81.61% | 85.39% | +--------------------------+--------+--------+--------+
I can build, I can test, but can I run it? What about running and testing in containers?
I'm running WSL2 on my system and I've been doing all this in Ubuntu 18.04, AND I'm running the Docker WSL Tech Preview. Why not see if I can run my tests under Docker as well? From Docker for Windows I'll enable the Experimental WSL2 support and then, from the Resources menu, WSL Integration, I'll enable Docker within my Ubuntu 18.04 instance (your instances and their names will be your own).
I can confirm it's working with "docker info" under WSL and talking to a working instance. I should be able to run "docker info" in BOTH Windows AND WSL.
$ docker info Client: Debug Mode: false Server: Containers: 18 Running: 18 Paused: 0 Stopped: 0 Images: 31 Server Version: 19.03.5 Storage Driver: overlay2 Backing Filesystem: extfs ...snip...
Cool. I remembered I also needed to update my Dockerfile from the 2.2 SDK on the Docker hub to the 3.1 SDK from Microsoft Container Registry, so this one line change:
#FROM microsoft/dotnet:2.2-sdk AS build FROM mcr.microsoft.com/dotnet/core/sdk:3.1 as build
as well as the final runtime version for the app later in the Dockerfile. Basically make sure your Dockerfile uses the right versions.
#FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS runtime
I also volume mount the tests results so there's this offensive If statement in the test.ps1. YES, I know I should just do all the paths with / and make them relative.
#!/usr/bin/pwsh docker build --pull --target testrunner -t podcast:test . if ($IsWindows) { docker run --rm -v d:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test } else { docker run --rm -v ~/hanselminutes-core/TestResults:/app/hanselminutes.core.tests/TestResults podcast:test }
Regardless, it works and it works wonderfully. Now I've got tests running in Windows and Linux and in Docker (in a Linux container) managed by WSL2. Everything works everywhere. Now that it runs well on WSL, I know it'll work great in Azure on Linux.
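For completeness, running the site image locally (rather than the test target) is the usual build/run pair - the tag and port mapping here are illustrative rather than taken from the original scripts, and the ASP.NET Core 3.1 base image listens on port 80 inside the container by default:
docker build --pull -t podcast:local .
docker run --rm -p 8080:80 podcast:local
curl -I http://localhost:8080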
Moving from Azure App Service on Windows to Linux
This was pretty simple as well.
I'll blog in detail how I build and deploy the sites in Azure DevOps, and how I've moved from .NET 2.2 with Classic "Wizard Built" DevOps Pipelines to .NET Core 3.1 and a source-control checked-in YAML pipeline, next week.
The short version is, make a Linux App Service Plan (remember that an "App Service Plan" is a VM that you don't worry about). See in the pic below that the Linux Plan has a penguin icon. Also remember that you can have as many apps inside your plan as you'd like (and will fit in memory and resources). When you select a "Stack" for your app within Azure App Service for Linux you're effectively selecting a Docker Image that Azure manages for you.
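If you'd rather script that than click through the portal, the Azure CLI equivalent is roughly the following - the resource names and SKU are placeholders, and the runtime string should match whatever "az webapp list-runtimes" reports for your CLI version:
az appservice plan create --name podcast-plan --resource-group podcast-rg --is-linux --sku P1V2
az webapp create --name my-podcast-site --resource-group podcast-rg --plan podcast-plan --runtime "DOTNETCORE|3.1"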
I started by deploying to staging.mydomain.com and trying it out. You can use Azure Front Door or CloudFlare to manage traffic and then swap the DNS. I tested on Staging for a while, then just changed DNS directly. I waited a few hours for traffic to drain off the Windows podcast site and then stopped it. After a day or two of no traffic I deleted it. If I did my job right, none of you noticed the site moved from Windows to Linux, from .NET Core 2.2 to .NET Core 3.1. It should be as fast or faster with no downtime.
Here's a snap of my Azure Portal. As of today, I've moved my home page, my blood sugar management portal, and my podcast site all onto a single Linux App Service Plan. Each is hosted on GitHub and each is deploying automatically with Azure DevOps.
Next big migration to the cloud will be this blog which still runs .NET Framework 4.x. I'll blog how the podcast gets checked into GitHub then deployed with Azure DevOps next week.
What cool migrations have YOU done lately, Dear Reader?
Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!
© 2019 Scott Hanselman. All rights reserved.
0 notes
constellations-soc · 8 years ago
Link
From the main Constellations blog:
This post is an introduction and placeholder for a planned series of posts on useful apps, services, and software. Once there are a few posts in the series I will eventually promote this post to a page with an index of all the posts from the series.
I decided to make a series for this because although using computers has become a key part of academic work, too many academics remain uncomfortable using them. Often I come across people using Word for anything that involves text - not because it is the best tool for the job, but because they are unaware of the alternatives. Even when someone knows it is not an ideal solution, it is not always easy to find a good entry point to start learning how to use new software. At a training event I attended last year I was sat next to a professor. From the introductions he obviously had years of experience using an advanced software package for tasks similar to the software the training was on. Yet it became clear from the start that he was uneasy and disorientated when facing a new application with an unfamiliar interface. After accidentally launching another application and then opening the wrong file, which resulted in a garbled mess of symbols appearing on the screen, he got up and left only ten minutes into the session. While it is rare for someone to feel so at a loss that they leave, I have heard multiple times from PhD students that despite feeling like walking out they have persisted through a training session and still come away not feeling any more confident in knowing how to use the software. Such experiences end up reinforcing self-perceptions of not being ‘a computer person’.
I do not think this is a fault of the trainers. Instead, I think there are two reasons why training workshops are not the best solution. The first is that they are often focused on complex specialist applications when there is a need to improve the general diversity of software academics are using and to help with basic computing skills. The second reason is that training workshops are not always the best way to introduce new software. Too much ends up being covered at once and, unless paying for personalised training, the dates available for training do not align with when people actually start using the software within their project. By the time people reach the relevant stage in their research and launch the software for the first time in months, they have forgotten most of what was taught. It is common for PhD students to attend a training workshop at the start of their PhD in anticipation of analysis, then a year later, once they have their data, to have to delay analysis in order to attend another workshop as a refresher. A significant chunk of their funds as a result is eaten up by training costs.
This series of posts is primarily concerned with the first issue and will focus on introducing a broad range of software that can be used in various contexts. The aim is to help people break out of their comfort zone to try new apps and services. I have to admit that part of me dies whenever someone goes to show me their notes and I see them sift through a mess of assorted doc files and PDFs strewn about their desktop. I also realise I am on the other end of the spectrum, often perceived as using software for anything and everything. Since I have handwriting that even I struggle to read, I always look for options that allow me to ditch any reliance on pen and paper. The series of posts, then, is in no way meant as a prescriptive ‘you should use all of these’ but a pick and mix of what looks of potential interest. Please feel free to use the comments to highlight anything that is unclear or for any questions not addressed in the posts themselves.
Some posts will focus on a single item whereas others will be on a theme with a few apps covered together, such as ‘Ahhh, my eyes’ and ‘Reducing e-mail overload’. Posts will not be in a specific order, though they will be categorised and tagged. Additionally, there will be a range in the complexity of what is introduced, from single-use apps, command line applications, and apps like Anki and Tasker that contain programming elements, to introducing programming itself in the form of AutoHotKey, Python, and R. The latter will be tied into another planned series of posts I am working on, covering scripts I have written and, where feasible, a line-by-line breakdown of what the code does. The aim is to offer numerous entry points for starting to learn and become comfortable with more advanced computing tasks.
Predominantly, desktop software will be Windows-based and mobile apps Android. In an ideal world I would be using Linux, but it's rare for universities to offer staff desktops running Linux, and there is still academic software that cannot easily be installed on Linux. However, I will where possible highlight macOS and Linux alternatives to software that is Windows-only. Furthermore, a decent number of posts will be focused on making Windows less painful to use. For example, the one saving grace of Windows 10 is the new Windows Subsystem for Linux, which enables access to a whole host of useful command line tools. For those running older versions of Windows there is Cygwin, which achieves a similar result through a different means. Both will be covered in a future post.
3 notes · View notes
nxfury · 5 years ago
Text
Retro Computing- Is Old School The Smart Way?
For those who remember their vintage Mac Classic or Commodore 64, they also remember how they were heavily constrained to the likes of 256 kilobytes of RAM. Even in these conditions, programmers still had the ability to engineer the same sorts of software we use today.
In this era from the 1970s to the 1980s, we saw several major innovations- the first computers, UNIX, the first graphical desktops, word processing software, printing, and internetworking of devices via ARPANET (which would later become the internet).
So why is there a lack of major innovation at such a rigorous speed anymore?
Stale Innovation
This may be a hard pill to swallow for some, but the increased availability that high-end hardware provides lowered the barrier of entry into computer programming, thus decreasing the quality of code. Due to this, overall competency in the average software developer declines. Naturally, this affects the importance of a "new" innovation- what's the point of rewriting code if the rewrite is bound to have worse quality?
On top of this, large companies, universities and defense contractors no longer fund major innovators. Let's use a modern-day example: The OpenBSD Foundation. They're one of the many organizations dedicated to furthering UNIX-derived source code, with an extreme focus on producing a system that has secure and sane defaults. Ironically, they are the creators of OpenSSH and the current maintainers of sudo (both used in almost every enterprise network running Linux or UNIX). So why aren't they recognized? It all boils down to a saying I learned from my grandfather: "Nobody likes change- even if it helps them."
Convenience Over Simplicity
Wait- don't these mean the same thing? Actually, no.
This is how American Heritage Dictionary defines these two words: Simple- Having few parts or features; not complicated or elaborate. Convenient- Suited or favorable to one's comfort, purpose, or needs.
For ages, programmers pursued simplicity as a way to provide stable, high-quality code that would run on virtually anything- even a toaster if one were so inclined. This old school of thought still exists, but is largely frowned upon with modern day programming paradigms.
For example, rapid prototyping has brought programming languages like Python to the forefront due to the convenience they provide and the ease of implementation in them. However, it's nearly impossible to produce efficient programs that guarantee stability across a wide variety of different platforms, as Python isn't yet implemented on as many platforms as languages such as C.
The truly sad thing about this is how it all ties right back to my first point on how it reduces competence among programmers.
The Attack Of The Public Domain
How is one supposed to train up a new generation of programmers for the enterprise world if there's no quality code to work on? It's a paradox, as large enterprise companies like Microsoft, Apple, and more make use of Open Source and Public Domain source code but rarely contribute anything that could help further the development of Open Source. In recent news, Microsoft introduced "DirectX 12 for Linux", but in reality they only made a way to access the Application Programming Interface (API) available to Linux users. No source code was disclosed and it was explicitly added solely for their Windows Subsystem for Linux. According to U.S. v. Microsoft (2001), the Department of Justice found an alarming statement for Microsoft's internal marketing strategy known as "EEE"- Embrace, Extend, Extinguish. Embrace the idea as if they support it, Extend support for the idea, then Extinguish it by rendering it obsolete. Google and Apple have been known to engage in similar practices.
Herein lies the paradox- there's a lack of new enterprise source code to look at without paying a significant amount of money for. Due to this, there's a lack of large-scale scientific research being conducted in computing that's available to the public.
Lack Of Attentiveness
It's all our fault here... If you're from the 1990s, you may remember "Windows Buyback Day", when Linux users protested outside Microsoft's headquarters about being forced to pay for a Windows license they don't even use.
20 years later, such noble ideas haven't been forgotten- they've been ignored and thrown on the proverbial backburner by the rest of society.
The Good News
Moore's Law is slowly becoming obsolete. For those who are unaware of what this entails, Gordon Moore created this "rule of thumb" in 1965, observing that computing devices would double in capability, exponentially, roughly every year. This held true until recently, when manufacturers began reaching the physical limits of what they can fit on a chip.
This means that we're limited in terms of performance and in order to continue to maintain Moore's Law, we will be forced to go back to the days of old, writing high-quality software while retaining a large degree of performance.
Liked This Content? Check Out Our Discord Community and Become an email subscriber!
0 notes
holytheoristtastemaker · 5 years ago
Link
Embrace, extend, and extinguish - these 3 words briefly describe the strategy Microsoft used in the '90s to further expand its multi-million dollar empire. But things have changed since then. No longer are we bound by the terrific Internet Explorer, nor do we see Microsoft engage in anti-competitive behavior. In fact, we see what's an almost-complete switch - new Edge embracing Chromium, Azure using Linux, and the whole company being much more open-source-friendly.
And so, with such a drastic change, I think it's understandable why many people - especially in the programming community - have different opinions about the company. Has Microsoft truly changed, or is this only the first step in their grand implementation of the "triple E" strategy?
War against open-source
First, let's take a step back to see how MS was back in the day. And, as a matter of fact, we don't have to go very far, as even in 2001, then-current CEO - Steve Ballmer - claimed Linux and open-source in general to be - quote here - "cancer that attaches itself in an intellectual property sense to everything it touches".
Now, to be clear, I don't intend to judge or critique here anybody personally. I think we can all agree that a company is a lot more than a sum of its employees and executives. Yet still, the CEO is a very important person, whose thoughts and opinions often influence the entire company's operations.
In this case, Ballmer was unhappy with Linux and its licensing. Sure, he did have a bit of a point there, but it certainly didn't help that MS back then was a primarily proprietary software-driven company. And it's not like that fact alone is bad. Naturally, most companies are less willing to open-source what makes them profit. But in this case, it's the overall context related to the mentioned statement that puts it in a very bad light.
Such a statement might not have spanned such a controversy if not for MS's questionable past. Browser wars, IE dominance, and negative approach towards open-source "helped" MS establish its perception as this big, bad company in the minds of many customers, programmers, and enthusiasts alike. And even though this might be profitable short term, bad public opinion is nothing that you'd like to have attached to your company's name in the long run.
Change of heart
So, with all this "bad press" as some might call it, how is it possible that a company so big, making such great profits from its fight against the Open-Source Software (OSS), suddenly turned around and became arguably one of the biggest OSS contributors out there?
Well, it's really hard to say, and the reasoning behind any kind of assumptions can be quite complex. Thus, we'll try to understand MS's new business model in a moment, but first, we'll take a look at how much exactly the company has contributed to the open-source community.
Linux
Let's start with the elephant in the room - Linux. It went from being called cancer to straight-up being openly loved. Whether it's because MS truly lost its battle against the open-source OS, or it's changed its mind doesn't matter.
What matters is that MS has truly embraced Linux - both on the user as well as the business side. The OS is now the most used one on MS Azure cloud platform, which on its own competes in size with players like GCP or even AWS itself! But the changes don't end here!
On Windows, MS added something called the Windows Subsystem for Linux (WSL). You might have heard of it. It's an official, MS-supported solution for running Linux right inside Windows itself, meant mostly for developers who want to take advantage of UNIX-like OS capabilities.
I'd say that the only way to go more Linux is to make Windows be based on Linux kernel, which... we all know is not going to happen and doesn't make much sense anyway. So, kudos to MS for going full-on Linux in all logical means! 😅
Acquisitions
What about MS's last acquisitions? What do they say about their new direction? As you might expect, I'd mostly want to focus here on GitHub and NPM - two of the arguably most influential acquisitions in my field (web development).
Starting with GitHub, MS acquired it in late 2018. At the time the opinions were mixed. On one hand, MS with its vast resources could boost GitHub's development, but on the other - no one knew what plans they had for the company. Currently, I must admit that GitHub only got better since then. New features such as Actions, Discussions, or the upcoming Codespaces continue to push GitHub into becoming an all-in-one platform for everything code. And there's also all the new free features like private repos for individuals and teams! I don't know if all this would have happened if not for MS's bigger budget!
Next up, the NPM was acquired only recently by GitHub, so technically MS too. The largest package repository in general, used mostly within the JavaScript ecosystem. Here, we haven't seen yet any striking changes (like with GitHub), and so we'll have to wait and see how this plays out.
Open-source contributions
As a cherry on top, let's take a look at some of MS's open-source contributions. And, in all fairness, there's so many of them that I can't even list them all - from beloved VS Code and TypeScript, through Rush and Playwright to tools like Windows Calculator and Terminal. There's literally tons of high-quality open-source code right here - code that like VS Code or TypeScript empowers millions of developers around the world.
I must admit - I have no reason to complain about MS and its latest actions. I'm almost completely "trapped" (and I say it in a positive, comfortable meaning) inside its open-source ecosystem. My go-to code editor is VS Code and most of my mostly-TypeScript-based code lives on GitHub or gets published on NPM. Sure, I don't use Windows, but when I think about it - I'm already fairly deep into all that stuff! 🙃
New business model
So, I think I've talked enough about all the positive changes that MS has gone through. Now, about that "why?" question. Why did MS have such a drastic change of heart? Where's the money to be made now that things are so open? Surely that trillion-dollar market evaluation comes from somewhere, right?
Well, I don't know what MS's real end goal is, but it's pretty obvious that currently, their main focus is on the cloud. Azure is growing rapidly and MS's presence in open-source helps further boost that growth. If you think about it Azure is mainly powered by Linux, and as it's a cloud platform, it's beneficial for MS to have better contact with their main customers - developers. And how to better reach them than through open-source platforms like GitHub - places where new talented individuals often start their coding journeys? And along the way, getting people more accustomed to the MS ecosystem - even if it's open-source - through tools like VS Code or TypeScript, helps convince them to go with Azure for tighter integration and better development experience.
That's already about 3 very compelling reasons for MS to go all-in on open-source. And I haven't even mentioned how it's a great way for a company to improve its software, without much investment!
Bottom line
As you can see, MS has gone a long way from what many might remember it as. The embracement of Linux, OSS, and the coding community as a whole improved the company's overall image. Sure, no one can ever know if it's not only yet another implementation of the EEE strategy, just on a wider scale. With all the MS's open-source software that I'm using - I hope that it's not. And even if it was - I think MS would lose too much trying to execute the final step. It's just no longer profitable. 🤑
What do you think about this whole topic? Feel free to leave your thoughts in the comments below!
0 notes
siva3155 · 6 years ago
Text
300+ TOP TERADATA Interview Questions and Answers
TERADATA Interview Questions for Freshers and Experienced:
1. What is Teradata? What are some primary characteristics of Teradata?
Teradata is an RDBMS (Relational Database Management System) which is well suited to large-scale data warehousing applications. It works on the parallelism concept. It is an open system. It can run on Windows/UNIX/Linux server platforms. Teradata provides support to multiple data warehouse operations at the same time for different clients. It is developed by an American IT firm called Teradata Corporation, a dealer of analytic data platforms, applications, and other related services.
Characteristics of Teradata:
It is compatible with the American National Standards Institute (ANSI).
It acts in the way a server does.
It is an open system.
It supports single-node and multi-node configurations.
It is built on parallelism.
2. What are the newly developed features of Teradata?
Some of the newly developed features of Teradata are:
Automated temporal analytics.
Extension of the compression capabilities, which allows flexible compression of about 20 times more data than the previous version.
Customer-associated innovation like Teradata Viewpoint.
3. Highlight a few of the important components of Teradata.
Some of the important components of Teradata are:
Bynet
Access Module Processor (AMP)
Parsing Engine (PE)
Virtual Disk (vDisk)
Virtual Storage System (VSS)
4. Mention the procedure via which we can run Teradata jobs in a UNIX environment.
All you have to do is perform the execution in UNIX in the way mentioned below:
$Sh > BTEQ < > or $Sh > BTEQ < TEE
5. In Teradata, how do we generate a sequence?
In Teradata, we generate a sequence by making use of an Identity Column.
6. During display time, how is the sequence generated by Teradata?
All you have to do is use CSUM.
7. A certain load is being imposed on the table every hour. The traffic in the morning is relatively low, and that of the night is very high. As per this situation, which is the most advisable utility and how is that utility supposed to be loaded?
The most suitable utility here has to be TPump. By decreasing or increasing the packet size, the traffic can be easily handled.
8. If a FastLoad script fails and only the error tables are made available to you, then how will you restart?
There are basically two ways of restarting in this case.
Making the old file run – Make sure that you do not completely drop the error tables. Instead, try to rectify the errors that are present in the script or the file and then execute it again.
Running a new file – In this process, the script is executed simply using END LOADING and BEGIN statements. This will help in removing the lock that has been put on the target table and might also remove the given record from the fast-log table. Once this is done, you are free to run the whole script once again.
9. Mention a few of the ETL tools that come under Teradata.
Some of the ETL tools which are commonly used with Teradata are DataStage, Informatica, SSIS, etc.
10. Highlight a few of the advantages that ETL tools have over TD.
Some of the advantages that ETL tools have over TD are:
Multiple heterogeneous destinations, as well as sources, can be operated.
The debugging process is much easier with the help of ETL tools owing to full-fledged GUI support.
Components of ETL tools can be easily reused, and as a result, if there is an update to the main server, then all the corresponding applications connected to the server are updated automatically.
De-pivoting and pivoting can be easily done using ETL tools.
11. What is the meaning of Caching in Teradata?
Caching is considered an added advantage of using Teradata as it primarily works with a source which stays in the same order, i.e. does not change on a frequent basis. At times, the cache is shared amongst applications.
12. How can we check the version of Teradata that we are using currently?
Just give the command .SHOW VERSION.
13. Give a justifiable reason why MultiLoad supports NUSI instead of USI.
The index sub-table row happens to be on the same AMP as the data row in NUSI. Thus, each AMP is operated separately and in a parallel manner.
14. How is the MLOAD Client System restarted after execution?
The script has to be submitted manually so that it can easily load the data from the last checkpoint.
15. How is the MLOAD Teradata Server restarted after execution?
The process is basically carried out from the last known checkpoint, and once the data has been carried over after execution of the MLOAD script, the server is restarted.
16. What is meant by a node?
A node is basically termed an assortment of hardware and software components. Usually a server is referred to as a node.
17. Let us say there is a file that consists of 100 records, out of which we need to skip the first and the last 20 records. What will the code snippet look like?
We need to use the BTEQ Utility in order to do this task. Skip 20, as well as Repeat 60, will be used in the script.
18. Explain PDE.
PDE basically stands for Parallel Data Extension. PDE happens to be an interface layer of software present above the operating system that gives the database a chance to operate in a parallel milieu.
19. What is TPD?
TPD basically stands for Trusted Parallel Database, and it works under PDE. Teradata happens to be a database that primarily works under PDE. This is the reason why Teradata is usually referred to as a Trusted Parallel or Pure Parallel database.
20. What is meant by a Channel Driver?
A channel driver is software that acts as a medium of communication between PEs and all the applications that are running on channels which are attached to the clients.
21. What is meant by Teradata Gateway?
Just like the channel driver, the Teradata Gateway acts as a medium of communication between the Parsing Engine and applications that are attached to network clients. Only one Gateway is assigned per node.
22. What is meant by a Virtual Disk?
A Virtual Disk is basically a compilation of a whole array of cylinders which are physical disks. It is sometimes referred to as a disk array.
23. Explain the meaning of AMP.
AMP basically stands for Access Module Processor and happens to be a virtually working processor used for managing a single portion of the database. This particular portion of the database cannot be shared by any other AMP. Thus, this form of architecture is commonly referred to as shared-nothing architecture.
24. What does an AMP contain and what are all the operations that it performs?
An AMP basically consists of a Database Manager Subsystem and is capable of performing the operations mentioned below:
Performing DML
Performing DDL
Implementing Aggregations and Joins
Releasing and applying locks, etc.
25. What is meant by a Parsing Engine?
PE happens to be a kind of Vproc. Its primary function is to take SQL requests and deliver responses in SQL. It consists of a wide array of software components that are used to break SQL into various steps and then send those steps to the AMPs.
26. What do you mean by parsing?
Parsing is a process concerned with the analysis of symbols of a string that are either in computer language or in natural language.
27. What are the functions of a Parser?
A Parser:
- Checks semantic errors
- Checks syntactical errors
- Checks object existence
28. What is meant by a dispatcher?
The dispatcher takes a whole collection of requests and then keeps them stored in a queue. The same queue is kept throughout the process in order to deliver multiple sets of responses.
29. How many sessions of MAX is a PE capable of handling at a particular time?
A PE can handle a total of 120 sessions at a particular point of time.
30. Explain BYNET.
BYNET basically serves as a medium of communication between the components. It is primarily responsible for sending messages and also responsible for performing merging, as well as sorting, operations.
31. What is meant by a Clique?
A Clique is basically known to be an assortment of nodes that share common disk drives. The presence of a Clique is immensely important since it helps in avoiding node failures.
32. What happens when a node suffers a downfall?
Whenever there is a downfall in the performance level of a node, all the corresponding vprocs immediately migrate from the failed node to a new node in order to get all the data back from the common drives.
33. List out all forms of LOCKS that are available in Teradata.
There are basically four types of LOCKS that fall under Teradata. These are:
- Read Lock
- Access Lock
- Exclusive Lock
- Write Lock
34. What is the particular designated level at which a LOCK is liable to be applied in Teradata?
- Table Level: All the rows that are present inside a table will certainly be locked.
- Database Level Lock: All the objects that are present inside the database will be locked.
- Row Hash Level Lock: Only those rows will be locked which correspond to the particular row hash.
35. In the Primary Index, what is the score of AMPs that are actively involved?
Only one AMP is actively involved in a Primary Index.
36. In Teradata, what is the significance of the UPSERT command?
UPSERT basically stands for Update Else Insert. This option is available only in Teradata.
37. Highlight the advantages of PPI (Partitioned Primary Index).
PPI is basically used for range-based or category-based data storage purposes. When it comes to range queries, there is no need for a full table scan, as it straightaway moves to the consequent partition, thus skipping all the other partitions.
38. Give the sizes of SMALLINT, BYTEINT and INTEGER.
- SMALLINT – 2 bytes – 16 bits -> -32768 to 32767
- BYTEINT – 1 byte – 8 bits -> -128 to 127
- INTEGER – 4 bytes – 32 bits -> -2,147,483,648 to 2,147,483,647
39. What is meant by a Least Cost Plan?
A Least Cost Plan basically executes in less time across the shortest path.
40. Highlight the points of difference between a database and a user in Teradata.
- A database is basically passive, whereas a user is active.
- A database primarily stores all the objects of the database, whereas a user can store any object, whether that is a macro, table, view, etc.
- A database does not have a password, while a user has to enter a password.
41. Highlight the differences between Primary Key and Primary Index.
- A Primary Index is mandatory, whereas a Primary Key is optional.
- A Primary Index has a limit of 64 tables/columns, whereas a Primary Key does not have any limit.
- A Primary Index allows duplicates and nulls, whereas a Primary Key doesn't.
- A Primary Index is a physical mechanism, whereas a Primary Key is a purely logical mechanism.
42. Explain how spool space is used.
Spool space in Teradata is basically used for running queries. Out of the total space that is available in Teradata, 20% of the space is basically allocated to spool space.
43. Highlight the need for Performance Tuning.
Performance tuning in Teradata is basically done to identify all the bottlenecks and then resolve them.
44. Comment whether a bottleneck is an error or not.
Technically, a bottleneck is not a form of error, but it certainly causes a certain amount of delay in the system.
45. How can bottlenecks be identified?
There are basically four ways of identifying a bottleneck. These are:
- Teradata Visual Explain
- Explain Request Modifier
- Teradata Manager
- Performance Monitor
46. What is meant by a Highest Cost Plan?
As per a Highest Cost Plan, the time taken to execute the process is more, and it takes the longest path available.
47. Highlight all the modes that are present under Confidence Level.
Low, No, High and Join are the four modes that are present under Confidence Level.
48. Name the five phases that come under MultiLoad Utility.
Preliminary Phase, DML Phase, Data Acquisition Phase, Application Phase and End Phase.
49. Highlight the limitations of the TPUMP Utility.
Following are the limitations of the TPUMP utility:
- We cannot use the SELECT statement.
- Data files cannot be concatenated.
- Aggregate and exponential operators are not supported.
- Arithmetic functions are not supported.
50. In BTEQ, how are the session-mode parameters set?
.set session transaction BTET -> Teradata transaction mode
.set session transaction ANSI -> ANSI mode
These commands will work only when they are entered before logging into the session.
51. Have you used NetMeeting?
Yes. I used NetMeeting for team meetings when members of the team were geographically in different locations.
52. Do you have any questions?
What is the team size going to be? What is the current status of the project? What is the project schedule?
53. What is your available date?
Immediate, or your available date for the project.
54. How much experience do you have with MVS?
Intermediate. In my previous two projects I used MVS to submit JCL jobs.
55. Have you created JCL scripts from scratch?
Yes. I have created JCL scripts from scratch while creating jobs in the development environment.
56. Have you modified any JCL script and used it?
Yes, I have modified JCL scripts. In my previous projects many applications were re-engineered, so the existing JCL scripts were modified according to the company coding standards.
57. Rate yourself on using Teradata tools like BTEQ, Queryman, FastLoad, MultiLoad and TPump.
Intermediate to expert level. I have been using them extensively for the last 4 years. I am also certified in Teradata.
58. Which is your favorite area in the project?
I enjoy working on every part of the project. I volunteer my time for my peers so that I can also learn and contribute more towards the project's success.
59. What is a data mart?
A data mart is a special-purpose subset of enterprise data used by a particular department, function or application. Data marts may have both summary and detail data; however, usually the data has been pre-aggregated or transformed in some way to better handle the particular type of requests of a specific user community. Data marts are categorized as independent, logical and dependent data marts.
60. Difference between star and snowflake schemas?
A star schema is de-normalized and a snowflake schema is normalized.
61. Why should you put your data warehouse in a different system other than the OLTP system?
Relational data modeling (OLTP design) vs. dimensional data modeling (OLAP design):
- Data is stored in an RDBMS | Data is stored in an RDBMS or multidimensional databases
- Tables are units of storage | Cubes are units of storage
- Data is normalized and used for OLTP; optimized for OLTP processing | Data is de-normalized and used in the data warehouse and data mart; optimized for OLAP
- Several tables and chains of relationships among them | Few tables, with fact tables connected to dimension tables
- Volatile (several updates) and time variant | Non-volatile and time invariant
- SQL is used to manipulate data | MDX is used to manipulate data
- Detailed level of transactional data | Summary of bulky transactional data (aggregates and measures) used in business decisions
- Normal reports | User-friendly, interactive, drag-and-drop multidimensional OLAP reports
62. Why are OLTP database designs not generally a good idea for a data warehouse?
OLTP designs hold real-time, normalized data that is not pre-aggregated. They are not good for decision support systems.
63. What type of indexing mechanism do we need to use for a typical data warehouse?
The Primary Index mechanism is the ideal type of index for a data warehouse.
64. What is VLDB?
Very Large Database. Please find more information on it.
65. What is the difference between OLTP and OLAP?
Refer to the answer for question 61.
66. What is real-time data warehousing?
Real-time data warehousing is a combination of two things: 1) real-time activity and 2) data warehousing. Real-time activity is activity that is happening right now. The activity could be anything, such as the sale of widgets. Once the activity is complete, there is data about it. Data warehousing captures business activity data. Real-time data warehousing captures business activity data as it occurs. As soon as the business activity is complete and there is data about it, the completed activity data flows into the data warehouse and becomes available instantly. In other words, real-time data warehousing is a framework for deriving information from data as the data becomes available.
67. What is ODS?
An operational data store (ODS) is primarily a "dump" of relevant information from a very small number of systems (often just one), usually with little or no transformation. The benefit is an ad hoc query database which does not affect the operation of systems required to run the business. ODSs usually deal with data "raw" and "current" and can answer a limited set of queries as a result.
68. What is real-time and near-real-time data warehousing?
The difference between real time and near real time can be summed up in one word: latency. Latency is the time lag between an activity's completion and the completed activity's data being available in the data warehouse. In real time, the latency is negligible, whereas in near real time the latency is a tangible time frame, such as two hours.
69. What are Normalization, First Normal Form, Second Normal Form and Third Normal Form?
Normalization is the process of efficiently organizing data in a database. The two goals of the normalization process are to eliminate redundant data (storing the same data in more than one table) and to ensure data dependencies make sense (only storing related data in a table).
First normal form: Eliminate duplicate columns from the same table.
Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).
Second normal form: Remove subsets of data that apply to multiple rows of a table and place them in separate tables. Create relationships between these new tables and their predecessors through the use of foreign keys.
Third normal form: Remove columns that are not dependent upon the primary key.
70. What is a fact table?
The centralized table in a star schema is called the FACT table, i.e. a table that contains facts and is connected to dimensions. A fact table typically has two types of columns: those that contain facts and those that are foreign keys to dimension tables. The primary key of a fact table is usually a composite key that is made up of all of its foreign keys. A fact table might contain either detail-level facts or facts that have been aggregated (fact tables that contain aggregated facts are often instead called summary tables). In the real world, it is possible to have a fact table that contains no measures or facts. These tables are called factless fact tables.
71. What is ETL?
Extract, transformation, and loading. ETL refers to the methods involved in accessing and manipulating source data and loading it into a target database. The first step in the ETL process is mapping the data between source systems and the target database (data warehouse or data mart). The second step is cleansing of source data in the staging area. The third step is transforming cleansed source data and then loading it into the target system. Note that ETT (extract, transformation, transportation) and ETM (extraction, transformation, move) are sometimes used instead of ETL.
72. What is an ER diagram?
It is an entity-relationship diagram. It describes the relationships among the entities in the database model.
73. What is data mining?
Analyzing large volumes of relatively simple data to extract important trends and new, higher-level information. For example, a data-mining program might analyze millions of product orders to determine trends among top-spending customers, such as their likelihood to purchase again, or their likelihood to switch to a different vendor.
74. What is a star schema?
A star schema is a relational database schema for representing multidimensional data. It is the simplest form of data warehouse schema that contains one or more dimensions and fact tables. It is called a star schema because the entity-relationship diagram between dimensions and fact tables resembles a star, where one fact table is connected to multiple dimensions. The center of the star schema consists of a large fact table, and it points towards the dimension tables. The advantages of a star schema are slicing down, performance increase and easy understanding of data.
75. What is a lookup table?
Refer to the answer for question 77. Dimension tables are sometimes called lookup or reference tables.
76. What is the level of granularity of a fact table?
The components that make up the granularity of the fact table correspond directly with the dimensions of the data model. Thus, when you define the granularity of the fact table, you identify the dimensions of the data model. The granularity of the fact table also determines how much storage space the database requires.
For example, consider the following possible granularities for a fact table:
- Product by day by region
- Product by month by region
The size of a database that has a granularity of product by day by region would be much greater than a database with a granularity of product by month by region, because the database contains records for every transaction made each day as opposed to a monthly summation of the transactions. You must carefully determine the granularity of your fact table, because too fine a granularity could result in an astronomically large database. Conversely, too coarse a granularity could mean the data is not detailed enough for users to perform meaningful queries against the database.
77. What is a dimension table?
A dimension table is one that describes the business entities of an enterprise, represented as hierarchical, categorical information such as time, departments, locations, and products. Dimension tables are sometimes called lookup or reference tables. In relational data modeling, for normalization purposes, country lookup, state lookup, county lookup, and city lookups are not merged into a single table. In dimensional data modeling (star schema), these tables would be merged into a single table called LOCATION DIMENSION for performance and data-slicing requirements. This location dimension helps to compare the sales in one region with another region. We may see a good sales profit in one region and a loss in another region. If it is a loss, the reasons for that may be a new competitor in that area, or failure of our marketing strategy, etc.
78. What are the various reporting tools in the market?
Crystal Reports, Business Objects, MicroStrategy, etc.
79. What are the various ETL tools in the market?
Ab Initio, Informatica, etc.
80. How do you define Teradata? Give some of the primary characteristics of the same.
Teradata is basically an RDBMS which is used to drive the data mart, data warehouse, OLAP, OLTP, as well as DSS appliances of the company. Some of the primary characteristics of Teradata are given below:
- It is capable of running on single-node, as well as multi-node, systems.
- Parallelism is built into the system.
- It is very much compatible with the standards of ANSI.
- It tends to act in the same way as a server.
- It is an open system that runs on UNIX MP-RAS, SUSE Linux, Windows 2000, etc.
linuxlife · 7 years ago
Linux Life Episode 38
Well hello folks and welcome back to the world of Linux Life.  Not much is happening regarding machines this episode as I have been exceptionally busy, but there are a few topics regarding Linux which have been pretty prominent lately that I would like to discuss.  This is probably going to be a long episode... you have been warned.
Topic 1 - Why is Linux not seen as a major desktop system.
This is an interesting discussion, given that Linux costs the huge price of absolutely nothing.  How come, now that it has been a viable desktop option for about 15 years, is it not dominating Windows as the desktop OS environment?  The majority of the software is free and it should have a much larger reach than it currently has.
Thing is it is still seen as an operating system for geeks.  Linux installation is much easier than it ever was.  I’m a guy who started with Slackware and can remember having to do everything from text prompts.  Now everything is graphical and straightforward.  It will even set up to dual boot if the right options are selected or at least show you how to do so via a tutorial.
So what is the real problem, if not installation?  Application support is the issue most commonly cited: there is no Microsoft Office, Adobe Photoshop, Premiere Pro or After Effects, no FL Studio or Cubase products.
Now many of these can be run through Wine on Linux, although generally only the 32-bit versions of said software work, and 64-bit support is improving. Wine itself is constantly improving, so getting more software running is becoming much easier (a rough sketch of setting Wine up is below).
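For anyone who wants to try that route, here is a minimal, hedged sketch assuming an Ubuntu/Debian-family distro; the package names (wine32, winetricks) are assumptions that vary by distribution and release.
# A minimal sketch, assuming an Ubuntu/Debian-family distro; package names vary elsewhere
sudo dpkg --add-architecture i386        # allow 32-bit packages alongside 64-bit ones
sudo apt-get update
sudo apt-get install wine32 winetricks   # 32-bit Wine plus a helper for common runtimes
wine setup.exe                           # then run a Windows installer under Wine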
However, there are alternatives to said programs such as LibreOffice, GIMP, Kdenlive and DaVinci Resolve. For music you have Ardour and Rosegarden.  Now I understand it would require major relearning of your workflow, which will take time and effort that you may not be able to afford.
Linux has Firefox, Chrome, Opera and all the usual browsers so Internet is not a problem.  Also most wifi cards now have a driver automatically recognised.  A small number of cards don’t but compared to the early years it’s vastly improved.  It recognised about five wifi cards if you were lucky.
Linux also has many mail applications such as Thunderbird, Twitter support with Corebird and much more, so this is no longer a problem.
Now Gaming was always a stumbling point for Linux.  However Steam has been available for Linux for a number of years now and several games have a Linux version.  Recently Valve have started working with CodeWeavers the people behind the Wine project to create the Proton project.
This allows Steam to play Windows games in your Steam library using a modified Wine installation wrapper.  It runs quite a lot of games but currently is still in beta.  In time more and more games will work as the Proton driver is improved.  Also as Valve are working with the Wine guys it will also help it run various applications as the code will be available to it.
Linux is far more customizable than Windows.  Don't like the look or setup? You can normally move stuff or, if you want, replace the window manager with one more to your preference.  While changing window manager is not easy, it can be done.  However, theming and desktop modification is normally reasonably simple.
Also don’t need certain drivers then you can remove most of them bar the essentials.
The thing is with Linux is for all it’s getting much better, it’s still a bit technical when it goes wrong.  Reinstalling software doesn’t always fix things like it does in Windows.
A lot of things still require terminal commands.  However distributions like Arch or any other rolling release tend to do most things in GUIs and only occasionally do you need a terminal.
Until they find a way to minimise this and make fixing issues much easier, it will remain a fringe operating system.  Mind you, how many people end up hunting through RegEdit on Windows? It can be equally painful at times.
Linux has the advantage of malware and viruses are virtually none existent due to rights management and if any bugs are found they are normally fixed pretty quickly.
So maybe in three to five years Linux will be where it should be, and then it may start to dominate.  We may even have Mir by then, but I wouldn't put money on it.  The funny thing is, because it's free, a lot of people distrust it, which is odd.  They figure that if you are giving it away for free, it must have something wrong with it.
Topic 2 - Microsoft is now getting involved with Linux.  What?
Well, it's pretty well documented that Microsoft now has its own Linux-based Azure Sphere OS.  However, to program anything for Azure Sphere you need a Windows installation and Visual Studio, which is kind of counterproductive if you ask me.
Also they have a seat on the board of The Linux Foundation meaning they get a vote in the way that Linux progresses.  They have open sourced various patents to the OIN (Open Invention Network) but the only people who gain advantage with this are existing OIN members.
It’s easy to see why many don’t trust Microsoft being involved.  Many ditched Windows to be away from their incessant spying and fiddling.  Also they have not had the best things to say about Linux.  Admittedly this was primarily when Steve Ballmer was the CEO of the company and now that Satya Nadella has taken the reins it has changed it’s stance.
“Linux is a cancer” - Steve Ballmer (1 June 2001)
Ballmer admitted in 2016 that he hated it back then but now loves Linux; by that point, however, he had already been gone from Microsoft for two years.
Most Linux people who have been around for a bit also remember that Microsoft sued several Linux distro providers over patent usage, so when the company announced it liked Linux and would even defend it if necessary, many Linux users were sceptical.
Also their recent acquisition of GitHub has raised concerns as many of the Linux Open Source projects are based on there and with Microsoft in charge what sort of issues will this present.
While many are understandably sceptical about Microsoft being involved with Linux, the company will probably fund many improvements to the way things are done in Linux, especially useful regarding things like drivers, as it can throw millions at the problem and not bat an eyelid.
However it’s equally dangerous because this goes back to a Microsoft philosophy that makes many nervous.  The philosophy is Embrace, Enhance and Extinguish.  This is where they introduce themselves to a system.  Add and convert many programs to standards written by Microsoft, then extinguish by leaving such programs without update making users move to something else.
While it will be virtually impossible to extinguish Linux.  The fact that Azure Sphere can only have programs written using Visual Studio.  Microsoft are already starting on the what it calls the Enhance stage.
They think that Visual Studio has enhanced the way to write Linux software.  Many I am sure would disagree vehemently.  If enough Windows guys take it up then there could be trouble on the horizon.
Several people think Linux is already getting a bit bloated without Microsoft attempting to fill up your hard drive.
There is even the Windows Subsystem for Linux available for Windows 10, should you feel you want it.  This allows you to run Linux binaries under Windows 10 through a compatibility layer instead of using a VM.
It’s a double edged sword as I see it.  They could help get drivers and support for applications due to funding the Linux Foundation but equally they could totally start clogging up the Linux system with proprietary software.
What do you think?
Topic 3 - Video Editing on Linux?
Recently a Youtuber by the name of EposVox (Adam Taylor) had a bit of a meltdown regarding trying to switch from Windows to Linux regarding video editing on his channel.
Now he was trying to use Davinci Resolve 15 as his primary video editor and I believe he was using Kubuntu to begin with.  Now EposVox has featured many videos on his channel regarding Linux and  certain aspects of it.
He was certainly no beginner.  However, he first found he could not access his video footage from external drives, as they were formatted ExFAT.  Kubuntu does not ship the ExFAT drivers by default, although you can install them using apt-get, as sketched below.
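As a rough illustration only: the package names below are the usual ones on Ubuntu-family releases of that era, and the device name and mount point are placeholders.
# Install ExFAT support on a Kubuntu/Ubuntu-family system
sudo apt-get update
sudo apt-get install exfat-fuse exfat-utils
# Re-plug the drive, or mount it by hand (device and mount point are examples only)
sudo mkdir -p /mnt/footage
sudo mount -t exfat /dev/sdb1 /mnt/footage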
Then, when he did get the files onto Linux over USB, he discovered DaVinci Resolve 15 Free Edition did not have the codecs to read the videos, as they were in 4K H.264 format. So he had to install DaVinci Resolve 15 Studio, but to do so he had to transfer one of the keys from one of his Windows machines.
Now the program could read his videos but would regularly crash for no apparent reason due to video driver issues.  In frustration he eventually gave up.
He also tried it under Linux Mint 19 which did include the ExFAT drivers but once again Davinci Resolve 15 Studio would just crash randomly due to video driver issues.
This caused him to post his video rant.  Now many tried to defend Linux saying he didn’t know what he was doing.  This I find hard to believe.  He had championed Linux for a while so he was far from incompetent.
However it does seem when it comes to using 4K video under normal Linux conditions, problems do tend to occur.  This is because they have not really got true codec support from the manufacturers.
A lot of the drivers have probably been written by enthusiasts who just wanted to see if it could be done so they probably are not the best drivers available.
Since camera manufacturers such as RED and other makers of 4K or higher cameras work almost exclusively with Microsoft or Apple, Linux is left out in the cold, as it's not paying them millions of dollars for drivers.
Most Linux content creators are working below 4K; normally 1080p is the limit.  Now I know the BBC use Linux for their editing, but that setup has probably been specifically built for them, and if they want the drivers they can afford to pay for them.
Until Linux gets competent 4K video codec support, a lot of professional video editors will probably stick with the likes of Adobe Premiere Pro or Final Cut Pro X.
Kdenlive has improved vastly over the years, and there are also OpenShot and Lightworks, but they all seem to be under-optimised or missing features video editors want.
It's not that Linux is bad at video editing.  Once again I think it is something that they will get in time, but for now anyone using 4K camera footage will not have the greatest of times working in the Linux environment.
I could be wrong about 4K video being awkward under Linux but that seems to be the opinion out there currently.
I have had issues running Davinci Resolve in the past so I can understand it can be frustrating.  It is very finicky with it’s setup and is far from stable under Linux.  Works fine under Windows however.
Topic 4 - New Mac ? No Linux for you then...
Well, it seems Apple has finally found a way to lock Linux out of the new 2018 range of Apple Macs, due to the implementation of the T2 security chip.
The T2 chip has also caused issues regarding upgrading.  If you upgrade a new Mac yourself and don't get an Apple-registered service member to do it, the likelihood is macOS Mojave will not start.
The reason is that Apple has supplied software to its registered engineers that allows them to reset the T2 security chip so it will start macOS.  Without this you will just get a black screen if you do it yourself.
Well this same T2 security chip is now locking out Linux so it can’t boot.  Previously Linux got around Mac boot issues by using the Windows Production CA 2011 certificate.
Apparently New Macs block Windows 10 too until you enable it in the Boot Camp settings.  However Linux does not have that luxury.
So is this the end of Linux on new Mac machines, or do you think Apple will give them access?  I can only imagine this would happen if Apple were paid a considerable amount of money.
If those who hack it decide to put their findings on the Internet so Linux can be booted, be prepared, as Apple is notoriously famous for sending legal teams after people.
I’m sure the likes of Canonical, Red Hat and SUSE will eventually cough up some money but other smaller distributions may not be able to afford such.
Sad really, but eventually someone will find a way around it, probably by bypassing the T2 security chip altogether or by making the system think it sees a T2 chip that will let it past.
Anyway that’s enough waffle for this episode.  Hopefully next time I will have more to report on what I have actually been doing with Linux not just general news opinion.
Until next time... Take Care.
johanlouwers · 7 years ago
Review: the Librem 13v2
Shawn Powers Thu, 05/03/2018 - 07:00
The Librem 13—"the first 13-inch ultraportable designed to protect your digital life"—ticks all the boxes, but is it as good in real life as it is on paper?
I don't think we're supposed to call portable computers "laptops" anymore. There's something about them getting too hot to use safely on your lap, so now they're officially called "notebooks" instead. I must be a thrill-seeker though, because I'm writing this review with the Librem 13v2 directly on my lap. I'm wearing pants, but apart from that, I'm risking it all for the collective. The first thing I noticed about the Librem 13? The company refers to it as a laptop. Way to be brave, Purism!
Why the Librem?
I have always been a fan of companies who sell laptops (er, notebooks) pre-installed with Linux, and I've been considering buying a Purism laptop for years. When our very own Kyle Rankin started working for the company, I figured a company smart enough to hire Kyle deserved my business, so I ordered the Librem 13 (Figure 1). And when I ordered it, I discovered I could pay with Bitcoin, which made me even happier!
Figure 1. The 13" Librem 13v2 is the perfect size for taking on the road (photo from Purism)
There are other reasons to choose Purism computers too. The company is extremely focused on privacy, and it goes so far as to have hardware switches that turn off the webcam and WiFi/Bluetooth radios. And because they're designed for open-source operating systems, there's no "Windows" key; instead there's a meta key with a big white rectangle on it, which is called the Purism Key (Figure 2). On top of all those things, the computer itself is rumored to be extremely well built, with all the bells and whistles usually available only on high-end top-tier brands.
Figure 2. No Windows key here! This beats a sticker-covered Windows logo any day (photo from Purism).
My Test Unit
Normally when I review a product, I get whatever standard model the company sends around to reviewers. Since this was going to be my actual daily driver, I ordered what I wanted on it. That meant the following:
i7-6500U processor, which was standard and not upgradable, and doesn't need to be!
16GB DDR4 RAM (default is 4GB).
500GB M.2 NVMe (default is 120GB SATA SSD).
Intel HD 520 graphics (standard, not upgradable).
1080p matte IPS display.
720p 1-megapixel webcam.
Elantech multitouch trackpad.
Backlit keyboard.
The ports and connectors on the laptops are plentiful and well laid out. Figure 3 shows an "all sides" image from the Purism website. There are ample USB ports, full-size HDMI, and the power connector is on the side, which is my preference on laptops. In this configuration, the laptop cost slightly more than $2000.
Figure 3. There are lots of ports, but not in awkward places (photo from Purism).
The Physical Stuff and Things
The Case
The shell of the Librem 13 is anodized aluminum with a black matte texture. The screen's exterior is perfectly plain, without any logos or markings. It might seem like that would feel generic or overly bland, but it's surprisingly elegant. Plus, if you're the sort of person who likes to put stickers on the lid, the Librem 13 is a blank canvas. The underside is nearly as spartan with the company name and little else. It has a sturdy hinge, and it doesn't feel "cheap" in any way. It's hard not to compare an aluminum case to a MacBook, so I'll say the Librem 13 feels less "chunky" but almost as solid.
The Screen
Once open, the screen has a matte finish, which is easy to see and doesn't have the annoying reflection so prevalent on laptops that have a glossy finish. I'm sure there's a benefit to a glossy screen, but whatever it might be, the annoying glare nullifies the benefit for me. The Librem 13's screen is bright, has a sufficient 1080p resolution, and it's pleasant to stare at for hours. A few years back, I'd be frustrated with the limitation of a 1080p (1920x1080) resolution, but as my eyes get older, I actually prefer this pixel density on a laptop. With a higher-res screen, it's hard to read the letters without jacking up the font size, eliminating the benefit of the extra pixels!
The Keyboard
I'm a writer. I'm not quite as old-school as Kyle Rankin with his mechanical PS/2 keyboard, but I am very picky when it comes to what sort of keys are on my laptop. Back in the days of netbooks, I thought a 93%-sized keyboard would be perfectly acceptable for lengthy writing. I was horribly wrong. I didn't realize a person could get cramps in their hands, but after an hour of typing, I could barely pick my nose much less type at speed.
The Librem 13's keyboard is awesome. I won't say it's the best keyboard I've ever used, but as far as laptops go, it's right near the top of the pile. Like most (good) laptops, the Librem 13 has Chicklet style keys, but the subtleties of click pressure, key travel, springiness factor and the like are very adequate. The Librem 13v2 has a new feature, in that the keys are backlit (Figure 4). Like most geeks, I'm a touch typist, but in a dark room, it's still incredibly nice to have the backlight. Honestly, I'm not sure why I appreciate the backlight so much, but I've tried both on and off, and I really hate when the keyboard is completely dark. That might just be a personal preference, but having the choice means everyone is happy.
Figure 4. I don't notice the keyboard after hours of typing, which is what you want in a keyboard (photo from Purism).
The Trackpad
The Librem 13 has a huge (Figure 5), glorious trackpad. Since Apple is known for having quality hardware, it's only natural to compare the Librem 13 to the Macbook Pro (again). For more than a decade, Apple has dominated the trackpad scene. Using a combination of incredible hardware and silky smooth software, the Apple trackpad has been the gold standard. Even if you hate Apple, it's impossible to deny its trackpads have been better than any other—until recently. The Librem 13v2 has a trackpad that is 100% as nice as MacBook trackpads. It is large, supports "click anywhere" and has multipoint support with gestures. What does all that mean? The things that have made Apple King of Trackpad Land are available not only on another company's hardware, but also with Linux. My favorite combination is two-finger scrolling with two-finger clicking for "right-click". The trackpad is solid, stable and just works. I'd buy the Librem 13 for the trackpad alone, but that's just a throwaway feature on the website.
Figure 5. This trackpad is incredible. It's worth buying the laptop for this feature alone (photo from Purism).
The Power Adapter
It might seem like a silly thing to point out, but the Librem 13 uses a standard 19-volt power adapter with a 5.5mm/2.5mm barrel connector. Why is that significant? Because I accidentally threw my power supply away with the box, and I was worried I'd have to special-order a new one. Thankfully, the dozen or so power supplies I have in my office from netbooks, NUCs and so on fit the Librem 13 perfectly. Although I don't recommend throwing your power supply away, it's nice to know replacements are easy to find online and probably in the back of your tech junk drawer.
Hardware Switches
I'm not as security-minded as perhaps I should be. I'm definitely not as security-minded as many Linux Journal readers. I like that the Librem 13 has physical switches that disconnect the webcam and WiFi/Bluetooth. For many of my peers, the hardware switches are the single biggest selling point. There's not much to say other than that they work. They physically switch right to left as opposed to a toggle, and it's clear when the physical connection to the devices have been turned off (Figure 6). With the Librem 13, there's no need for electrical tape over the webcam. Plus, using your computer while at DEFCON isn't like wearing a meat belt at the dog pound. Until nanobots become mainstream, it's hard to beat the privacy of a physical switch.
Figure 6. It's not possible to accidentally turn these switches on or off, which is awesome (photo from Purism).
I worried a bit about how the operating systems would handle hardware being physically disconnected. I thought perhaps you'd need special drivers or custom software to handle the disconnect/reconnect. I'm happy to report all the distributions I've tried have handled the process flawlessly. Some give a pop-up about devices being connected, and some quietly handle it. There aren't any reboots required, however, which was a concern I had.
Audio/Video
I don't usually watch videos on my laptop, but like most people, I will show others around me funny YouTube videos. The audio on the Librem 13 is sufficiently loud and clear. The video subsystem (I mention more about that later) plays video just fine, even full screen. There is also an HDMI port that works like an HDMI connection should. Modern Linux distributions are really good at handling external displays, but every time I plug in a projector and it just works, my heart sings!
PureOS
The Librem 13 comes with Purism's "PureOS" installed out of the box. The OS is Debian-based, which I'm most comfortable using. PureOS uses its own repository, hosted and maintained by Purism. One of the main reasons PureOS exists is so that Purism can make sure there is no closed-source code or proprietary drivers installed on its computers. Although the distro includes tons of packages, the really impressive thing is how well the laptop works without any proprietary code. The "purity" of the distribution is comforting, but the standout feature is how well Purism chose the hardware. Anyone who has used Linux laptops knows there's usually a compromise regarding proprietary drivers and wrappers in order to take full advantage of the system. Not so with the Librem 13 and PureOS. Everything works, and works well.
PureOS works well, but the most impressive aspect of it is what it does while it's working. The pre-installed hard drive walks you through encryption on the first boot. The Firefox-based browser (called "Purebrowser") uses HTTPS: Everywhere, defaults to DuckDuckGo as the search engine, and if that's not sufficient for your privacy needs, it includes the Tor browser as well. The biggest highlight for me was that since Purebrowser is based on Firefox, the browsing experience wasn't lacking. It didn't "feel" like I was running a specialized browser to protect my identity, which makes doing actual work a lot easier.
Other Distributions
Although I appreciate PureOS, I also wanted to try other options. Not only was I curious, but honestly, I'm stuck in my ways, and I prefer Ubuntu MATE as my desktop interface. The good news is that although I'm not certain the drivers are completely open source, I am sure that Ubuntu installs and works very well. There are a few glitches, but nothing serious and nothing specific to Ubuntu (more on those later).
I tried a handful of other distributions, and they all worked equally well. That makes sense, since the hardware is 100% Linux-compatible. There was an issue with most distributions, which isn't the fault of the Librem 13. Since my system has the M.2 NVMe as opposed to a SATA SSD, most installers have a difficult time determining where to install the bootloader. Frustratingly, several versions of the Ubuntu installer don't allow the correct partition to be selected manually either. The workaround seems to be setting up hard drive partitions manually, which allows the bootloader partition to be selected. (For the record, it's /dev/nvme0n1.) Again, this isn't Purism's fault; rather, it's the Linux community getting up to speed with NVMe drives and EFI boot systems.
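For anyone hitting the same wall, a rough sketch of the manual route follows; it assumes an Ubuntu-family installer with GRUB on an EFI system, and the partition layout named in the comments is only an example.
# Identify the NVMe disk and its partitions first
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1
# In the installer, choose manual ("Something else") partitioning: create an EFI System
# Partition (e.g. /dev/nvme0n1p1, mounted at /boot/efi) plus root and swap partitions.
# If GRUB still lands in the wrong place, it can be reinstalled from the installed system:
sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
sudo update-grub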
Quirks
There are a few oddities with a freshly installed Librem 13. Most of the quirks are ironed out if you use the default PureOS, but it's worth knowing about the issues in case you ever switch.
NVMe Thing
As I mentioned, the bootloader problem with an NVMe system is frustrating enough that it's worth noting again in this list. It's not impossible to deal with, but it can be annoying.
Backslash Key
The strangest quirk with the Librem 13 is the backslash key. It doesn't map to backslash. On every installation of Linux, when you try to type backslash, you get the "less than" symbol. Thankfully, fixing things like keyboard scancodes is simple in Linux, but it's so strange. I have no idea how the non-standard scancode slipped through QA, but nonetheless, it's something you'll need to deal with. There's a detailed thread on the Purism forum that makes fixing the problem simple and permanent.
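The forum thread has the authoritative instructions; purely as an illustration of the mechanism, a udev hwdb override of a keyboard scancode looks roughly like the sketch below. The scancode 56 and the broad match rule are assumptions to verify (for example with evtest) before applying.
# Sketch of remapping a scancode via udev hwdb (verify the scancode with `sudo evtest` first)
sudo tee /etc/udev/hwdb.d/95-librem13-keyboard.hwdb > /dev/null <<'EOF'
evdev:atkbd:dmi:*
 KEYBOARD_KEY_56=backslash
EOF
sudo systemd-hwdb update
sudo udevadm trigger --sysname-match="event*"
# Note: the match above applies to every AT keyboard; narrow it with your machine's DMI strings.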
Trackpad Stuff
As I mentioned before, the trackpad on the Librem 13 is the nicest I've ever used on a non-Apple laptop. The oddities come with various distributions and their trackpad configuration software. If your distribution doesn't support the gestures and/or multipoint settings you expect, rest assured that the trackpad supports every feature you are likely to desire. If you can't find the configuration in your distro's setup utility, you might need to dig deeper.
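If you do have to dig deeper, a hedged sketch using xinput under an X session with the libinput driver might look like this; the device name is a guess, so check the output of `xinput list` for yours.
# List input devices and inspect the touchpad's properties
xinput list
xinput list-props "ETPS/2 Elantech Touchpad"    # device name here is a guess — use yours
# Example: enable tap-to-click and natural scrolling if the GUI doesn't expose them
xinput set-prop "ETPS/2 Elantech Touchpad" "libinput Tapping Enabled" 1
xinput set-prop "ETPS/2 Elantech Touchpad" "libinput Natural Scrolling Enabled" 1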
The Experience and Summary
The Librem 13 is the fastest laptop I've ever used. Period. The system boots up from a cold start faster than most laptops wake from sleep. Seriously, it's insanely fast. I ran multiple VMs without any significant slowdowns, and I was able to run multiple video-intensive applications without thinking "laptops are so slow" or anything like that.
The only struggle I had was when I tried to use the laptop for live streaming to Facebook using OBS (Open Broadcast Studio). The live transcoding really taxed the CPU. It was able to keep up, but normally on high-end computers, it's easier to offload the transcoding to a discrete video card. Unfortunately, there aren't any non-Intel video systems that work well without proprietary drivers. That means even though the laptop is as high-end as they get, the video system works well, but it can't compare to a system with a discrete NVIDIA video card.
Don't let the live streaming situation sour your view of the Librem 13 though. I had to try really hard to come up with something that the Librem 13 didn't chew through like the desktop replacement it is. And even with my live streaming situation, I was able to transcode the video using the absurdly fast i7 CPU. This computer is lightning fast, and it's easily the best laptop I've ever owned. More than anything, I'm glad this is a system I purchased and not a "review copy", so I don't have to send it back!
just4programmers · 8 years ago
Ongoing updates to Linux on Windows 10 and Important Tips!
I noticed this blog post about Ubuntu over at the Microsoft Command Line blog. Ubuntu is now available from the Windows Store for builds of Windows over 16215.
You can run "Winver" to see your build number of Windows. If you run Windows 10 you can certainly sign up for the Windows Insiders builds, or you can wait a few months until these features make their way to the mainstream. I've been running Windows 10 Insiders "Fast ring" for a while with a few issues but nothing blocking.
The addition of Ubuntu to the Windows Store may initially seem confusing or even a little bizarre. However, given a minute to understand the larger architecture, it makes a lot of sense. That said, for those of us who have been beta-testing these features, the move to the Windows Store will require some manual steps in order for you to reap the benefits.
Here's how I see it.
For the early betas of the Windows Subsystem for Linux you type bash from anywhere and it runs Ubuntu on Windows.
Ubuntu on Windows hides its filesystem in C:\Users\scott\AppData\Local\somethingetcetc and you shouldn't go there or touch it.
Moving the tar files and Linux distro installation into the Store allows us users to use the Store's CDN (Content Distribution Network) to get distros quickly and easily.
Just turn on the feature and REBOOT
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
then hit the store to get the binaries!
Ok, now this is where and why it gets interesting.
Soon (later this month I'm told) we will be able to have n number of native Linux distros on our Windows 10 machines at one time. You can install as many as you like from the store. No VMs, just fast Linux...on Windows!
Windows 10 includes a utility for the Windows Subsystem for Linux called "wslconfig".
C:\>wslconfig
Performs administrative operations on Windows Subsystem for Linux

Usage:
    /l, /list [/all] - Lists registered distributions.
        /all - Optionally list all distributions, including distributions that
               are currently being installed or uninstalled.
    /s, /setdefault <DistributionName> - Sets the specified distribution as the default.
    /u, /unregister <DistributionName> - Unregisters a distribution.

C:\WINDOWS\system32>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Fedora
OpenSUSE
At this point when I type "bash" at the regular Windows command prompt or PowerShell I will be launching my default Linux. I can also just type "Ubuntu" or "Fedora," etc to get a specific one.
If I wanted to test my Linux code (.NET, node, go, ruby, whatever) I could script it from Windows and run my tests on n number of distros. Slick for developers.
TODOs if you have WSL and Bash from earlier betas
If you already have "bash" on your Windows 10 machine and want to move to the "many distros" you'll just install the Ubuntu distro from the store and then move your distro customizations out of the "legacy/beta bash" over to the "new train but beta although getting closer to release WSL." I copied my ~/ folder over to /mnt/c/Users/Scott/Desktop/WSLBackup, then opened Ubuntu and copied my .rc files and whatnot back in. Then I removed my original bash with lxrun /uninstall. Once I've done that, my distro are managed by the store and I can have as many as I like. Other than customizations, it's really easy (like, it's not a big deal and it's fast) to add or remove Linuxes on Windows 10 so fear not. Backup your stuff and this will be a 10 min operation, plus whatever apt-get installs you need to redo. Everything else is the same and you'll still want to continue storing and sharing files via /mnt/c.
NOTE: I did a YouTube video called Editing code and files on Windows Subsystem for Linux on Windows 10 that I'd love if you checked out and shared on social media!
Enjoy!
© 2017 Scott Hanselman. All rights reserved.
itsitatech-blog · 8 years ago
Divisive Android support for Windows Mobile dropped; no one knows why Microsoft has deferred the tech, maybe forever.
Project Astoria, the Android compatibility layer for Windows Mobile, has been delayed, perhaps even indefinitely.
Seven months back, Microsoft announced a quartet of "bridges" to open up the Windows Store to more developers and more apps. Two of these—Centennial, for wrapping existing Win32 software in a Store-friendly package, and Westminster, for publishing apps using Web tech—were largely uncontroversial, but the other two—Islandwood, for porting iOS apps, and Astoria, for running Android apps—raised the ire of existing Windows developers.
Astoria in particular was seen practically as negating the effort and investment that many Windows devs had made in Microsoft's Universal Windows Platform (UWP), the set of APIs that allows apps to target Windows on PCs, tablets, and smartphones and which next year will also be usable on the Xbox One and HoloLens.
Islandwood was open sourced earlier in the year (and I have parts of source code that have been incorporated into the project). It was also seen as less contentious. Apps using Islandwood still have to be ported to UWP; the Islandwood libraries enable developers to reuse more of their existing iOS code on Windows, but the tools still require a non-trivial effort on the part of app developers. Islandwood apps and Islandwood developers are still UWP apps and UWP developers.
But Astoria didn't require this level of involvement. Some insider previews of Windows 10 Mobile have included the Android subsystem—though the most recent builds have not—and it enabled the use of unmodified Android packages. Zero effort was needed to make the apps into "Windows" apps; Android programs could simply run directly on Windows 10 Mobile. Full details aren't known, since Microsoft has released next to nothing publicly about Astoria, but a limited facility to customize the apps to take advantage of Windows features exists, so the Android apps can be changed to take a little advantage of the fact that they're running on Windows. But this is all entirely optional; with Astoria, Android developers could just dump their apps onto Windows with no investment in or knowledge of UWP.
This made Astoria somewhat unpopular among Windows developers. Why bother learning UWP and having Windows as a primary target when you could simply learn Android development and have a considerably bigger smartphone audience? With Astoria, there was just no point.
The removal of Astoria in recent builds had been noted, but at first there was uncertainty as to why. On Friday, Windows Central reported that Astoria's very future was in doubt and that it was delayed indefinitely. In a statement, Microsoft confirmed that Astoria is at the very least delayed:
We're committed to offering developers many options to bring their apps to the Windows Platform, including bridges available now for Web and iOS, and soon Win32. The Astoria bridge is not ready yet, but other tools offer great options for developers. For example, the iOS bridge enables developers to write a native Windows Universal app which calls UWP APIs directly from Objective-C, and to mix and match UWP and iOS concepts such as XAML and UIKit. Developers can write apps that run on all Windows 10 devices and take advantage of native Windows features easily. We're grateful for the feedback from the development community and look forward to supporting them as they develop apps for Windows 10.
It's still not precisely clear why Astoria has been scrapped, but there are a couple of plausible reasons. Arguably the most compelling (and most obvious) is that the mere existence of Astoria is simply too damaging to UWP and the existing Windows development community. While Astoria provides a quick fix for the Windows Mobile "app gap"—essentially, stick the Android apps into the Windows Store—it does so in a way that ends any hope that developers will build genuine UWP applications.
Of course, one might question whether it's Windows developers that Microsoft really should be courting. The purpose of both Astoria and Islandwood is to make Windows more appealing to non-Windows developers. So far, Windows developers haven't been making the sort of must-have big-name apps that both iOS and Android enjoy. While the harm that Astoria does to UWP is plain, a case could be made that the upside, a greatly improved app ecosystem for Windows Mobile, makes it a price worth paying. Anecdotally, I've already seen a few people tweeting that the removal of Astoria has made their decision for them and that their next phone won't be a Windows phone. As bad as Astoria might be for UWP, it clearly appealed to at least some putative Windows phone users.
Windows Central reports that there were some performance issues stemming from Astoria, and that technical aspect may have been a problem. While this may account for a short delay—bugs take time to fix, after all—it seems like little reason to postpone Astoria indefinitely as has been done. The core Astoria technology is known to work; the previews have shown that much.
Both Islandwood and Astoria also have something in common: their questionable legal status. If Oracle eventually beats Google and the US courts rule that APIs are copyrightable, Microsoft's wholesale lifting of both the iOS and Android APIs for Islandwood and Astoria would be in legal hot water; Apple would be able to demand that Microsoft kill off Islandwood, and both Oracle and Google would have a legal basis for blocking Astoria.
Resources may also play a part. Windows Central reports that some 60-80 people were working on Astoria, compared with five on Islandwood. That may be true, but it cuts both ways; Islandwood currently feels very under-resourced, with progress being very slow.
In the end, this may turn out to be a temporary postponement after all. If Windows 10's momentum doesn't translate into broader development and adoption of UWP applications, Microsoft may have nothing to lose by supporting Android applications on Windows.
The Astoria tech is also compelling for more than just Android support. Windows used to have a POSIX subsystem (variously called Interix, Services for UNIX, and Subsystem for UNIX-based Applications) that allowed UNIX applications to be compiled and run on Windows. In the past, this subsystem has had support that can best be described as lackluster, and it was dropped in Windows 8.1. Reviving it, however, would have some appeal: many open source applications, particularly in the server space, aren't developed for Windows. Giving Windows more direct compatibility with Linux (or perhaps even FreeBSD) would in some measure fill a different, server-room "app gap." Astoria, which supports numerous Linux APIs on Windows, could form a part of this.
thinkdash · 8 years ago
From "Linux is a cancer" to Windows Subsystem for Linux.
Since the early 1990s, when Windows became much more popular in the enterprise, people have been trying to put Unix and Linux into places where they don't want to be, using toolkits that implement just enough of the Portable Operating System Interface (POSIX) standard to feel like Unix.
The reasons are pretty simple: a lot of open source tools, especially development tools, are primarily targeted to Unix/Linux platforms. Although the most important ones have been ported to Windows, they are designed to work best when they have access to a Unix/Linux shell scripting environment and the many utilities that come with those platforms.
Today, there are many options availability for getting Unix or Linux functionality on Windows, and the most recent one, the Windows Subsystem for Linux (WSL), provides a Linux environment that is deeply integrated with Windows. But before I get into that, I’ll look at the some of the other options.
The early days
In the early 1990s, you could get a pretty good Unix shell, compiler, and text utilities on DOS using DJGPP. It was also around this time that you could simply install Linux on a computer if you wanted a pure Unix-like environment. Linux had none of the limitations that a Unix-like subsystem had, but it also meant that you had to convert in total to Linux.
So, if you wanted—or needed—to run a Unix-like environment alongside a Microsoft operating system, you needed a subsystem like DJGPP. And in order to comply with US Federal Information Processing Standards and be considered for defense-related projects, Microsoft needed one, too.
Figure 1. Running GNU bash under DJGPP in a modern DOS emulator (DOSBox).
Windows NT, Microsoft’s first foray into true multitasking and multi-user operating systems, is the basis of their current operating system offerings, whether you’re running it on a phone, computer, or Raspberry Pi. Although it shares superficial traits with Unix and Linux, internally, it is not at all like them, which wasn’t surprising considering when Windows NT was born: in 1993, Unix was something you’d find on a workstation or server. Apple’s Macs were running their venerable System 7, and it would be eight more years before Mac OS X would come out, itself a Unix-based operating system.
The difference between Windows and Unix meant that when you ported a Unix application to Windows, there was substantial functionality, such as the fork() system call, that was simply not available. Because of this, the number of Unix programs that could be fully ported to Windows was fairly small, and many programs had reduced functionality as a result.
Over the years, Microsoft kept Unix and Linux at arm’s length. To comply with those federal standards, it supplied a bare minimum POSIX subsystem, just enough to satisfy the standards. The POSIX subsystem was eventually replaced with something called the Windows Services for Unix, which provided a Unix shell, a compiler, and a lot of command-line tools. But it wasn’t enough.
The age of Linux
It didn’t take long for Linux to define what the Unix-like experience should be on a PC. Full access to networking, a GUI, but also, an enormous library of software available for you to install. Out of the box, an Ubuntu system has more than 50,000 packages available to install. Do you need a simulator for electronic circuits? There are several available. How about a cross-compiler for MIPS, ARM, or PowerPC? There’s that, and more: an incredible number of programming languages, specialized software for ham radio, thousands of tools for manipulating text. Much of what Linux has to offer is enabled by the GNU project, which created the compiler, shell, and the most common utilities you’ll use day-to-day.
So it wasn’t surprising that the go-to Unix environment for Windows would look a lot like Linux. Cygwin, a Unix-like subsystem for Windows, implements enough of the POSIX API that it can include thousands of software packages ported from the Unix and Linux universe. Cygwin provides a shell, all the tools and compilers you’d expect from a Linux system, and implements the Unix (POSIX) system calls through a Windows DLL. If you run a Windows executable from within the Cygwin shell, it will run just like any other Windows program. But if you run an executable that was compiled for Cygwin, it will call into cygwin1.dll to provide the POSIX APIs.
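To make the Cygwin model concrete, here is a minimal sketch (the program name is hypothetical, and it assumes Cygwin's gcc and ldd packages are installed):
$ gcc -o forktest forktest.c      # forktest.c: a small program that calls the POSIX fork()
$ ./forktest.exe
$ ldd forktest.exe                # the dependency list should include cygwin1.dll
Because the binary was compiled against Cygwin, the fork() call is translated by cygwin1.dll into Windows primitives at runtime.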
In 2016, two unusual things happened that changed the relationship between Microsoft and the Unix/Linux world.
Windows and Linux, closer than ever
First, Microsoft partnered with Canonical, the makers of Ubuntu, to bring a full Ubuntu subsystem to Windows 10, the Windows Subsystem for Linux (WSL) beta. Its point of entry is the bash shell. After you enable the subsystem, run the bash command from Windows and you’ll be dropped into a bash shell. It doesn’t look all that different from Cygwin or from Linux or Mac OS X, for that matter. The difference is that from the perspective of a program that was compiled to run under x86 Linux, WSL is indistinguishable from Linux. To all intents and purposes, WSL is Ubuntu 16.04 Xenial. There are a few system calls not yet implemented, but that list gets smaller and smaller with each Insider Preview Release of Windows.
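If you are curious how complete the illusion is, a couple of quick checks from inside the WSL bash prompt are suggestive (this assumes the usual Ubuntu userland tools are present; output varies by build):
$ uname -sr              # reports a Linux kernel version string
$ lsb_release -a         # identifies the distribution as Ubuntu 16.04 LTS (xenial)
$ apt-cache policy bash  # shows the package coming from ordinary Ubuntu repositories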
Figure 2. Running Firefox under X11 from Bash on Windows.
Second, Microsoft became a Platinum member of the Linux Foundation. Fifteen years ago, Microsoft’s position was that Linux is a cancer. A lot has changed inside Microsoft. A lot has changed in the world outside Microsoft, as well.
Today, Unix and Unix-like are everywhere: Apple’s phones, tablets, and computers are all based on a Unix operating system. Linux has essentially come to define the modern Unix experience, and both Android and the Raspberry Pi operating systems are running a Linux kernel. Why Unix and Linux? First of all, Linux is free, so it’s an easy choice for anyone making a new device (Apple’s choice of Unix is historical; the OPENSTEP operating system, on which macOS is based, was a Unix variant). Second, because so many tools and applications are available (also free) for Linux and Unix, there aren’t as many wheels that need to be reinvented.
Now, Microsoft has a bash shell. Because of the deep integration with Ubuntu, it uses the same package repositories as any other machine running Ubuntu Xenial. Most of the applications in those packages will run just fine on Windows.
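In practice that means the standard Ubuntu workflow just works. A quick sketch (the package names here are only illustrative):
$ sudo apt-get update
$ sudo apt-get install build-essential git python3
$ gcc --version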
Years ago, when Apple chose Unix as its foundation for Mac OS X, we came out with Mac OS X for Unix Geeks, and some 60,000 readers used it to figure out exactly how Apple had rearranged their familiar environment. Over the years, Mac OS X’s Unix became a lot closer to Linux, adopting many of the same core libraries and utilities. And as Mac OS X matured, it’s now the norm for open source software to compile without all sorts of strange workarounds.
It’s a little different for WSL. From day one, WSL is Linux. Sure, there are some differences. For example, WSL doesn’t include an X server. But that’s a small matter of installing any one of the excellent X servers available for Windows, and setting your DISPLAY environment variable appropriately. Getting an X server running with WSL opens up the full Linux experience, so it should be one of the first things you do. I use and recommend the free and open source VcXsrv X Server.
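Once an X server such as VcXsrv is running on the Windows side, the setup from the WSL shell is only a couple of lines (this sketch assumes the server is listening on display :0; xeyes comes from the x11-apps package):
$ export DISPLAY=localhost:0
$ sudo apt-get install x11-apps
$ xeyes &
Adding the export line to ~/.bashrc saves retyping it in every new session.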
If you’re a Windows user who needs to learn WSL, you can turn to the same resources that have been serving Linux (and Unix) users for years: If you’re completely new to the Unix/Linux shell, our free report “Ten Steps to Linux Survival: Essentials for Navigating the Bash Jungle” will get you on your feet. And when you’re ready for more, our Bash Cookbook includes 15 recipes just on things that are likely to trip up Bash novices, as well as dozens of recipes on sorting, parsing, and automating day-to-day tasks. If you’ve got some familiarity with the Bash shell or Unix/Linux in general, Bash Pocket Reference, 2nd Edition, can help you find what you need when you need it.
As betas go, WSL is incredibly well-baked. If you’re on the Windows insider program, you’ll benefit from frequent updates to WSL, and you can monitor progress on their GitHub issue tracker, which is full of feature requests, discussions, and the hopes and dreams of everyone who wants a little bit of Linux in their Windows. The high water mark I’m waiting for is if Docker will run natively on WSL. That would present an attractive alternative to the Docker on Windows solutions that depend on Hyper-V or VirtualBox and would be a huge step forward in resource utilization. Here’s to hoping we get there in 2017.
Continue reading How we got Linux on Windows.
Think-Dash.com
0 notes
lbcybersecurity · 8 years ago
Text
The command-line, for cybersec
On Twitter I made the mistake of asking people about command-line basics for cybersec professionals. I got a lot of useful responses, which I summarize in this long (5k words) post. It’s mostly driven by the tools I use, with a bit of input from the tweets I got in response to my query.
bash
By command-line this document really means bash. There are many types of command-line shells. Windows has two, 'cmd.exe' and 'PowerShell'. Unix started with the Bourne shell ‘sh’, and there have been many variations of it over the years: ‘csh’, ‘ksh’, ‘zsh’, ‘tcsh’, etc. When GNU rewrote Unix user-mode software independently, they called their shell “Bourne Again Shell” or “bash” (cue "JSON Bourne" shell jokes here). Bash is the default shell for Linux and macOS. It’s also available on Windows, as part of the special “Windows Subsystem for Linux”. The Windows version of ‘bash’ has become my most used shell. For Linux IoT devices, BusyBox is the most popular shell. It’s easy to learn, as it includes feature-reduced versions of popular commands.
man
‘Man’ is the command you should not run if you want help for a command. Man pages are designed to drive away newbies. They are only useful if you are already mostly an expert with the command you want help on. Man pages list all possible features of a program, but do not highlight examples of the most common features, or the most common way to use the commands. Take ‘sed’ as an example. It’s used most commonly to do a search-and-replace in files, like so:
$ sed 's/rob/dave/' foo.txt
This usage is so common that many non-geeks know of it. Yet, if you type ‘man sed’ to figure out how to do a search and replace, you’ll get nearly incomprehensible gibberish, and no example of this most common usage. I point this out because most guides on using the shell recommend ‘man’ pages to get help. This is wrong; it’ll just endlessly frustrate you. Instead, google the commands you need help on, or better yet, search StackExchange for answers. You might try asking questions, like on Twitter or forum sites, but this requires a strategy. If you ask a basic question, self-important dickholes will respond by telling you to “rtfm” or “read the fucking manual”. A better strategy is to exploit their dickhole nature, such as saying “too bad command xxx cannot do yyy”. Helpful people will gladly explain why you are wrong, carefully explaining how xxx does yyy. If you must use 'man', use the 'apropos' command to find the right man page. Sometimes multiple things in the system have the same or similar names, leading you to the wrong page.
apt-get install yum
Using the command-line means accessing that huge open-source ecosystem. Most of the things in this guide do not already exist on the system. You have to either compile them from source, or install them via a package manager. Linux distros ship with a small footprint, but have a massive database of precompiled software “packages” in the cloud somewhere. Use the package manager to install the software from the cloud. On Debian-derived systems (like Ubuntu, Kali, Raspbian), type “apt-get install masscan” to install “masscan” (as an example). Use “apt-cache search scan” to find a bunch of scanners you might want to install. On RedHat systems, use “yum” instead. On BSD, use the “ports” system, which you can also get working for macOS. If no pre-compiled package exists for a program, then you’ll have to download the source code and compile it. There’s about an 80% chance this will work easily, following the instructions. There is a 20% chance you’ll experience “dependency hell”, for example, needing to install two mutually incompatible versions of Python.
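For the 80% case, the build is usually just a few commands. Here is a minimal sketch, using masscan’s public GitHub repository as the example and assuming git and a C toolchain (gcc, make) are already installed:
$ git clone https://github.com/robertdavidgraham/masscan
$ cd masscan
$ make
The README of each project is the authority; many other projects follow the './configure && make && sudo make install' pattern instead.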
Bash is a scripting language
Don’t forget that shells are really scripting languages. The bit that executes a single command is just a degenerate use of the scripting language. For example, you can do a traditional for loop like:
$ for i in $(seq 1 9); do echo $i; done
In this way, ‘bash’ is no different than any other scripting language, like Perl, Python, NodeJS, PHP CLI, etc. That’s why a lot of stuff on the system actually exists as short ‘bash’ programs, aka shell scripts. Few want to write bash scripts, but you are expected to be able to read them, either to tweak existing scripts on the system, or to read StackExchange help.
File system commands
The macOS “Finder” or Windows “File Explorer” are just graphical shells that help you find files, open them, and save them. The first commands you learn are for the same functionality on the command-line: pwd, cd, ls, touch, rm, rmdir, mkdir, chmod, chown, find, ln, mount. The command “rm -rf /” removes everything starting from the root directory. This will also follow mounted server directories, deleting files on the server. I point this out to give an appreciation of the raw power you have over the system from the command-line, and how easily you can disrupt things. Of particular interest is the “mount” command. Desktop versions of Linux typically mount USB flash drives automatically, but on servers, you need to do it manually, e.g.:
$ mkdir ~/foobar
$ mount /dev/sdb ~/foobar
You’ll also use the ‘mount’ command to connect to file servers, using the “cifs” package if they are Windows file servers:
# apt-get install cifs-utils
# mkdir /mnt/vids
# mount -t cifs -o username=robert,password=foobar123 //192.168.1.11/videos /mnt/vids
Linux system commands
The next commands you’ll learn are for sysadmin work on the Linux system: ps, top, who, history, last, df, du, kill, killall, lsof, lsmod, uname, id, shutdown, and so on. The first thing hackers do when hacking into a system is run “uname” (to figure out what version of the OS is running) and “id” (to figure out which account they’ve acquired, like “root” or some other user). The Linux system command I use most is “dmesg” (or ‘tail -f /var/log/dmesg’), which shows you the raw system messages. For example, when I plug in USB drives to a server, I look in ‘dmesg’ to find out which device was added so that I can mount it. I don’t know if this is the best way, it’s just the way I do it (servers don’t automount USB drives like desktops do).
Networking commands
The permanent state of the network (what gets configured on the next bootup) is configured in text files somewhere. But there are a wealth of commands you’ll use to view the current state of networking, make temporary changes, and diagnose problems. The ‘ifconfig’ command has long been used to view the current TCP/IP configuration and make temporary changes. Learning how TCP/IP works means playing a lot with ‘ifconfig’. Use “ifconfig -a” for even more verbose information. Use the “route” command to see if you are sending packets to the right router. Use the ‘arp’ command to make sure you can reach the local router. Use ‘traceroute’ to make sure packets are following the correct route to their destination. You should learn the nifty trick it’s based on (TTLs). You should also play with the TCP, UDP, and ICMP options. Use ‘ping’ to see if you can reach the target across the Internet.
It usefully measures the latency in milliseconds, and congestion (via packet loss). For example, ping NetFlix throughout the day, and notice how the ping latency increases substantially during “prime time” viewing hours. Use ‘dig’ to make sure DNS resolution is working right. (Some use ‘nslookup’ instead.) Dig is useful because it’s the raw universal DNS tool – every time they add some new standard feature to DNS, they add that feature into ‘dig’ as well. The ‘netstat -tualn’ command views the current TCP/IP connections and which ports are listening. I forget what the various options “tualn” mean, only that it’s the output I always want to see, rather than the raw “netstat” command by itself. You’ll want to use ‘ethtool -k’ to turn off checksum and segmentation offloading. These are features that sometimes break packet captures. There is this newfangled ‘ip’ system for Linux networking, replacing many of the above commands, but as an old timer, I haven’t looked into that. Some other tools for diagnosing local network issues are ‘tcpdump’, ‘nmap’, and ‘netcat’. These are described in more detail below.
ssh
In general, you’ll remotely log into a system in order to use the command-line. We use ‘ssh’ for that. It uses a protocol similar to SSL in order to encrypt the connection. There are two ways to use ‘ssh’ to log in, with a password or with a client-side certificate. When using SSH with a password, you type “ssh username@servername”. The remote system will then prompt you for a password for that account. When using client-side certificates, use “ssh-keygen” to generate a key, then either copy the public-key of the client to the server manually, or use “ssh-copy-id” to copy it using the password method above. How this works is a basic application of public-key cryptography. When logging in with a password, you get a copy of the server’s public-key the first time you login, and if it ever changes, you get a nasty warning that somebody may be attempting a man in the middle attack.
$ ssh [email protected]
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
When using client-side certificates, the server trusts your public-key. This is similar to how client-side certificates work in SSL VPNs. You can use SSH for things other than logging into a remote shell. You can script ‘ssh’ to run commands remotely on a system in a local shell script. You can use ‘scp’ (SSH copy) to transfer files to and from a remote system. You can do tricks with SSH to create tunnels, which is a popular way to bypass the restrictive rules of your local firewall nazi.
openssl
This is your general cryptography toolkit, doing everything from simple encryption, to public-key certificate signing, to establishing SSL connections. It is extraordinarily user hostile, with terrible inconsistency among options. You can only figure out how to do things by looking up examples on the net, such as on StackExchange. There are competing SSL libraries with their own command-line tools, like GnuTLS and Mozilla NSS, that you might find easier to use. The fundamental use of the ‘openssl’ tool is to create public-keys, “certificate requests” (CSRs), and self-signed certificates. All the web-site certificates I’ve ever obtained have been created using the openssl command-line tool to generate CSRs.
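The key/CSR/self-signed-certificate dance is worth memorizing. A minimal sketch (the file names are placeholders; a real CSR would be sent to a certificate authority rather than self-signed):
$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -out server.csr
$ openssl x509 -req -in server.csr -signkey server.key -days 365 -out server.crt
The second command prompts interactively for the certificate subject; the third turns the request into a self-signed certificate good for a year.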
You should practice using the ‘openssl’ tool to encrypt files, sign files, and check signatures. You can use openssl just like PGP for encrypted emails/messages, but following the “S/MIME” standard rather than the PGP standard. You might consider learning the ‘pgp’ command-line tools, or the open-source ‘gpg’ or ‘gpg2’ tools as well. You should learn how to use the “openssl s_client” feature to establish SSL connections, as well as the “openssl s_server” feature to create an SSL proxy for a server that doesn’t otherwise support SSL. Learning all the ways of using the ‘openssl’ tool to do useful things will go a long way in teaching somebody about crypto and cybersecurity. I can imagine an entire class consisting of nothing but learning ‘openssl’.
netcat (nc, socat, cryptcat, ncat)
A lot of Internet protocols are based on text. That means you can create a raw TCP connection to the service and interact with it using your keyboard. The classic tool for doing this is known as “netcat”, abbreviated “nc”. For example, connect to Google’s web server at port 80 and type the HTTP HEAD command followed by a blank line (hit [return] twice):
$ nc www.google.com 80
HEAD / HTTP/1.0
HTTP/1.0 200 OK
Date: Tue, 17 Jan 2017 01:53:28 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=95=o7GT1uJCWTPhaPAefs4CcqF7h7Yd7HEqPdAJncZfWfDSnNfliWuSj3XfS5GJXGt67-QJ9nc8xFsydZKufBHLj-K242C3_Vak9Uz1TmtZwT-1zVVBhP8limZI55uXHuPrejAxyTxSCgR6MQ; expires=Wed, 19-Jul-2017 01:53:28 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding
Another classic example is to connect to port 25 on a mail server to send email, spoofing the “MAIL FROM” address. There are several versions of ‘netcat’ that work over SSL as well. My favorite is ‘ncat’, which comes with ‘nmap’, as it’s actively maintained. In theory, “openssl s_client” should also work this way.
nmap
At some point, you’ll need to port scan. The standard program for this is ‘nmap’, and it’s the best. The classic way of using it is something like:
# nmap -A scanme.nmap.org
The ‘-A’ option means to enable all the interesting features like OS detection, version detection, and basic scripts on the most common ports that a server might have open. It takes a while to run. The host “scanme.nmap.org” is a good site to practice on. Nmap is more than just a port scanner. It has a rich scripting system for probing more deeply into a system than just a port, and for gathering more information useful for attacks. The scripting system essentially contains some attacks, such as password guessing. Scanning the Internet, finding services identified by ‘nmap’ scripts, and interacting with them with tools like ‘ncat’ will teach you a lot about how the Internet works. BTW, if ‘nmap’ is too slow, use ‘masscan’ instead. It’s a lot faster, though it has much more limited functionality.
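As a rough sketch of how the two differ in practice (the address range and rate here are purely illustrative, and you should only scan networks you have permission to scan), masscan mimics nmap’s option style but requires explicit ports and a transmit rate:
# masscan -p80,443 10.0.0.0/8 --rate=10000
At that rate it blasts through the range far faster than nmap, but it only tells you which ports are open, not what is running on them.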
Packet sniffing with tcpdump and tshark
All Internet traffic consists of packets going between IP addresses. You can capture those packets and view them using “packet sniffers”. The most important packet-sniffer is “Wireshark”, a GUI. For the command-line, there is ‘tcpdump’ and ‘tshark’. You can run tcpdump on the command-line to watch packets go in/out of the local computer. This performs a quick “decode” of packets as they are captured. It’ll reverse-lookup IP addresses into DNS names, which means its buffers can overflow, dropping new packets while it’s waiting for DNS name responses for previous packets.
# tcpdump -p -i eth0
A common task is to create a round-robin set of files, saving the last 100 files of 1 gig each. Older files are overwritten. Thus, when an attack happens, you can stop capture, go backward in time, and view the contents of the network traffic using something like Wireshark:
# tcpdump -p -i eth0 -s65535 -C 1000 -W 100 -w cap
Instead of capturing everything, you’ll often set “BPF” filters to narrow down to traffic from a specific target, or a specific port. The above examples use the -p option to capture traffic destined to the local computer. Sometimes you may want to look at all traffic going to other machines on the local network. You’ll need to figure out how to tap into wires, or set up “monitor” ports on switches, for this to work. A more advanced command-line program is ‘tshark’. It can apply much more complex filters. It can also be used to extract the values of specific fields and dump them to a text file.
Base64/hexdump/xxd/od
These are some rather trivial commands, but you should know them. The ‘base64’ command encodes binary data in text. The text can then be passed around, such as in email messages. Base64 encoding is often automatic in the output from programs like openssl and PGP. In many cases, you’ll need to view a hex dump of some binary data. There are many programs to do this, such as hexdump, xxd, od, and more.
grep
Grep searches for a pattern within a file. More importantly, it searches for a regular expression (regex) in a file. The fu of Unix is that a lot of stuff is stored in text files, and you use grep with regex patterns to extract the stuff stored in those files. The power of this tool really depends on your mastery of regexes. You should master enough that you can understand StackExchange posts that almost explain what you want to do, and then tweak them to make them work. Grep, by default, shows only the matching lines. In many cases, you only want the part that matches. To do that, use the -o option. (This is not available on all versions of grep.) You’ll probably want the better, “extended” regular expressions, so use the -E option. You’ll often want “case-insensitive” matching (both upper and lower case), so use the -i option. For example, to extract all MAC addresses from a text file, you might do something like the following. This extracts all strings that are twelve hex digits.
$ grep -Eio '[0-9A-F]{12}' foo.txt
Text processing
Grep is just the first of the various “text processing filters”. Other useful ones include ‘sed’, ‘cut’, ‘sort’, and ‘uniq’. You’ll become an expert at piping the output of one to the input of the next. You’ll use “sort | uniq” as god (Dennis Ritchie) intended and not the heresy of “sort -u”. You might want to master ‘awk’. It’s its own programming language, but once you master it, it’ll be easier than other mechanisms. You’ll end up using ‘wc’ (word-count) a lot. All it does is count the number of lines, words, and characters in a file, but you’ll find yourself wanting to do this a lot.
csvkit and jq
You get data in CSV format and JSON format a lot. The tools ‘csvkit’ and ‘jq’, respectively, help you deal with those formats, convert the files into other formats, stick the data in databases, and so forth. It’ll be easier using these tools that understand these text formats to extract data than trying to write ‘awk’ commands or ‘grep’ regexes.
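A quick sketch of the kind of one-liners this enables (the file names and the JSON layout are invented for illustration):
$ jq -r '.[].ip' scan-results.json | sort | uniq -c | sort -rn | head
$ csvcut -c ip,port hosts.csv | csvlook
The first pipeline pulls the "ip" field out of a JSON array of objects and ranks the most frequent values; the second selects two columns from a CSV file and pretty-prints them as a table.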
strings
Most files are binary with a few readable ASCII strings. You use the program ‘strings’ to extract those strings. This one simple trick sounds stupid, but it’s more powerful than you’d think. For example, I knew that a program probably contained a hard-coded password. I then blindly grabbed all the strings in the program’s binary file and sent them to a password cracker to see if they could decrypt something. And indeed, one of the 100,000 strings in the file worked, thus finding the hard-coded password.
tail -f
So ‘tail’ is just a standard Linux tool for looking at the end of files. If you want to keep checking the end of a live file that’s constantly growing, then use “tail -f”. It’ll sit there waiting for something new to be added to the end of the file, then print it out. I do this a lot, so I thought it’d be worth mentioning.
tar -xvzf, gzip, xz, 7z
In prehistoric times (like the 1980s), Unix was backed up to tape drives. The tar command could be used to combine a bunch of files into a single “archive” to be sent to the tape drive, hence “tape archive” or “tar”. These days, a lot of stuff you download will be in tar format (ending in .tar). You’ll need to learn how to extract it:
$ tar -xvf something.tar
Nobody knows what the “xvf” options mean anymore, but these letters must be specified in that order. I’m joking here, but only a little: somebody did a survey once and found that virtually nobody knows how to use ‘tar’ other than through canned formulas such as this. Along with combining files into an archive you also need to compress them. In prehistoric Unix, the “compress” command would be used, which would replace a file with a compressed version ending in ‘.z’. This was found to be encumbered with patents, so everyone switched to ‘gzip’ instead, which replaces a file with a new one ending in ‘.gz’.
$ ls foo.txt*
foo.txt
$ gzip foo.txt
$ ls foo.txt*
foo.txt.gz
Combined with tar, you get files with either the “.tar.gz” extension, or simply “.tgz”. You can untar and uncompress at the same time:
$ tar -xvzf something.tar.gz
Gzip is always good enough, but nerds gonna nerd and want to compress with slightly better compression programs. They’ll have extensions like “.bz2”, “.7z”, “.xz”, and so on. There are a ton of them. Some of them are supported directly by the ‘tar’ program:
$ tar -xvjf something.tar.bz2
Then there is the “zip/unzip” program, which supports the Windows .zip file format. To create compressed archives these days, I don’t bother with tar, but just use the ZIP format. For example, this will recursively descend a directory, adding all files to a ZIP file that can easily be extracted under Windows:
$ zip -r test.zip ./test/
dd
I should include this under the system tools at the top, but it’s interesting for a number of purposes. The usage is simply to copy one file to another, the in-file to the out-file.
$ dd if=foo.txt of=foo2.txt
But that’s not interesting. What’s interesting is using it to write to “devices”. The disk drives in your system also exist as raw devices under the /dev directory. For example, if you want to create a boot USB drive for your Raspberry Pi:
# dd if=rpi-ubuntu.img of=/dev/sdb
Or, you might want to hard erase an entire hard drive by overwriting it with random data:
# dd if=/dev/urandom of=/dev/sdc
Or, you might want to image a drive on the system, for later forensics, without stumbling on things like open files.
# dd if=/dev/sda of=/media/Lexar/infected.img
The ‘dd’ program has some additional options, like block size and so forth, that you’ll want to pay attention to.
screen and tmux
You log in remotely and start some long-running tool. Unfortunately, if you log out, all the processes you started will be killed. If you want it to keep running, then you need a tool to do this. I use ‘screen’. Before I start a long-running port scan, I run the “screen” command. Then, I type [ctrl-a][ctrl-d] to disconnect from that screen, leaving it running in the background. Then later, I type “screen -r” to reconnect to it. If there is more than one screen session, using ‘-r’ by itself will list them all. Use “-r pid” to reattach to the proper one. If you can’t, then use “-D pid” or “-D -RR pid” to force the other session to detach from whoever is using it. Tmux is an alternative to screen that many use. It’s also cool for having lots of terminal screens open at once.
curl and wget
Sometimes you want to download files from websites without opening a browser. The ‘curl’ and ‘wget’ programs do that easily. Wget is the traditional way of doing this, but curl is a bit more flexible. I use curl for everything these days, except mirroring a website, in which case I just do “wget -m website”. The thing that makes ‘curl’ so powerful is that it’s really designed as a tool for poking and prodding all the various features of HTTP. That it’s also useful for downloading files is a happy coincidence. When playing with a target website, curl will allow you to do lots of complex things, which you can then script via bash. For example, hackers often write their cross-site scripting/forgeries in bash scripts using curl.
node/php/python/perl/ruby/lua
As mentioned above, bash is its own programming language. But it’s weird, and annoying. So sometimes you want a real programming language. Here are some useful ones. Yes, PHP is a language that runs in a web server for creating web pages. But if you know the language well, it’s also a fine command-line language for doing stuff. Yes, JavaScript is a language that runs in the web browser. But if you know it well, it’s also a great language for doing stuff, especially with the “nodejs” version. Then there are other good command-line languages, like Python, Ruby, Lua, and the venerable Perl. What makes all these great is the large library support. Somebody has already written a library that nearly does what you want, and it can be made to work with a little bit of extra code of your own. My general impression is that Python and NodeJS have the largest libraries likely to have what you want, but you should pick whichever language you like best, whichever makes you most productive. For me, that’s NodeJS, because of the great Visual Studio Code IDE/debugger.
iptables, iptables-save
I shouldn’t include this in the list. Iptables isn’t a command-line tool as such. The tool is the built-in firewalling/NAT features within the Linux kernel. Iptables is just the command to configure it. Firewalling is an important part of cybersecurity. Everyone should have some experience playing with a Linux system doing basic firewalling tasks: basic rules, NATting, and transparent proxying for mitm attacks. Use ‘iptables-save’ in order to persistently save your changes.
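A minimal sketch of the kind of basic rules worth practicing (the ports and the save path are illustrative; /etc/iptables/rules.v4 is the Debian iptables-persistent convention):
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -P INPUT DROP
# iptables-save > /etc/iptables/rules.v4
This accepts loopback and established traffic, allows inbound SSH, drops everything else, and then writes the ruleset out so it survives a reboot.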
MySQL
Similar to ‘iptables’, ‘mysql’ isn’t a tool in its own right, but a way of accessing a database maintained by another process on the system. Filters acting on text files only go so far. Sometimes you need to dump the data into a database and make queries on that database. There is also the offensive skill needed to learn how targets store things in a database, and how attackers get the data. Hackers often publish raw SQL data they’ve stolen in their hacks (like the Ashley Madison dump). Being able to stick those dumps into your own database is quite useful. Hint: disable transaction logging while importing mass data. If you don’t like SQL, you might consider NoSQL tools like Elasticsearch, MongoDB, and Redis that can similarly be useful for arranging and searching data. You’ll probably have to learn some JSON tools for formatting the data.
Reverse engineering tools
A cybersecurity specialty is “reverse engineering”. Some want to reverse engineer the target software being hacked, to understand vulnerabilities. This is needed for commercial software and device firmware where the source code is hidden. Others use these tools to analyze viruses/malware. The ‘file’ command uses heuristics to discover the type of a file. There’s a whole skillset for analyzing PDF and Microsoft Office documents. I play with pdf-parser. There’s a long list at this website: https://zeltser.com/analyzing-malicious-documents/ There’s a whole skillset for analyzing executables. Binwalk is especially useful for analyzing firmware images. Qemu is a useful virtual machine. It can emulate full systems, such as an IoT device based on the MIPS processor. Like some other tools mentioned here, it’s more a full subsystem than a simple command-line tool. On a live system, you can use ‘strace’ to view what system calls a process is making. Use ‘lsof’ to view which files and network connections a process is making.
Password crackers
A common cybersecurity specialty is “password cracking”. There are two kinds: online and offline password crackers. Typical online password crackers are ‘hydra’ and ‘medusa’. They can take files containing common passwords and attempt to log on to various protocols remotely, like HTTP, SMB, FTP, Telnet, and so on. I used ‘hydra’ recently in order to find the default/backdoor passwords to many IoT devices I’ve bought recently for my test lab. Online password crackers must open TCP connections to the target, and try to log on. This limits their speed. They also may be stymied by systems that lock accounts, or introduce delays, after too many bad password attempts. Typical offline password crackers are ‘hashcat’ and ‘jtr’ (John the Ripper). They work off of stolen encrypted passwords. They can attempt billions of passwords per second, because there’s no network interaction, nothing slowing them down. Understanding offline password crackers means getting an appreciation for the exponential difficulty of the problem. A sufficiently long and complex encrypted password is uncrackable. Instead of brute-force attempts at all possible combinations, we must use tricks, like mutating the top million most common passwords. I use hashcat because of the great GPU support, but John is also a great program.
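A small sketch of the offline style (the hash file and wordlist names are placeholders; -m 0 selects raw MD5 hashes and -a 0 a straight dictionary attack):
$ hashcat -m 0 -a 0 hashes.txt rockyou.txt
$ hashcat -m 0 -a 0 hashes.txt rockyou.txt -r rules/best64.rule
The second run adds a mangling-rules file, which is exactly the "mutate the most common passwords" trick described above.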
WiFi hacking
A common specialty in cybersecurity is WiFi hacking. The difficulty in WiFi hacking is getting the right WiFi hardware that supports the needed features (monitor mode, packet injection), then getting the right drivers installed in your operating system. That’s why I use Kali rather than some generic Linux distribution: it’s got the right drivers installed. The ‘aircrack-ng’ suite is the best for doing basic hacking, such as packet injection. When the parents are letting the iPad babysit their kid with a loud movie at the otherwise quiet coffee shop, use ‘aircrack-ng’ to deauth the kid. The ‘reaver’ tool is useful for hacking into sites that leave WPS wide open and misconfigured.
Remote exploitation
A common specialty in cybersecurity is pentesting. Nmap, curl, and netcat (described above) are useful tools for this. Some useful DNS tools are ‘dig’ (described above) and dnsrecon/dnsenum/fierce, which try to enumerate and guess as many names as possible within a domain. These tools all have unique features, but also have a lot of overlap. Nikto is a basic tool for probing for common vulnerabilities, out-of-date software, and so on. It’s not really a vulnerability scanner like Nessus, which defenders use, but more of a tool for attack. SQLmap is a popular tool for probing for SQL injection weaknesses. Then there is ‘msfconsole’. It has some attack features. This is humor – it has all the attack features. Metasploit is the most popular tool for running remote attacks against targets, exploiting vulnerabilities.
Text editor
Finally, there is the decision of text editor. I use ‘vi’ variants. Others like ‘nano’ and its variants. There’s no wrong answer as to which editor to use, unless that answer is ‘emacs’.
Conclusion
Obviously, not every cybersecurity professional will be familiar with every tool in this list. If you don’t do reverse-engineering, then you won’t use reverse-engineering tools. On the other hand, regardless of your specialty, you need to know basic crypto concepts, so you should know something like the ‘openssl’ tool. You need to know basic networking, so things like ‘nmap’ and ‘tcpdump’. You need to be comfortable processing large dumps of data, manipulating them with any tool available. You shouldn’t be frightened by a little sysadmin work. The above list is therefore a useful starting point for cybersecurity professionals. Of course, those new to the industry won’t have much familiarity with these tools yet. But it’s fair to say that I’ve used everything listed above at least once in the last year, and the year before that, and the year before that. I spend a lot of time on StackExchange and Google searching for the exact options I need, so I’m not an expert, but I am familiar with the basic use of all these things. from The command-line, for cybersec
0 notes
nxfury · 5 years ago
Text
Retro Computing- Is Old School The Smart Way?
Those who remember their vintage Mac Classic or Commodore 64 also remember how heavily constrained those machines were, with RAM measured in mere kilobytes. Even under those conditions, programmers still managed to engineer the same sorts of software we use today.
In this era from the 1970s to the 1980s, we saw several major innovations- the first computers, UNIX, the first graphical desktops, word processing software, printing, and internetworking of devices via ARPANET (which would later become the internet).
So why don't we see major innovation happening at that pace anymore?
Stale Innovation
This may be a hard pill for some to swallow, but the wide availability of high-end hardware lowered the barrier to entry into computer programming, and with it the average quality of code. As a result, overall competency among average software developers has declined. Naturally, this affects the value of any "new" innovation- what's the point of rewriting code if the rewrite is bound to be worse?
On top of this, large companies, universities and defense contractors no longer fund major innovators. Let's use a modern-day example: the OpenBSD Foundation. They're one of the many organizations dedicated to carrying the UNIX lineage forward, with an extreme focus on producing a system with secure and sane defaults. Ironically, they created OpenSSH, and sudo is maintained by one of their developers (both are used in almost every enterprise network running Linux or UNIX). So why aren't they recognized? It all boils down to a saying I learned from my grandfather: "Nobody likes change- even if it helps them."
Convenience Over Simplicity
Wait- don't these mean the same thing? Actually, no.
This is how American Heritage Dictionary defines these two words: Simple- Having few parts or features; not complicated or elaborate. Convenient- Suited or favorable to one's comfort, purpose, or needs.
For ages, programmers pursued simplicity as a way to provide stable, high-quality code that would run on virtually anything- even a toaster if one were so inclined. This old school of thought still exists, but is largely frowned upon under modern-day programming paradigms.
For example, rapid prototyping has brought programming languages like Python to the forefront because of the convenience they provide and how easy it is to implement things in them. However, it's nearly impossible to produce efficient programs that guarantee stability across a wide variety of platforms this way, as Python isn't implemented on as many platforms as languages such as C.
The truly sad thing about this is how it all ties right back to my first point on how it reduces competence among programmers.
The Attack Of The Public Domain
How is one supposed to train up a new generation of programmers for the enterprise world if there's no quality code to work on? It's a paradox, as large enterprise companies like Microsoft, Apple, and others make use of Open Source and Public Domain source code but rarely contribute anything that could help further the development of Open Source. In recent news, Microsoft introduced "DirectX 12 for Linux", but in reality they only exposed the Application Programming Interface (API) to Linux users. No source code was disclosed, and it was explicitly added solely for their Windows Subsystem for Linux. In U.S. v. Microsoft (2001), the Department of Justice documented an alarming internal Microsoft marketing strategy known as "EEE"- Embrace, Extend, Extinguish. Embrace the idea as if they support it, Extend support for the idea, then Extinguish it by rendering it obsolete. Google and Apple have been known to engage in similar practices.
Herein lies the paradox- there's a lack of new enterprise source code to look at without paying a significant amount of money for. Due to this, there's a lack of large-scale scientific research being conducted in computing that's available to the public.
Lack Of Attentiveness
It's all our fault here... If you were around in the 1990s, you may remember "Windows Buyback Day", when Linux users protested outside Microsoft's headquarters about being forced to pay for a Windows license they didn't even use.
20 years later, such noble ideas haven't been forgotten- they've been ignored and thrown on the proverbial backburner by the rest of society.
The Good News
Moore's Law is slowly being rendered obsolete. For those unaware of what it entails, Gordon Moore observed in 1965 that computing devices would double in capability roughly every year, an exponential pace. This held true until recently, when manufacturers began reaching the physical limits of what they can fit on a chip.
This means raw hardware performance is hitting a ceiling, and to keep improving our systems we will be forced to go back to the old ways: writing lean, high-quality software that wrings as much performance as possible out of the hardware.
Liked This Content? Check Out Our Discord Community and Become an email subscriber!
0 notes