#just open the code editor and show me how to structure a project already...
Text
the worst part of learning any coding language from an official resource is when it thinks you want anything to do with the command line
Photo
Péguy
Hi everybody! I've mentioned a project named Péguy a few times in this news feed. Today I'm dedicating a full article to it, to present it in more detail and to show you the new features I added at the beginning of the winter. It's not my priority project (right now that's TGCM Comics), but I needed a little break during the holidays, and coding vector graphics and 3D is a bit addictive, like playing with Lego. x) Let's go then!
Péguy, what is it?
It is a procedural generator of patterns, graphic effects and other scenery elements that speeds up the realization of my comics drawings. Basically, I enter a few parameters, click a button, and the program generates a more or less regular pattern on its own. The first lines of code were written in 2018, and since then the tool has been continuously enriched and helps me work faster on my comics. :D The project is coded with web languages and generates vector patterns in SVG format. In the beginning it was just a set of small scripts that had to be edited directly to change the parameters and run individually for each effect or pattern.
Not very user friendly, is it? :’D
This first version was used on episode 2 of Dragon Cat's Galaxia 1/2. During 2019 I decided it would be more practical to gather all these scripts and integrate them into a graphical user interface. Since then, I have enriched it with new features and improved its ergonomics to save more and more time. Here is a small sample of what Péguy can currently produce.
Graphic effects typical of manga, and paving patterns in perspective or wrapped around a cylinder. All these features were used on Tarkhan and Gonakin. I plan to put the project online, but for it to be usable by anyone other than me, I still need to fix a few ergonomics issues. For the moment, to retrieve the rendering, you have to open the browser's developer tools to find and copy the HTML node that contains the SVG. In other words, if you don't know the HTML structure by heart, it's not practical. 8D
A 3D module!
The new feature for 2020 is a 3D module I have started developing. The idea, in the long run, is to be able to build my comics backgrounds, at least the architectural ones, a bit like a Lego game. The interface is still very much under development and a lot of things are missing, but basically it's going to look like this.
There is no shortage of 3D modeling software, so why am I making my own? What will make my project stand out from what already exists? First, navigation around the 3D workspace: in short, the movement of the camera. Well, please excuse me, but in Blender, Maya, SketchUp and the like, framing the view you need for a rendering is just a pain. So I developed a more practical camera navigation system that adapts depending on whether you're modeling an object or placing it in a map, taking inspiration from the map editors of some video games (like Age of Empires). Second, I'm going to propose a small innovation. When you model an object in Blender or anything else, it is frozen forever, and if you use it several times in an environment, every copy is strictly identical, which can be annoying for natural elements like trees. So I'm going to develop a kind of little "language" that lets you make an object customizable and incorporate random components. With a single definition of an object, you can then obtain an infinite number of different instances: random components for natural elements, and variables such as the number of floors for a building. I had already developed a prototype of this system many years ago in Java; I'm going to retrieve it and adapt it to JavaScript. The last peculiarity will be in the proposed renderings. Since this is about making comics (especially black-and-white ones in my case), I'm developing a whole set of shaders to automatically generate lines, screentones and hatchings, with the possibility of using patterns generated by the existing vector module as textures! :D
What are shaders?
Well, you know the principle of post-production in cinema (editing, sound effects, various corrections, special effects: all the finishing work after shooting). Shaders work on roughly the same principle. They are programs executed just after the calculation of how a 3D object should appear on the screen, and they make it possible to apply patches, deformations, effects and filters. As long as you're not at war with mathematics, the only limit is your imagination! :D When you feed a normal vector into a color variable, you get funny results.
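That "normal vector in a color variable" trick is a classic debug visualization: each component of the unit normal, which lies in [-1, 1], is remapped to a color channel in [0, 1]. Here is a minimal sketch of the idea in Java (real shaders would do this in GLSL on the GPU; the class and method names are invented for illustration):

```java
public class NormalColor {
    // Map a unit normal vector (components in [-1, 1]) to an RGB color
    // (components in [0, 1]) -- the classic debug-shader trick that
    // produces those funny rainbow renderings.
    static double[] normalToColor(double nx, double ny, double nz) {
        return new double[] { (nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2 };
    }

    public static void main(String[] args) {
        // A normal pointing straight at the camera becomes a pale blue
        double[] rgb = normalToColor(0, 0, 1);
        System.out.printf("%.1f %.1f %.1f%n", rgb[0], rgb[1], rgb[2]); // prints "0.5 0.5 1.0"
    }
}
```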
Yes! It's really with math that you can display all these things. :D So the next time you hear a smart guy tell you that math is cold, that it's the opposite of art or incompatible with art... you'll know it's ignorance. :p Math is a tool just like the brush; it's all about knowing how to use it. :D In truth, science is a representation of reality in the same way a painting is. It is photorealistic in the extreme, but it is nevertheless a human construction used to describe nature. It remains an approximation of a reality that continually escapes us, and over the centuries we keep trying to fill in the margins of error... just as classical painting did. And by the way, aren't there plenty of great painters who were also scholars and mathematicians? Yes, there are! Look hard! The Renaissance is a good breeding ground. x) In short: physics is a painting and mathematics is its brush. But in painting we don't only do figurative work or realism; we can give free rein to our inspiration and stylize our representation of the world, or make it abstract. Well, like any good brush, mathematics allows the same fantasy. All it takes is a little imagination. Take, for example, the good old Spirograph from our childhood. We all had one! Those pretty patterns drawn with a ballpoint pen are nothing other than parametric equations, the kind that make French math prep students (math sup/math spé) suffer. 8D Even the famous Celtic triskelion can be calculated from parametric equations. Well, I digress, I digress, but let's get back to our shaders. Since you can do whatever you want with them, I worked on typical manga effects. By combining the Dot Pattern Generator and the Hatch Generator and displaying their output in white, I was able to simulate a scratch effect on screentones.
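For the curious, the curves a Spirograph traces are hypotrochoids. With R the radius of the fixed ring, r the radius of the rolling wheel, and d the pen's distance from the wheel's center, the parametric equations are:

```latex
% Hypotrochoid: a wheel of radius r rolling inside a ring of radius R,
% with the pen held at distance d from the wheel's center
\begin{aligned}
x(t) &= (R - r)\cos t + d \cos\!\left(\frac{R - r}{r}\, t\right) \\
y(t) &= (R - r)\sin t - d \sin\!\left(\frac{R - r}{r}\, t\right)
\end{aligned}
```

Varying R, r and d reproduces the different wheels and pen holes of the toy.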
Traditionally, this effect is obtained by scraping the screentone with a cutter or similar tool.

Péguy will therefore be able to compute this effect on its own on a 3D scene. :D I extended the effect with a pattern calculated in SVG, so it will be possible to use the patterns created in the vector module as textures for the 3D module! Here it is with a pattern of dots distributed along a Fibonacci spiral (I used a similar pattern in Tarkhan to make stone textures, which are very common in manga).
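Péguy's own implementation isn't shown here, but a common way to distribute dots along a Fibonacci spiral is Vogel's model, where dot n sits at radius proportional to √n and angle n times the golden angle. A sketch in Java (the spacing constant and the SVG output are illustrative assumptions, not Péguy's code):

```java
public class FibonacciDots {
    // Golden angle in radians, ~2.39996: rotating each successive dot by
    // this amount spreads dots evenly along a Fibonacci (Vogel) spiral.
    static final double GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

    // Position of dot n, with `spacing` controlling how far apart dots sit.
    static double[] dot(int n, double spacing) {
        double r = spacing * Math.sqrt(n);
        double theta = n * GOLDEN_ANGLE;
        return new double[] { r * Math.cos(theta), r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        // Emit the first few dots as SVG circles
        for (int n = 0; n < 5; n++) {
            double[] p = dot(n, 4.0);
            System.out.printf("<circle cx=\"%.2f\" cy=\"%.2f\" r=\"1\"/>%n", p[0], p[1]);
        }
    }
}
```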
Bump mapping
So this is where things get really interesting. We stay in shader territory, but we're going to give our rendering an extra dimension. Basically, bump mapping consists of creating a bas-relief effect from a height map. It gives this kind of result.
The defined object is still a simple cylinder (with two radii). The shaders apply the pixel displacement and recalculate the lighting using a height map that looks like this.
This texture was also calculated automatically in SVG, so the number of bricks can be set dynamically. Now, this bas-relief business is very nice, but the lighting is still relatively realistic and we want it to look like a drawing. So: apply a threshold to get areas lit in white, a second threshold to get shadow areas in black, apply the screentone pattern to the rest, add the hatching that simulates scraped screentone, and here is the result!
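The two-threshold idea above can be sketched as a tiny tone-mapping function. This is a simplified illustration in Java, not Péguy's actual shader (which runs on the GPU), and the threshold values are made-up examples:

```java
public class TonePass {
    // Map a computed light intensity in [0, 1] to one of three manga-style
    // zones: plain white highlights, a screentone pattern in the midtones,
    // and solid black shadows. Thresholds 0.8 and 0.25 are example values.
    static String tone(double intensity) {
        if (intensity > 0.8) return "white";      // lit area -> paper white
        if (intensity < 0.25) return "black";     // shadow -> solid black
        return "screentone";                      // midtones -> dot pattern
    }

    public static void main(String[] args) {
        System.out.println(tone(0.9));   // highlight, prints "white"
        System.out.println(tone(0.5));   // midtone, prints "screentone"
        System.out.println(tone(0.1));   // shadow, prints "black"
    }
}
```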
It looks like a manga from the '80s! :D I tested this rendering with other screentone patterns: Fibonacci-spiral dots, parallel lines, and lines that follow the shape of the object.
Now we know what Péguy can do. I think I can enrich this rendering a bit more with the shaders, but the next time I work on this project, the biggest part of the job will be creating what we call primitives: basic geometric objects. After that I can start assembling them. The concept of drawing while coding is so much fun that I'm starting to think about making complete illustrations this way, or creating the backgrounds of some comic book projects only with Péguy, just for the artistic process. Finding tricks to generate organic objects, especially plants, should be fun too. That's all for today. Next time we'll talk about drawing! Have a nice weekend and see you soon! :D Suisei
P.S. If you don't want to miss any news and haven't already done so, you can subscribe to the newsletter here: https://www.suiseipark.com/User/SubscribeNewsletter/language/english/
Source : https://www.suiseipark.com/News/Entry/id/302/
Text
Continuous Deployments for WordPress Using GitHub Actions
Continuous Integration (CI) workflows are considered a best practice these days. As in, you work with your version control system (Git), and as you do, CI is doing work for you like running tests, sending notifications, and deploying code. That last part is called Continuous Deployment (CD). But shipping code to a production server often requires paid services. With GitHub Actions, Continuous Deployment is free for everyone. Let’s explore how to set that up.
DevOps is for everyone
As a front-end developer, continuous deployment workflows used to be exciting, but mysterious to me. I remember numerous times being scared to touch deployment configurations. I defaulted to the easy route instead — usually having someone else set it up and maintain it, or manually copying and pasting things in a worst-case scenario.
As soon as I understood the basics of rsync, CD finally became tangible to me. With the following GitHub Action workflow, you do not need to be a DevOps specialist; but you’ll still have the tools at hand to set up best practice deployment workflows.
The basics of a Continuous Deployment workflow
So what’s the deal, how does this work? It all starts with CI, which means that you commit code to a shared remote repository, like GitHub, and every push to it will run automated tasks on a remote server. Those tasks could include test and build processes, like linting, concatenation, minification and image optimization, among others.
CD then delivers the code to a production web server. That may happen by copying the verified and built code to the server via FTP or SSH, or by shipping containers to an infrastructure. While every shared hosting package has FTP access, FTP is rather unreliable and slow when sending many files to a server. And while shipping application containers is a safe way to release complex applications, the infrastructure and setup can be rather complex as well. Deploying code via SSH, though, is fast, safe and flexible. Plus, it's supported by many hosting packages.
How to deploy with rsync
An easy and efficient way to ship files to a server via SSH is rsync, a utility tool to sync files between a source and destination folder, drive or computer. It will only synchronize those files which have changed or don’t already exist at the destination. As it became a standard tool on popular Linux distributions, chances are high you don’t even need to install it.
The most basic operation is as easy as calling rsync SRC DEST to sync files from one directory to another one. However, there are a couple of options you want to consider:
-c compares file changes by checksum, not modification time
-h outputs numbers in a more human readable format
-a retains file attributes and permissions and recursively copies files and directories
-v shows status output
--delete deletes files from the destination that aren’t found in the source (anymore)
--exclude prevents syncing specified files like the .git directory and node_modules
And finally, you want to send the files to a remote server, which makes the full command look like this:
rsync -chav --delete --exclude /.git/ --exclude /node_modules/ ./ [email protected]:/mydir
You could run that command from your local computer to deploy to any live server. But how cool would it be if it was running in a controlled environment from a clean state? Right, that’s what you’re here for. Let’s move on with that.
Create a GitHub Actions workflow
With GitHub Actions you can configure workflows to run on any GitHub event. While there is a marketplace for GitHub Actions, we don’t need any of them but will build our own workflow.
To get started, go to the “Actions” tab of your repository and click “Set up a workflow yourself.” This will open the workflow editor with a .yaml template that will be committed to the .github/workflows directory of your repository.
When saved, the workflow checks out your repo code and runs some echo commands. name helps follow the status and results later. run contains the shell commands you want to run in each step.
Define a deployment trigger
Theoretically, every commit to the master branch should be production-ready. However, reality teaches you that you need to test results on the production server after deployment as well and you need to schedule that. We at bleech consider it a best practice to only deploy on workdays — except Fridays and only before 4:00 pm — to make sure we have time to roll back or fix issues during business hours if anything goes wrong.
An easy way to get manual-level control is to set up a branch just for triggering deployments. That way, you can specifically merge your master branch into it whenever you are ready. Call that branch production, let everyone on your team know pushes to that branch are only allowed from the master branch and tell them to do it like this:
git push origin master:production
Here’s how to change your workflow trigger to only run on pushes to that production branch:
name: Deployment
on:
  push:
    branches: [ production ]
Build and verify the theme
I’ll assume you’re using Flynt, our WordPress starter theme, which comes with dependency management via Composer and npm as well as a preconfigured build process. If you’re using a different theme, the build process is likely to be similar, but might need adjustments. And if you’re checking in the built assets to your repository, you can skip all steps except the checkout command.
For our example, let’s make sure that node is executed in the required version and that dependencies are installed before building:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          version: 12.x
      - name: Install dependencies
        run: |
          composer install -o
          npm install
      - name: Build
        run: npm run build
The Flynt build task lints, compiles, and transpiles the Sass and JavaScript files, then adds revisioning to assets to prevent browser cache issues. If anything in the build step fails, the workflow stops executing, which prevents you from deploying a broken release.
Configure server access and destination
For the rsync command to run successfully, GitHub needs access to SSH into your server. This can be accomplished by:
Generating a new SSH key (without a passphrase)
Adding the public key to your ~/.ssh/authorized_keys on the production server
Adding the private key as a secret with the name DEPLOY_KEY to the repository
The sync workflow step needs to save the key to a local file, adjust file permissions and pass the file to the rsync command. The destination has to point to your WordPress theme directory on the production server. It’s convenient to define it as a variable so you know what to change when reusing the workflow for future projects.
- name: Sync
  env:
    dest: '[email protected]:/mydir/wp-content/themes/mytheme'
  run: |
    echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
    chmod 600 ./deploy_key
    rsync -chav --delete \
      -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
      --exclude /.git/ \
      --exclude /.github/ \
      --exclude /node_modules/ \
      ./ ${{ env.dest }}
Depending on your project structure, you might want to deploy plugins and other theme related files as well. To accomplish that, change the source and destination to the desired parent directory, make sure to check if the excluded files need an update, and check if any paths in the build process should be adjusted.
Put the pieces together
We’ve covered all necessary steps of the CD process. Now we need to run them in a sequence which should:
Trigger on each push to the production branch
Install dependencies
Build and verify the code
Send the result to a server via rsync
The complete GitHub workflow will look like this:
name: Deployment
on:
  push:
    branches: [ production ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          version: 12.x
      - name: Install dependencies
        run: |
          composer install -o
          npm install
      - name: Build
        run: npm run build
      - name: Sync
        env:
          dest: '[email protected]:/mydir/wp-content/themes/mytheme'
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
          chmod 600 ./deploy_key
          rsync -chav --delete \
            -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
            --exclude /.git/ \
            --exclude /.github/ \
            --exclude /node_modules/ \
            ./ ${{ env.dest }}
To test the workflow, commit the changes, pull them into your local repository and trigger the deployment by pushing your master branch to the production branch:
git push origin master:production
You can follow the status of the execution by going to the "Actions" tab in GitHub, then selecting the recent execution and clicking on the "deploy" job. The green checkmarks indicate that everything went smoothly. If there are any issues, check the logs of the failed step to fix them.
Check the full report on GitHub
Congratulations! You’ve successfully deployed your WordPress theme to a server. The workflow file can easily be reused for future projects, making continuous deployment setups a breeze.
To further refine your deployment process, the following topics are worth considering:
Caching dependencies to speed up the GitHub workflow
Activating the WordPress maintenance mode while syncing files
Clearing the website cache of a plugin (like Cache Enabler) after the deployment
The post Continuous Deployments for WordPress Using GitHub Actions appeared first on CSS-Tricks.
Text
EVERY FOUNDER SHOULD KNOW ABOUT LOT
Countless paintings, when you look at them in xrays, turn out to have limbs that have been learned in previous ones. I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Well, I'll tell you what they want. So a company that can attract great hackers will have a huge advantage. It's hard enough already not to become the prisoner of your own. A speech like that is, in the sense that they're just trying to reproduce work someone else has already done for them. On the Web, the barrier for publishing your ideas is even lower. They didn't sell either; that's why they're in a position now to buy other companies. It's hard enough already not to become the prisoner of your own expertise, but it does at least make you keep an open mind. As Ricky Ricardo used to say, Lucy, you got a lot of what makes offices bad are the very qualities we associate with professionalism. But if you talk to startups, because students don't feel they're failing if they don't go into research.
The programmers you'll be able to set up local VC funds by supplying the money themselves and recruiting people from existing firms to run them, only organic growth can produce angel investors.1 But as long as they still have to show up for work every day, they care more about what they have in common is that they're often made by people working at home.2 Part of what software has to do is make good things.3 When there's something in a painting that works very well, you can probably make yourself smart too.4 The word now has such bad connotations that we forget its etymology, though it's staring us in the face. People await new Apple products the way they'd await new books by a popular novelist. VCs don't invest $x million because that's the amount the structure of business doesn't reflect it. When I was a student in Italy in 1990, few Italians spoke English. This turns out to be will depend on what we can do with this new medium. The problem is the way they're paid. It's a mistake to use Microsoft as a model, because their whole culture derives from that one lucky break.5
It felt as if someone had flipped on a light switch inside my head. The problem with the facetime model is not just that line but the whole program around it. But while energetic government intervention may be able to make a Japanese silicon valley, and so far is soccer. By definition these 10,000 founders wouldn't be taking jobs from Americans: it could be part of the terms of the visa that they couldn't work for existing companies, only new ones they'd founded. And in addition to the direct cost in time, there's the cost in fragmentation—breaking people's day up into bits too small to be useful. It's a good idea to save some easy tasks for moments when you would otherwise stall. They're competing against the best writing online.6 And since good people like to work on a Java project won't be as smart as the ones you could get to work on what you like. I'm talking to companies we fund? Painting has been a much richer source of ideas than the theory of computation.7
It falls between what and how: architects decide what to do by a boss. Another country I could see wanting to have a silicon valley? That wouldn't seem nearly as uncool. Nearly all makers have day jobs, and work on beautiful software on the side, I'm not proposing this as a new idea. Can you cultivate these qualities?8 It's too much overhead. But Sam Altman can't be stopped by such flimsy rules. Ideas beget ideas.
But that could be solved quite easily: let the market decide.9 This phrase began with musicians, who perform at night.10 And you can't go by the awards he's won or the jobs he's had, because in design, as in many fields, the hard part isn't solving problems, but deciding what problems to solve. And the first phase of that is mostly product creation—that blogs are just a medium of expression.11 The third big lesson we can learn, or at least confirm, from the example of painting is how to learn to hack by taking college courses in programming. Once you realize how little most people judging you care about judging you accurately—once you realize that most judgements are greatly influenced by random, extraneous factors—that most people judging you are more like a fickle novel buyer than a wise and perceptive magistrate—the more you realize you can do than the traditional employer-employee relationship. It's flattering to talk to other people in the Valley is watching them.12
The most famous example is probably Steve Wozniak, who originally wanted to build microcomputers for his then-employer, HP. For Trevor, that's par for the course. I suspect almost every successful startup has. Actors and directors are fired at the end of each film, so they have to sell internationally from the start.13 The other problem with startups is that there is a Michael Jordan of hacking, no one knows, including him. That varies enormously, from $10,000, whichever is greater.14 This is yet another problem that afflicts the sciences: math envy.15 If a hacker were a mere implementor, turning a spec into code, then he could just work his way through it from one end to the other like someone digging a ditch. What fraction of the smart people work as toolmakers. Kevin Kelleher suggested an interesting way to compare programming languages: to describe each in terms of the visa that they couldn't work for existing companies, only new ones they'd founded.
As a standard, you couldn't wish for more. Like the amount you invest, this can literally mean saving up bugs. This is a rare example of a big company in a design war with a company big enough that its software is designed by product managers, they'll never be able to get a job with a big picture of a door.16 If you throw them out, you find that good products do tend to win in the market. When I was in the bathroom!17 Once they invest in a company who really have to, but to surpass it. In this model, the research department functions like a mine. Of all the great programmers I can think of, I know of zero. And my theory explains why they'd tend to be forced to work on your projects, he can work wherever he wants on projects of your own.18
Here's a case where we can learn, or at least confirm, from the start. It has an English cousin, travail, and what it means. 5% of the world's population will be exceptional in some field only if there are a lot of servers and a lot of graduate programs.19 It seems to me that there have been two really clean, consistent models of programming so far: the C model and the Lisp model. Lisp syntax is scary. Ironically, of all the great programmers collected in one hub. You see it in Diogenes telling Alexander to get out of his office so we could go to lunch. I like debugging: it's the one time that hacking is as straightforward as people think it is. The only place your judgement makes a difference is in the borderline cases. That may be the best writer among Silicon Valley CEOs. Singapore seems very aware of the importance of encouraging startups. A lot of the past several years studying the paths from rich to poor, just as we were designed to eat a certain amount per generation.
Notes
It did. As Anthony Badger wrote, If it failed it failed it failed it failed it failed.
The unintended consequence is that the web have sucked—9. What I should degenerate from words to their stems, but I call it procrastination when someone gets drunk instead of editors, and the founders: agree with them. But one of these, and that he could just multiply 101 by 50 to 6,000.
You have to recognize them when you lose that protection, e.
The First Two Hundred Years. Once someone has said fail, most of their due diligence tends to happen fast, like architecture and filmmaking, but investors can get rich simply by being energetic and unscrupulous, but in practice signalling hasn't been much of the acquisition offers that every fast-growing startup gets on the expected value calculation for potential founders, if you want to learn. Whereas when you're starting a startup idea is to create a web-based applications. The reason this works is that you'll have to worry about that.
Which is not generally the common stock holders who take big acquisition offers are driven by bookmarking, not the second wave extends applications across the web was going to drunken parties.
Most of the problem is not to quit their day job.
It took a shot at destroying Boston's in the time I thought there wasn't, because living at all.
What made Google Google is much more depends on the blades may work for startups overall. Gauss was supposedly asked this when he received an invitation to travel aboard the HMS Beagle as a source of the lies people told 100 years, it would be just as you get a personal introduction—and to run on the entire West Coast that still require jackets for men.
Successful founders are willing to provide when it's aligned with the government, it would be too conspicuous. I'm not dissing these people.
In 1800 an empty plastic drink bottle with a clear upward trend. We couldn't decide between two alternatives, we'd be interested in you, you can ignore. So how do they decide you're a loser or possibly a winner.
Though they are now.
Hypothesis: Any plan in 2001, but different cultures react differently when things go well. A professor at a discount to whatever the valuation of the Dead was shot there. I was writing this, I should probably be the technology side of being watched in real time. To talk to an audience makes people feel good.
The examples in this article are translated into Common Lisp seems to have gotten where they all sit waiting for the same time.
On the other seed firms always find is that it's up to his time was 700,000. Vii. An investor who's seriously interested will already be working on Y Combinator is a way to find users to observe—e.
Even if you turn out to be started in Mississippi. There was one that we are not merely blurry versions of great things were created mainly to make Europe more entrepreneurial and more pervasive though.
But try this thought experiment: suppose prep schools supplied the same reason 1980s-style knowledge representation could never have come to accept a particular valuation, that I hadn't had much success in doing a small business that isn't the problem to have discovered something intuitively without understanding all its implications. Your user model almost couldn't be perfectly accurate, and everyone's used to end a series A rounds from top VC funds whether it was putting local grocery stores out of the x axis and returns on the one Europeans inherited from Rome, where you get a personal introduction—and in a cubicle except late at night. Though in a time of day, because the ordering system and image generator were written in 6502 machine language. What was missing, false positives reflecting the remaining outcomes don't have the.
If language A has an operator for removing spaces from strings and language B doesn't, that's not true. The best one could argue that the angels are no longer needed, big companies could dominate through economies of scale. Good news: users don't care what your project does.
We react like children, we're going to do is keep track of statistics for foo overall as well, partly because you can't expect you'll be able to formalize a small company that has a pretty comprehensive view of investor who merely seems like he will fund you, it becomes an advantage to be able to respond with extreme countermeasures. Particularly since economic inequality is a scarce resource. I suspect the recent resurgence of evangelical Christians.
The VCs recapitalize the company than you meant to. World War II the tax codes were so new that the Internet worm of its identity. If our hypothetical company making 1000 a month grew at 1% a week before. I need to do video on-demand, and we did not start to leave.
#automatically generated text#Markov chains#Paul Graham#Python#Patrick Mooney#way#start#one#offices#codes#reason#calculation#research#startups#mine#versions#depends#economies#Ricardo#time#connotations#people#Altman#confirm#A#barrier#random#HP#etymology
Link
Java is one of the most in-demand programming languages in the world and one of the two official programming languages used in Android development (the other being Kotlin). Developers familiar with Java are highly employable and capable of building a wide range of different apps, games, and tools. In this Java tutorial for beginners, you will take your first steps to become one such developer! We’ll go through everything you need to know to get started, and help you build your first basic app.
What is Java?
Java is an object-oriented programming language developed by Sun Microsystems in the 1990s (Sun was later purchased by Oracle).
“Object oriented” refers to the way that Java code is structured: in modular sections called “classes” that work together to deliver a cohesive experience. We’ll discuss this more later, but suffice to say that it results in versatile and organized code that is easy to edit and repurpose.
Java is influenced by C and C++, so it has many similarities with those languages (and C#). One of the big advantages of Java is that it is “platform independent.” This means that code you write on one machine can easily be run on a different one. This is referred to as the “write once, run anywhere” principle (although it is not always that simple in practice!).
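To make the "classes working together" idea concrete, here is a minimal two-class program (the class names are arbitrary). The same compiled bytecode runs on any machine with a JVM, which is the "write once, run anywhere" principle in action:

```java
// A minimal Java program: one class holding the main() entry point,
// plus a second class it collaborates with.
public class HelloJava {
    public static void main(String[] args) {
        Greeter greeter = new Greeter("world");
        System.out.println(greeter.greeting()); // prints "Hello, world!"
    }
}

// A separate class, showing how Java code is organized into
// modular, reusable units.
class Greeter {
    private final String name;

    Greeter(String name) { this.name = name; }

    String greeting() {
        return "Hello, " + name + "!";
    }
}
```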
To run and use Java, you need three things:
The JDK – Java Development Kit
The JRE – The Java Runtime Environment
The JVM – The Java Virtual Machine
The Java Virtual Machine (JVM) is what actually executes your compiled Java bytecode, and it ensures that your Java applications have access to the minimum resources they need to run. It is thanks to the JVM that Java code is so easily run across platforms.
The Java Runtime Environment (JRE) provides a “container” for the JVM, the standard class libraries, and your code to run in. The JDK contains the compiler that turns the code you write into bytecode the JVM can execute, along with the developer tools you need to write Java code (as the name suggests!).
The good news is that developers need only concern themselves with downloading the JDK – as this comes packed with the other two components.
How to get started with Java programming
If you plan on developing Java apps on your desktop computer, then you will need to download and install the JDK.
You can get the latest version of the JDK directly from Oracle. Once you’ve installed this, your computer will have the ability to understand and run Java code. However, you will still need an additional piece of software in order to actually write the code. This is the “Integrated Development Environment” or IDE: the interface used by developers to enter their code and call upon the JDK.
When developing for Android, you will use the Android Studio IDE. This not only serves as an interface for your Java (or Kotlin) code, but also acts as a bridge for accessing Android-specific code from the SDK. For more on that, check out our guide to Android development for beginners.
For the purposes of this Java tutorial, it may be easier to write your code directly into a Java compiler app. You can download these for Android and iOS, or even find web apps that run in your browser. These tools provide everything you need in one place and let you start testing code.
I recommend compilejava.net.
How easy is it to learn Java programming?
If you’re new to Java development, then you may understandably be a little apprehensive. How easy is Java to learn?
This question is somewhat subjective, but I would personally rate Java as being on the slightly harder end of the spectrum. While it is easier than C++ and is often described as more user-friendly, it certainly isn’t as straightforward as options like Python or BASIC, which sit at the very beginner-friendly end of the spectrum. For absolute beginners who want the smoothest ride possible, I would recommend Python as an easier starting point.
C# is also a little easier than Java, although the two are very similar.
Also read: An introduction to C# for Android for beginners
Of course, if you have a specific goal in mind – such as developing apps for Android – it is probably easiest to start with a language that is already supported by that platform.
Java has its quirks, but it’s certainly not impossible to learn and will open up a wealth of opportunities once you crack it. And because Java has so many similarities with C and C#, you’ll be able to transition to those languages without too much effort.
Also read: I want to develop Android apps – which languages should I learn?
What is Java syntax?
Before we dive into the meat of this Java for beginners tutorial, it’s worth taking a moment to examine Java syntax.
Java syntax refers to the way that things are written. Java is very particular about this, and if you don’t write things in a certain way, then your code won’t run!
I actually wrote a whole article on Java syntax for Android development, but to recap on the basics:
Most lines should end with a semicolon “;”
The exception is a line that opens up a new code block. This should end with an open curly bracket “{“. Alternatively, this open bracket can be placed on a new line beneath the statement. Code blocks are chunks of code that perform specific, separate tasks.
Code inside the code block should then be indented to set it apart from the rest.
Open code blocks should be closed with a closing curly bracket “}”.
Comments are lines preceded by “//”
If you hit “run” or “compile” and you get an error, there is a high chance it’s because you missed a semicolon somewhere!
You will never stop doing this and it will never stop being annoying. Joy!
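Taken together, those rules can be seen in one tiny annotated example (the class and method names here are just for illustration, not part of the tutorial's project):

```java
// The open curly bracket starts a code block
public class SyntaxDemo {
    // Comments are lines preceded by "//"
    public static int addOne(int number) {
        int result = number + 1; // most lines end with a semicolon
        return result;           // code inside a block is indented
    } // the closing curly bracket ends the method's code block

    public static void main(String[] args) {
        System.out.println(addOne(41)); // prints 42
    }
}
```

Delete any one of those semicolons or brackets and the compiler will refuse to run the program, which is exactly the kind of error described above.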
With that out of the way, we can dive into the Java tutorial proper!
Java basics: your first program
Head over to compilejava.net and you will be greeted by an editor with a bunch of code already in it.
(If you would rather use a different IDE or app that’s fine too! Chances are your new project will be populated by similar code.)
Delete everything except the following:
public class HelloWorld {
    public static void main(String[] args) {

    }
}
This is what we refer to “in the biz” (this Java tutorial is brought to you by Phil Dunphy) as “boilerplate code.” Boilerplate is any code that is required for practically any program to run.
The first line here defines the “class” which is essentially a module of code. We then need a method within that class, which is a little block of code that performs a task. In every Java program, there needs to be a method called main, as this tells Java where the program starts.
You won’t need to worry about the rest until later. All we need to know for this Java tutorial right now is that the code we actually want to run should be placed within the curly brackets beneath the word “main.”
Place the following statement here:
System.out.print("Hello world!");
This statement will write the words “Hello world!” on the screen. Hit “Compile & Execute” and you’ll be able to see it in action! (It’s a programming tradition to make your first program in any new language say “Hello world!” Programmers are a weird bunch.)
Congratulations! You just wrote your first Java app!
Introducing variables in Java
Now it’s time to cover some more important Java basics. Few things are more fundamental to programming than learning how to use variables!
A variable is essentially a “container” for some data. That means you’ll choose a word that is going to represent a value of some sort. We also need to define variables based on the type of data that they are going to reference.
Three basic types of variable that we are going to introduce in this Java tutorial are:
Integers – Whole numbers.
Floats – Or “floating point variables.” These contain numbers that can include decimals. The “floating point” refers to the decimal place.
Strings – Strings contain alphanumeric characters and symbols. A typical use for a string would be to store someone’s name, or perhaps a sentence.
Once we define a variable, we can then insert it into our code in order to alter the output. For example:
public class HelloWorld {
    public static void main(String[] args) {
        String name = "Adam";
        System.out.print("Hello " + name);
    }
}
In this example code, we have defined a string variable called “name.” We did this by using the data type “String”, followed by the name of our variable, followed by the data. When you place something in inverted commas in Java, it will be interpreted verbatim as a string.
Now we print to the screen as before, but this time we have replaced “Hello world!” with “Hello ” + name. This shows the string “Hello “, followed by whatever value is contained within the following String variable!
The great thing about using variables is that they let us manipulate data so that our code can behave dynamically. By changing the value of name you can change the way the program behaves without altering any actual code!
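The example above used a String; as a quick sketch, the other two basic types work the same way (the class and variable names below are made up for illustration):

```java
public class VariableDemo {
    // multiplies a whole-number count by a price with decimals
    public static float total(int count, float price) {
        return count * price;
    }

    public static void main(String[] args) {
        int wholeNumber = 10;     // an integer holds whole numbers
        float price = 2.5f;       // a float can include decimals (note the "f" suffix)
        String label = "Total: "; // a string holds text

        // variables can be combined and manipulated
        System.out.print(label + total(wholeNumber, price)); // prints "Total: 25.0"
    }
}
```

Change the value of wholeNumber or price and the output changes too, without touching the rest of the code.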
Conditional statements in Java tutorial
Another of the most important Java basics, is getting to grips with conditional statements.
Conditional statements use code blocks that only run under certain conditions. For example, we might want to grant special user privileges to the main user of our app. That’s me by the way.
So to do this, we could use the following code:
public class HelloWorld {
    public static void main(String[] args) {
        String name = "Adam";
        System.out.print("Hello " + name + "\r\n");
        if (name.equals("Adam")) {
            System.out.print("Special user privileges granted!");
        }
    }
}
Run this code and you’ll see that the special permissions are granted. But if you change the value of name to something else, then the code won’t run!
This code uses an “if” statement. This checks to see if the statement contained within the brackets is true. If it is, then the following code block will run. Remember to indent your code and then close the block at the end! If the statement in the brackets is false, then the code will simply skip over that section and continue from the closed brackets onward.
Notice that Java uses two “=” signs to check data and just one to assign it. For strings specifically, the reliable way to compare contents is name.equals("Adam") — on objects, “==” compares references rather than contents, and only appears to work with string literals because identical literals are shared.
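Here is a small sketch of why string contents should be compared with .equals() (the class name is illustrative):

```java
public class ComparisonDemo {
    // compares the characters inside the strings, not the object references
    public static boolean sameText(String a, String b) {
        return a.equals(b);
    }

    public static void main(String[] args) {
        String name = new String("Adam"); // a distinct object with the same characters
        System.out.println(name == "Adam");         // false: different objects
        System.out.println(sameText(name, "Adam")); // true: same characters
    }
}
```

For primitive types like int, “==” is the correct comparison; it is only object types like String where .equals() matters.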
Methods in Java tutorial
One more easy concept we can introduce in this Java tutorial is how to use methods. This will give you a bit more idea regarding the way that Java code is structured and what can be done with it.
All we’re going to do, is take some of the code we’ve already written and then place it inside another method outside of the main method:
public class HelloWorld {
    public static void main(String[] args) {
        String name = "Adam";
        System.out.print("Hello " + name + "\r\n");
        if (name.equals("Adam")) {
            grantPermission();
        }
    }

    static void grantPermission() {
        System.out.print("Special user privileges granted!");
    }
}
We created the new method on the line that starts “static void.” Here, “static” means the method belongs to the class itself rather than to a particular object, and “void” means it doesn’t return any data. You can worry about that later!
But anything we insert inside the following code block will now run any time that we “call” the method by writing its name in our code: grantPermission(). The program will then execute that code block and return to the point it left from.
Were we to write grantPermission() multiple times, the “Special user privileges granted!” message would be displayed multiple times! This is what makes methods such fundamental Java basics: they allow you to perform repetitive tasks without writing out code over and over!
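As a sketch of that idea, here is a hypothetical variant where grantPermission() returns its message instead of printing it directly (class name and the return-value twist are illustrative, not from the tutorial):

```java
public class MethodDemo {
    // a hypothetical variant of grantPermission() that returns its message
    static String grantPermission() {
        return "Special user privileges granted!";
    }

    public static void main(String[] args) {
        // each call runs the same code block again: the message prints three times
        System.out.println(grantPermission());
        System.out.println(grantPermission());
        System.out.println(grantPermission());
    }
}
```

If the message ever needs to change, it changes in exactly one place, which is the whole point of wrapping repetitive work in a method.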
Passing arguments in Java
What’s even better about methods though, is that they can receive and manipulate variables. We do this by passing values into our methods as “arguments.” This is what the brackets following the method name are for.
In the following example, I have created a method that receives a string variable, and I have called that nameCheck. I can then refer to nameCheck from within that code block, and its value will be equal to whatever I placed inside the round brackets when I called the method.
For this Java tutorial, I’ve passed the “name” value to a method and placed the if statement inside there. This way, we could check multiple names in succession, without having to type out the same code over and over!
Hopefully, this gives you an idea of just how powerful methods can be!
public class HelloWorld {
    public static void main(String[] args) {
        String name = "Adam";
        System.out.print("Hello " + name + "\r\n");
        checkUser(name);
    }

    static void checkUser(String nameCheck) {
        if (nameCheck.equals("Adam")) {
            System.out.print("Special user privileges granted!");
        }
    }
}
```
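To make the “multiple names in succession” idea concrete, here is a small sketch (the class name and the extra isSpecial() helper are illustrative additions, not part of the tutorial):

```java
public class UserCheck {
    // only "Adam" gets special privileges in this hypothetical app
    static boolean isSpecial(String nameCheck) {
        return nameCheck.equals("Adam");
    }

    static void checkUser(String nameCheck) {
        if (isSpecial(nameCheck)) {
            System.out.println(nameCheck + ": special user privileges granted!");
        } else {
            System.out.println(nameCheck + ": regular user");
        }
    }

    public static void main(String[] args) {
        checkUser("Adam"); // special
        checkUser("Eve");  // regular
        checkUser("Adam"); // checked again, with no code duplicated
    }
}
```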
That’s all for now!
That brings us to the end of this Java tutorial. Hopefully, you now have a good idea of how to learn Java. You can even write some simple code yourself: using variables and conditional statements, you can actually get Java to do some interesting things already!
The next stage is to understand object-oriented programming and classes. This understanding is what really gives Java and languages like it their power, but it can be a little tricky to wrap your head around at first!
Read also: What is Object Oriented Programming?
The best place to learn more Java programming? Check out our amazing guide from Gary Sims that will take you through the entire process and show you how to leverage those skills to build powerful Android apps. You can get 83% off your purchase if you act now!
Of course, there is much more to learn! Stay tuned for the next Java tutorial, and let us know how you get on in the comments below.
Other frequently asked questions
Q: Are Java and Python similar? A: While these programming languages have their similarities, Java is quite different from Python. Python is paradigm-agnostic, meaning it can be written in a functional or an object-oriented manner. Java is statically typed, whereas Python is dynamically typed. There are also many syntax differences.
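As a quick illustration of the static-typing point (the class name is made up):

```java
public class TypingDemo {
    // the parameter and return types are fixed at compile time
    static int increment(int n) {
        return n + 1;
    }

    public static void main(String[] args) {
        int count = 5;     // count's type is fixed when the code is compiled
        // count = "five"; // uncommenting this line would be a compile-time error
        count = increment(count); // assigning another int is fine
        System.out.println(count); // prints 6
    }
}
```

In Python, by contrast, a variable can be rebound to a value of any type at runtime, and type errors only surface when the offending line runs.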
Q: Should I learn Swift or Java? A: That depends very much on your intended use-case. Swift is for iOS and MacOS development.
Q: Which Java framework should I learn? A: A Java framework is a body of pre-written code that lets you do certain things with your own code, such as building web apps. The answer once again depends on what your intended goals are. You can find a useful list of Java frameworks here.
Q: Can I learn Java without any programming experience? A: If you followed this Java tutorial without too much trouble, then the answer is a resounding yes! It may take a bit of head-scratching, but it is well worth the effort.
source https://www.androidauthority.com/java-tutorial-for-beginners-write-a-simple-app-with-no-previous-experience-1121975/
0 notes
Quote
Continuous Integration (CI) workflows are considered a best practice these days. As in, you work with your version control system (Git), and as you do, CI is doing work for you like running tests, sending notifications, and deploying code. That last part is called Continuous Deployment (CD). But shipping code to a production server often requires paid services. With GitHub Actions, Continuous Deployment is free for everyone. Let’s explore how to set that up.

DevOps is for everyone

As a front-end developer, continuous deployment workflows used to be exciting, but mysterious to me. I remember numerous times being scared to touch deployment configurations. I defaulted to the easy route instead — usually having someone else set it up and maintain it, or manual copying and pasting things in a worst-case scenario. As soon as I understood the basics of rsync, CD finally became tangible to me. With the following GitHub Action workflow, you do not need to be a DevOps specialist; but you’ll still have the tools at hand to set up best practice deployment workflows.

The basics of a Continuous Deployment workflow

So what’s the deal, how does this work? It all starts with CI, which means that you commit code to a shared remote repository, like GitHub, and every push to it will run automated tasks on a remote server. Those tasks could include test and build processes, like linting, concatenation, minification and image optimization, among others.

CD also delivers code to a production website server. That may happen by copying the verified and built code and placing it on the server via FTP, SSH, or by shipping containers to an infrastructure. While every shared hosting package has FTP access, it’s rather unreliable and slow to send many files to a server. And while shipping application containers is a safe way to release complex applications, the infrastructure and setup can be rather complex as well. Deploying code via SSH though is fast, safe and flexible.
Plus, it’s supported by many hosting packages.

How to deploy with rsync

An easy and efficient way to ship files to a server via SSH is rsync, a utility tool to sync files between a source and destination folder, drive or computer. It will only synchronize those files which have changed or don’t already exist at the destination. As it became a standard tool on popular Linux distributions, chances are high you don’t even need to install it.

The most basic operation is as easy as calling rsync SRC DEST to sync files from one directory to another one. However, there are a couple of options you want to consider:

-c compares file changes by checksum, not modification time
-h outputs numbers in a more human readable format
-a retains file attributes and permissions and recursively copies files and directories
-v shows status output
--delete deletes files from the destination that aren’t found in the source (anymore)
--exclude prevents syncing specified files like the .git directory and node_modules

And finally, you want to send the files to a remote server, which makes the full command look like this:

rsync -chav --delete --exclude /.git/ --exclude /node_modules/ ./ [email protected]:/mydir

You could run that command from your local computer to deploy to any live server. But how cool would it be if it was running in a controlled environment from a clean state? Right, that’s what you’re here for. Let’s move on with that.

Create a GitHub Actions workflow

With GitHub Actions you can configure workflows to run on any GitHub event. While there is a marketplace for GitHub Actions, we don’t need any of them but will build our own workflow. To get started, go to the “Actions” tab of your repository and click “Set up a workflow yourself.” This will open the workflow editor with a .yaml template that will be committed to the .github/workflows directory of your repository. When saved, the workflow checks out your repo code and runs some echo commands.
name helps follow the status and results later. run contains the shell commands you want to run in each step.

Define a deployment trigger

Theoretically, every commit to the master branch should be production-ready. However, reality teaches you that you need to test results on the production server after deployment as well and you need to schedule that. We at bleech consider it a best practice to only deploy on workdays — except Fridays and only before 4:00 pm — to make sure we have time to roll back or fix issues during business hours if anything goes wrong.

An easy way to get manual-level control is to set up a branch just for triggering deployments. That way, you can specifically merge your master branch into it whenever you are ready. Call that branch production, let everyone on your team know pushes to that branch are only allowed from the master branch and tell them to do it like this:

git push origin master:production

Here’s how to change your workflow trigger to only run on pushes to that production branch:

name: Deployment
on:
  push:
    branches: [ production ]

Build and verify the theme

I’ll assume you’re using Flynt, our WordPress starter theme, which comes with dependency management via Composer and npm as well as a preconfigured build process. If you’re using a different theme, the build process is likely to be similar, but might need adjustments. And if you’re checking in the built assets to your repository, you can skip all steps except the checkout command.
For our example, let’s make sure that node is executed in the required version and that dependencies are installed before building:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/[email protected]
      with:
        version: 12.x
    - name: Install dependencies
      run: |
        composer install -o
        npm install
    - name: Build
      run: npm run build

The Flynt build task finally requires, lints, compiles, and transpiles Sass and JavaScript files, then adds revisioning to assets to prevent browser cache issues. If anything in the build step fails, the workflow will stop executing and thus prevents you from deploying a broken release.

Configure server access and destination

For the rsync command to run successfully, GitHub needs access to SSH into your server. This can be accomplished by:

Generating a new SSH key (without a passphrase)
Adding the public key to your ~/.ssh/authorized_keys on the production server
Adding the private key as a secret with the name DEPLOY_KEY to the repository

The sync workflow step needs to save the key to a local file, adjust file permissions and pass the file to the rsync command. The destination has to point to your WordPress theme directory on the production server. It’s convenient to define it as a variable so you know what to change when reusing the workflow for future projects.

- name: Sync
  env:
    dest: '[email protected]:/mydir/wp-content/themes/mytheme'
  run: |
    echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
    chmod 600 ./deploy_key
    rsync -chav --delete \
      -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
      --exclude /.git/ \
      --exclude /.github/ \
      --exclude /node_modules/ \
      ./ ${{ env.dest }}

Depending on your project structure, you might want to deploy plugins and other theme related files as well. To accomplish that, change the source and destination to the desired parent directory, make sure to check if the excluded files need an update, and check if any paths in the build process should be adjusted.
Put the pieces together

We’ve covered all necessary steps of the CD process. Now we need to run them in a sequence which should:

Trigger on each push to the production branch
Install dependencies
Build and verify the code
Send the result to a server via rsync

The complete GitHub workflow will look like this:

name: Deployment
on:
  push:
    branches: [ production ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/[email protected]
      with:
        version: 12.x
    - name: Install dependencies
      run: |
        composer install -o
        npm install
    - name: Build
      run: npm run build
    - name: Sync
      env:
        dest: '[email protected]:/mydir/wp-content/themes/mytheme'
      run: |
        echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
        chmod 600 ./deploy_key
        rsync -chav --delete \
          -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
          --exclude /.git/ \
          --exclude /.github/ \
          --exclude /node_modules/ \
          ./ ${{ env.dest }}

To test the workflow, commit the changes, pull them into your local repository and trigger the deployment by pushing your master branch to the production branch:

git push origin master:production

You can follow the status of the execution by going to the “Actions” tab in GitHub, then selecting the recent execution and clicking on the “deploy“ job. The green checkmarks indicate that everything went smoothly. If there are any issues, check the logs of the failed step to fix them.

Check the full report on GitHub

Congratulations! You’ve successfully deployed your WordPress theme to a server. The workflow file can easily be reused for future projects, making continuous deployment setups a breeze.

To further refine your deployment process, the following topics are worth considering:

Caching dependencies to speed up the GitHub workflow
Activating the WordPress maintenance mode while syncing files
Clearing the website cache of a plugin (like Cache Enabler) after the deployment
http://damianfallon.blogspot.com/2020/04/continuous-deployments-for-wordpress.html
0 notes
Text
How to become an Android developer - Android development basics
As a beginner, one of the hardest things is just knowing what you need to learn. Besides taking a look at how to become an Android developer, we will also discuss why you should learn Android development. Here at CodeBrainer, we come across students who ask us what kind of topics they need to learn before they become proficient in Android development. The checklist isn’t short; nevertheless, we have decided to list most of the points that beginners should check off.
First I must emphasise that this is a checklist but you can skip a step and you don’t need to learn it all in one week. It will take quite a bit of your time, but in the end, you will have enough skills to start a project of your own or start asking for internships, help a friend or acquaintance or even start applying for jobs.
We will try to explain a little bit about every topic. But some things you will have to research on your own, nevertheless let us know if you think we should add something to the list.
How to become an Android developer - Checklist
In our opinion, these are the skills and Android development basics you should conquer:
Android Studio
Layout Editor
Emulator and running apps
Android SDK and API version
UI Components and UX
Storing Data locally within the app
Calling REST APIs
Material design, styling and themes
Java or Kotlin and Objective programming
Debugging
Learn how you can start creating apps with Flutter.
How to become an Android developer - WHY?
As I said, before we can talk about what to learn and how to become an Android developer, we have to talk about why you should learn it in the first place. A lot of students wonder where to start and, to be perfectly honest, Android development is an excellent place to start. There are a lot of reasons; I like Android because it is accessible to anyone and you can install the development tools on most operating systems. For example, I run Android Studio on my MacBook Pro :D
Great IDE (Integrated development environment)
We will talk about Android Studio later, but for now, let me just tell you that since it reached version 3, the IDE has been excellent. With every version we get more help in so many sensible ways that you will barely notice how much intelligent assistance the IDE is giving you.
Easier to start than web development
I like web development, but I still like to promote mobile development to beginners. Why? With mobile development, you get a friendly environment from the start. What makes mobile development better for beginners is that we all use mobile phones all the time: you get a feeling for how an app should look and what kind of functionality you will need. With web development, it is harder to get a feel for the whole website, since you are looking at only one page at a time and then doing another Google search. Most of the time, you already own a device, and you can install an app directly onto your device and show it to your friends. In fact, I guarantee they will be amazed at what you can do.
Huge market size and mobile usage is still growing
The mobile application market has an excellent projection: by 2021, it is expected that the number of mobile app downloads worldwide will reach 352 billion. Android has a 76% market share compared to iOS, with 19%. We do have to be fair and admit that iOS is a better earner. Google Play earned $20.1B in revenue, while the App Store made a revenue of $38.5B. Google Play grew about 30 percent over 2016. And without doing any precise calculations, you can clearly see that this is a great market to work in.
New technologies are coming fast
Google likes to be on the cutting edge of technology all the time. Not to mention, more and more of that technology is made available for developers to use. Google is opening its knowledge about machine learning and artificial intelligence with development kits. And these improvements are available to Android developers very quickly. This will keep you on the edge of curiosity and keep you in touch with the ever-evolving world of IT.
Wide range of services for developers
Apart from new and cutting edge technologies, Google offers a lot of services to us out of the box: Maps, Analytics, and Places for location-aware apps. A great place to start is Firebase, which offers notifications, analytics, Crashlytics, and a real-time database (so you can develop apps without the need for your own servers). Additionally, for launching apps all around the world, Test Lab can be a great partner, since you can test your app on a bunch of different devices.
“Android developer” is a great job to have
There are now 2,5 billion monthly active Android devices globally, and it’s the largest reach of any computing platform of its kind. Globally speaking, between 70 to 80% of all mobile devices are Androids. There is a massive demand for new Android developers out there. And because the market is growing, there is a lack of Android developers all around the world. All in all, it is hard to determine an average salary for the whole world, but the fact is that you will get a decent salary for your knowledge no matter where you live. In the light of what has been said, let's take a look at how to become an Android developer.
Android Studio
Since Android Studio is the best IDE (Integrated Development Environment) for Android development, this is the first thing you must conquer. For the most part, our content focuses on explaining about Android Studio as we explain development for Android. This goes the same for the Layout editor and the Code Editor.
Android Studio - Layout Editor
The Layout Editor is part of Android Studio. In fact, this is the place where you design the UI (User Interface) for your app. The main parts of an Android project are XML files for designing activities, drawables, and other resources, and Java (or Kotlin) files for all the code you will write. Of course, on your path to becoming an Android developer, you will encounter more advanced projects where you will also learn about the structure of a project in detail.
But what I love most is that with the layout editor you are using a drag&drop approach and you can immediately see what you have done. You can place elements on the layout. You can group them into containers or views all using just your mouse. And this is great for beginners because you can get familiar with the code while already using a development environment. Even for mature developers, using visual tools helps when setting up screens, and in the layout editor, you can simulate the size of devices so that you can check if your design will work on all device sizes.
In addition, we have added a more in-depth look into Layout Editor as well. Check our Layout Editor blog post, learn about it and also find out a little bit about a few hidden features, so that you will develop your activities (screens) with ease.
Android Studio - Emulator and running apps
As we are creating apps for Android, we want to run them to see how they look. One of the first things we need for that is an emulator. In fact, Android Studio has the ability to run an emulator out of the box. It has a lot of features. It runs fast and looks nice. All in all, it is a perfect tool to have. Generally speaking, this is just like going to the store and picking the best specs for your device, but in our case, it will be a virtual one.
Unfortunately, sometimes you have to prepare your computer to run an emulator. Here is an extended manual on how to configure hardware acceleration for an emulator or read our blog post on how to run an emulator.
You can also run an app on your mobile phone. This a good approach as well, as you check the touch and feel on a real device as you develop your app.
Here is an explanation on how to install drivers to run an app on your device.
Android SDK and API version
This is really a broad topic since it contains all that Android is about, all its functionalities. But as a beginner when you are starting to learn how to become an android developer, all you need to know is where to look for a new version, how to install it and a few pointers on which one to use.
A simple explanation would be: SDK (Software Development Kit) is a bunch of tools, documentation, examples and code for us, developers to use. API (Application Programming Interface) is an actual collection of Android functionalities, from showing screens, pop-ups, notifications… everything. We will give you a hint, use the API above 21 since this will get enough of devices to work with. And for a new project always aim for the last three major versions.
UI Components and UX
User interface components will link your app together, all the code, knowledge and data will be presented with some form of UI. UI means “user interface”, and this is what a user sees. UX means “user experience”, and this is the flow through the app, interactions, reactions to a users input, the whole story that happens within the app.
Of course, the best source of getting familiar with UI components would be our Calculator course since it explains most of them in great detail.
We have a few more blog posts explaining other UI Components, like a spinner, radio button and mail performing checks in our registration form blog post.
Storing Data locally within the app
Storing data is essential for every app. It can be as simple as storing an email for a login, or as complex as a full-blown database with tables, relations, filters… Moreover, we think you should go step by step on this topic and learn a little bit about what data is, what entities are, and where to store them.
The first topic we cover is in our blog post about storing data within SharedPreferences, and you should read it. The next step is storing even more data by using Room, which is a great implementation of ORM (object-relational mapping) and is part of Android. ORM lets us work with data as classes in our code, which is a more modern approach, but Room still allows us to use SQL statements if we want to.
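To illustrate the ORM idea in isolation — this is plain Java with invented names (Note, InMemoryNoteDao), not Room's actual API — data is modeled as a class, and a DAO (data access object) hides the storage details behind a small interface:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A plain data class standing in for what Room would call an @Entity.
class Note {
    final int id;
    final String text;
    Note(int id, String text) { this.id = id; this.text = text; }
}

// The DAO hides how data is stored; Room generates SQLite-backed
// code with this shape from an annotated interface.
class InMemoryNoteDao {
    private final Map<Integer, Note> table = new HashMap<>();

    void insert(Note note) { table.put(note.id, note); }

    Note findById(int id) { return table.get(id); }

    List<Note> getAll() { return new ArrayList<>(table.values()); }
}
```

The payoff is that the rest of the app only ever touches Note objects and DAO methods, so swapping the in-memory map for a real database does not ripple through the codebase.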
Calling REST APIs
All mature apps use some kind of REST (Representational State Transfer) calls. For example, if an app wants to know the temperature outside in some city, it would use a REST API to get the data. If we want to log into a social network and get a list of friends, we would use a REST API. This topic is strongly linked with storing data, since we are storing and reading data not just within an app but on a server. In short, the main thing to learn is how to transform data from a REST API into local data structures and classes. And be sure to learn how to react if an error occurs and how to send data to a server.
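As a rough sketch of that "transform API data into local classes, and react to errors" idea — plain Java with a deliberately simplified response format; Weather and WeatherClient are invented for this example, and a real app would use an HTTP client plus a JSON library such as Gson or Moshi:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// The local data class the API response gets mapped into.
class Weather {
    final String city;
    final double temperature;
    Weather(String city, double temperature) {
        this.city = city;
        this.temperature = temperature;
    }
}

class WeatherClient {
    // Maps a (simplified) JSON body into a Weather object and
    // reacts to HTTP error codes instead of ignoring them.
    static Weather parseResponse(int statusCode, String body) {
        if (statusCode != 200) {
            throw new IllegalStateException("Server returned HTTP " + statusCode);
        }
        Matcher m = Pattern
            .compile("\"city\"\\s*:\\s*\"([^\"]+)\".*\"temp\"\\s*:\\s*([-\\d.]+)")
            .matcher(body);
        if (!m.find()) {
            throw new IllegalArgumentException("Unexpected response format: " + body);
        }
        return new Weather(m.group(1), Double.parseDouble(m.group(2)));
    }
}
```

The important habit is the first branch: checking the status code and failing loudly, rather than handing half-parsed data to the rest of the app.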
Material design, styling and themes
If you are thinking about how to become an Android developer, you have to learn how to create a nice app easily. For beginners, it is best to use material design and the Android Support library, which can help us make an app that will look great even without using a designer. All things considered, for mature apps, we will still use a designer to prepare a UI and UX, but for Android, it will be based on some variant of material design anyway. To summarize, on your path of how to become an Android developer, learning material design is essential.
Java or Kotlin and Objective programming
Both Java and Kotlin are excellent languages to start learning. Java has more structure to it, while Kotlin is more modern in style, and both are good choices. Java is the right choice if you want to broaden your skills with back-end development, since developers for Java back ends are in very high demand. Kotlin is more concise; this means you write less code, and more things are done for you.
We are still teaching Java since it can be used elsewhere as well (a Java backend, for example), but Kotlin is a great choice too. No matter which language we choose, we will have to learn some basics about it. In our courses, you will learn about primitive data types, Strings, control structures (if, switch…), what methods are and, of course, all about classes (inheritance, interfaces and abstract classes). Arrays, sets, maps and other extended data types help us when working with a lot of data, for example storing a list of people, a list of cars, TODO tasks…
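To make those building blocks concrete, here is a small plain-Java sketch (all class names are invented for illustration) that combines an interface, inheritance, a collection and a control structure around the TODO-task example:

```java
import java.util.ArrayList;
import java.util.List;

interface Describable {
    String describe();
}

// Inheritance: an abstract base class shared by more specific task types.
abstract class Task implements Describable {
    final String title;
    Task(String title) { this.title = title; }
}

class TodoTask extends Task {
    boolean done;
    TodoTask(String title) { super(title); }

    @Override
    public String describe() {
        return (done ? "[x] " : "[ ] ") + title;
    }
}

class TaskList {
    private final List<TodoTask> tasks = new ArrayList<>();

    void add(TodoTask t) { tasks.add(t); }

    // Control structures + collections: count the unfinished tasks.
    int openCount() {
        int open = 0;
        for (TodoTask t : tasks) {
            if (!t.done) open++;
        }
        return open;
    }
}
```

Small hierarchies like this are exactly why the basics matter: once the types are clean, larger features are just more methods on the same structure.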
Why is it important to dive deep into essentials? Having a good foundation on essentials and Java basics will help you develop more complex applications while keeping them simple and organised at the same time.
Debugging
Debugging is an important part of programming since a lot of unpredictable flows will happen in our apps and we need a way of figuring out what went wrong, what was the source of an error and find a code that produced that error. When learning how to become an Android developer this might look like a tough topic, but at the core, it is something that will help you learn more advanced topics with ease since you will know how to track what the application is doing behind the scenes.
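One concrete habit worth practicing early is reading a stack trace, since it is the trail that leads from an error back to the code that produced it. A plain-Java sketch (the class and method names are invented) that provokes an exception and captures the trace text you would normally read in Logcat or the debugger:

```java
class DebugDemo {
    static int parseAge(String input) {
        // A typical bug source: unvalidated user input.
        return Integer.parseInt(input);
    }

    // Returns the stack trace text for a failing call.
    static String traceFor(String badInput) {
        try {
            parseAge(badInput);
            return "";
        } catch (NumberFormatException e) {
            StringBuilder sb = new StringBuilder(e.toString());
            for (StackTraceElement frame : e.getStackTrace()) {
                sb.append("\n    at ").append(frame);
            }
            // The frames nearest your own package name point at
            // the code that produced the error.
            return sb.toString();
        }
    }
}
```

Once you can find your own method in a trace like this, breakpoints and step-through debugging in Android Studio follow naturally.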
Making your app ready for Google Play
Equally important to all the skills mentioned above on how to become an android developer is, opening your app to users all around the globe. All things considered, this is one of the primary motivators for building apps in the first place. We must have knowledge about signing our apps, how to upload them to Google Play, what kind of text descriptions we need, screenshots we will show in the store... We need icons, designs and texts. All in all, this knowledge will come in handy a lot. In the first place, it will help you distribute an app to the first test users in the alpha store and then move to a more broad audience with beta and the final step with the public release.
How to become an Android developer - Conclusion
This is just a short list of topics that we here at CodeBrainer think you need when you think about how to become an Android developer. And we all want you to be a great developer and make us proud. We will add advanced topics as we go. Advanced topics will just make you stand out from the crowd and give you comprehensive knowledge about Android. In the long run, what you need is experience, and this means practice, and then more practice.
Source: https://www.codebrainer.com/blog/what-to-learn-checklist-for-android-beginners
The Training Commission
After the end of a second ultraviolent American civil war, after we’ve placed the state under the guidance of automated systems—well, there’s inevitably going to be a Smithsonian exhibit. Ingrid Burrington and Brendan Byrne’s brilliant new speculative fiction newsletter—which received support from the Mozilla Foundation, and which we’re thrilled to share the first installment here today—collects the dispatches of an architecture critic with personal ties to the bloody conflict who is assigned to review the museum’s new Reconciliation Wing.
The authors explain: “The Training Commission is a speculative fiction newsletter about the compromises and consequences of applying technological solutionism to collective trauma. The USA, still reeling from a civil war colloquially referred to as the Shitstorm, has adopted an algorithmic society to free the nation from the pain of governing itself.” It’s also a hell of a story. There will be six installments in all, arriving weekly—subscribe here to receive the next five direct, as they say, to your inbox. Enjoy. -the ed
From: Aoife T <[email protected]> Subject: re: This is a bad idea Date: May 11, 2038 3:49 PM EDT To: Ellen Leavitt <[email protected]>
I understand why you think that would work, Ellen, but aside from generally having no interest in putting my personal life on display like that, I really don’t think me writing a tearjerker op-ed about a traumatizing exhibition display is going to get the Smithsonian to change their minds so much as convince them that the controversy will draw crowds. I’d rather deal with them through backchannels with my mom and sister on board, try to make this all go away quietly before the museum opens.
Thanks for the Kilfe token, I just saw it come through on the ledger. I’ll be running the runnable parts of the draft in my newsletter, I guess. Sorry again to let you down on this. I might have a beat on something interesting soon–too early to say but it means I think I’ll be down in DC for at least another week.
From: Aoife T <[email protected]> Subject: Some Things Don’t Belong In A Museum Date: May 12, 2038 4:30:58 PM EDT To: [email protected]
Apologies that it’s been a while since the last one of these. I’ve been busy, not successful busy, mostly pitching pieces in my new/old specialty. You’d think a contemporary moment so focused on rebuilding America would give some kind of shit about architecture, but uhm, nope.
What follows began as a review of the new Reconciliation Wing of the Smithsonian which a Very Kind Editor cherry-picked me for. It’s good to get paid to visit my hometown because, as my regular readers know, I will otherwise avoid the District like the sweaty American bog it is. I was apparently desperate enough for work to imagine the Reconciliation Wing might not feature an intersection with my own personal history, which, of course, was deeply delusional, and I took myself out of the game in a semi-dramatic fashion. Suffice to say, currently I’m fine but couldn’t really file something this incomplete so I’m sharing what parts of it could be salvaged here.
As seen from the National Mall ferry, the finally-completed Reconciliation Wing of the Smithsonian American History Museum is a major architectural interruption in the capital’s low-lying landscape of retrofitted and elevated 20th-century buildings–which is ironic, considering how much attention went to making it seamlessly connect to the natural systems of the Anacostia canals. The first new construction project on the Mall since the creation of the DC canal system, the Reconciliation Wing has been the subject of curiosity not only as an opening move in historicizing the National Shitstorm (ahem, The Interstate Conflict) but also as a formal progression in post-Capitol architecture. (Unless, of course, you believe that the bare-chested, perpetually shouting hologram of Alex Jones in the rear sculpture garden of the Newseum cannot be topped.)
The wing’s designer, Kay Mangakāhia, was a controversial selection from the Smithsonian and Ashburn Institute’s open call for submissions. An intern at Bjarke Ingels Group at the time, Mangakāhia was notable not only for her age (at twenty-two, she was barely ten at the time the Ashburn Accords were even signed) but her permaculture-infused proposal. The mycelium buttresses and living fungal structures of the Reconciliation Wing are now in high demand, but it took Mangakāhia’s persistence and the algorithm’s faith in her design to reach this plateau. The thriving structure’s delicate complexity and environmental pragmatism reflect the oft-quoted line from Mangakāhia’s original proposal: “survival without poetics is a carceral existence.”
One can’t say such an attitude pervades the exhibits in the Reconciliation Wing. Upon entry, a flickering series of Extremely Relatable Human Faces projected on black plinths greet visitors. The visages display a fairly narrow scale of emotions between Makes You Think and Slight but Telling Emotional Pain but somehow they manage to be all very specific. No context is provided. Given the purpose of the wing, one might suspect that these are some of the IRL victims of what the museum seems to have decided we’re calling “The First Algorithmic Society.”
Only upon arriving at a small, dim aperture is context provided: the portraits are all visuals generated by AIs developed pre-Shitstorm, let loose to slither upstream into visitors’ phones. They cull contact info, pictures, bank accounts etc. and put together a monstermash of the type of person you’re most likely to have an empathetic reaction to, then plug said persona into the loop, along with the last fifty or so visitors’.
This led to the other journalists in attendance performing variations on the exhausted sigh, since recent years have seen around half a dozen gallery shows in NYC using some version of this shock tactic (though, to be fair, rarely with the technical success of the Reconciliation Wing). While this installation is no doubt supposed to primarily remind visitors of the prevailing ease with which corporations accessed our pocket technological unconsciousnesses pre-Ashburn, it also serves the dual purpose of showing how vulnerable Palantir’s National Firewall is to even ridiculously outdated tech. Hence why the feds keep running that Don’t Bring Your Phone to China/Don’t Actually Go to China Ever awareness campaign. (It shouldn’t surprise you that Vera’s written about this. Read her shit!)
Next is a long, narrow room skirted on the left by an unbroken screen which features a 1990s techno-thriller code waterfall with, again, no context. On the right runs a series of pictures, videos and artifacts designed to shock viewers into clubsterbomb memories–the remnants of a Google bus retrofitted and weaponized into a battering ram, that famous photo of the National Guard standing down at one of the many early BLM standoffs (everyone remembers the photo, never the standoff), a yellowing final print edition of the Washington Post.
To be fair, the Smithsonian’s only getting a fraction of the archival materials collected by the Ashburn Institute as part of the truth and reconciliation process. (This controversy–the splintering of the archive and intra-federal agency squabbles over it–does not get a mention in the exhibition.) Of course they went with the most bombastic acquisitions. But for all the attempted sensory overload, the wall text and captions are jarringly milquetoast, acquiescing to the kind of both-sides-ism that heavily aided the collapse of consensus truths in the first place. I wondered what kind of exhibit might have emerged had the Smithsonian received the full archives of the Training Commission–side note, has anyone ever actually referred to it as the Ashburn Truth and Reconciliation Council For A New American Consensus outside of official documents? Even Darcy Lawson called it the TC in her fucking victory lap TED Talk last year. When the director of the Ashburn Institute has embraced a term originally coined and deployed by critics of the project it seems like it might be time to drop the formalities.
Presumably, the TC is at least acknowledged in the exhibition. Considering that it enabled UBI, closed (almost) every prison in the country, and effectively automated the office of the Presidency out of existence, it would have to be. But I didn’t get that far.
(Here endeth the non-article.)
As longtime readers already know, I write about architecture and design here, not my brother. In fact, I don’t write about him at all. I have no interest in following in Ciarnán Whelan’s investigative reporter footsteps or reflecting on what happened to him in any public setting. I’m hoping that by the time the Reconciliation Wing opens to the public, a particularly distasteful section of the exhibition will be revised or altogether removed. But to include something so graphic with so little warning, with such a manipulative experience design, and with the gall to strategically place tissue boxes around the space as though that’s an act of mercy? It’s cheap and insulting. It doesn’t deserve to be written about. So I didn’t write about it.
Thanks for subscribing (and reading). Depending on whether a piece an editor’s been sitting on for months ever lands I might have something old-new for you next week.
From: Aoife T <[email protected]> Subject: Deadtech from a Dead Guy Date: May 13, 2038 2:31:58 AM EDT To: Avi Huerta <[email protected]>
Avi,
Did you read my last stringr newsletter? I mean, probably not by now since it just went out like under twelve hours ago and you have a small excellent child. But I can’t sleep, and you’re the kind of person who might be able to help but you also probably should read that first for context. (And, as context for the context, most of what’s below is what I wrote in a fugue state before realizing that I couldn’t send it to my editor.)
So I knew the real reason I got a press pass to the Reconciliation Wing preview wasn’t my bylines so much as my real last name. The press tour minders were practically levitating with morbid curiosity when I arrived. I managed to ditch them, lingering and checking photo credits (nerd) by about halfway through the exhibit. This meant, thankfully, that there was no one around when I turned the corner into the section I had secretly hoped wouldn’t be included: the tragic death of renowned journalist Ciarnán Whelan while embedded with the Last Luddite Revolutionary Guard, declared here by the museum to be a “turning point” in the Interstate Conflict.
I mean, I was expecting some triggering bullshit, but I wasn’t expecting the audacity of how it was delivered. Instead of taking the larger-than-life screen approach with that portrait everyone loves to use of him or a slo-mo attempt to make a snuff film elegiac, I got a fucking push notification on my phone from the museum AI.
“Please be advised that the following content may be disturbing to some,” it read. It turned out that wasn’t a notice to give you a fucking choice, just a preamble before the video started to play and I was fucking thirteen years old again, staring at my palm and a video of my big dumb reporter brother using his “serious correspondent voice” I always made fun of, just outside a New Mexico Facebook data center embedded with the Ludds. People forget how long the broadcast ran before the too-good-for-a-minor-militia “DIY” quadcopter IED actually hit. (This was, of course, the video that was broadcast on Facebook Live, the one that people said Facebook tweaked the algo to downrank when their role in the attack became clear. It didn’t work. As the wall text accurately notes, most people, like me, saw it live.)
The wall displays telegraphed the rest of it, though mostly I’m just guessing from what I vaguely remember seeing spinning on the walls in front of me right before I blacked out mid-panic attack. 90% sure they have a shot of Faraday Fields under construction, which should amuse you; also seemed like they get into the conspiracy theory/ies, which probably won’t.
I woke up in a basement office of the old Smithsonian, somewhere far below the canals. A slouchy middle-aged guy with no hair on his head and a throwback 2010s beard was sitting by the door, scrolling through his phone. “Welcome back,” he said, gesturing toward an ancient percolator with the elan of a long-suffering mid-level bureaucrat. The coffee smelled about as appealing as Anacostia scumwater, but I was too tired to turn it down.
I asked if I’d been out long, a little thrown that the Smithsonian’s idea of first aid was depositing me in an office with some rando who I definitely hadn’t seen on the press tour.
“A little more than an hour. The tour’s over. If you want to see the rest of it I can take you around in a bit.” Eyes a little too steady on me, he took the smallest sip of coffee from a mug which read No Taxation Without Input/Output. “You’re a good writer. I subscribe to your Stringr.”
“No shit, thanks man. What’s your name?”
“I was surprised to hear you took this gig,” he added, “Considering.” My face must have done something because he ducked his head slightly and said, “Sorry. Just came out.”
“Nothing new. Half my subscribers are legacy leftovers. Pity’s a driving force in my economic security, if you wanna call it that.”
His face compressed into a porpoise’s little O. “That can’t be true.”
(It’s true, shut up Avi, it’s true.)
I sipped some of the coffee, letting him know via performative sigh that it was shit. “So what’s your deal, guy? You volunteer to babysit me while I’m unconscious to fanboi out here or is this like your actual job?”
Said guy did some seriously inscrutable facial muscle constrictions, which I studied as an example of how not to behave towards formerly unconscious people. Then he smiled suddenly and said, “I have to get back to work.” He raised his eyebrows, actually raised his eyebrows, and gestured at the door.
“Well,” I said, standing a little unsteadily, blowing on and sipping the rough coffee one last time. “Thanks for the hospitality, I guess.” I watched him watch my right hand replace the coffee cup. I was pissed at myself that it couldn’t stop trembling, and I was pissed at him for noticing it. “You know whoever designed that section on my brother?”
“No.”
“You know who approved it?”
He thought about that a second. “Yes.”
“Do me a favor and tell them it’s manipulative and crass? That no one fucking needs to relive that?”
He nodded once, looking down at his coffee. I left before he could put his foot in his mouth again. Outside, in an arcing, narrow corridor I turned to see the name on the door: John Temblaine Paulson.
Shockingly, my phone had already synched up with the Smithsonian’s wayfinding platform, which guided me up two separate elevators then shunted me out a service exit onto Mangakāhia’s rhizomatic terrace. I took about three steps before palming my juul out of my bag and putting it to my lips, automatically clicking the button and drawing in hard before realizing that I had clicked no button and was drawing around an object which was definitely not providing me with a long-overdue nicotine hit.
It was a USB stick. The kind you might use in, like, 2008. Dead tech, and it looked it: scarred light purple shell and a connector skewed so hard I doubted its operability.
Avi, you are well aware that I have a fairly disordered work/home/personal life, but you’ve known me long enough to know my bag is always ordered. And never have I put a USB stick in my bag. Never have I, as an adult, even used a USB stick, much less carried one on my person. So John Temblaine Paulson had, quite obviously, stuck it in there.
Recalling his idle phone-scrolling when I came to and the inscrutable creepy expressions, I concluded the guy probably filmed me passed out in his office chair as some weird sex thing, then put that video on the USB somehow and left in my bag to taunt me.
Which, as I type this, sounds kind of insane but I was also coming off a blackout induced by re-watching my brother’s livestreamed murder, so logical conclusions weren’t exactly in reach. Plus the only thing in my stomach at that point was that shit museum coffee.
As I returned to the museum entrance the elderly docent who’d processed my credentials two hours ago welcomed me with a smile that demonstrated she’d completely forgotten who I was. “Lemme tell you about the kind of people you got working here,” I spat. “John Temblaine Paulson, that weird old pervert, how could you just let him–”
“John?” said the docent.
“–scoop me up like I was a puppy or something like small and stupid and throw me over his shoulder like a sack of onions or whatever he did, maybe he used a handtruck–”
“Paulson?”
“–and just spirit me down to his little serial killer sanctum and video me while I was passed out in his shitty little Federal-ass stiff-ass chair–”
“Temblaine?”
“Yeah, don’t even try to tell me you don’t know him.”
“Of course I know him, dear. He’s in Iceland for the month.”
That set me back, my jaw going while my brain stopped, and, luckily, nothing more coming out of my mouth. The docent smiled at me like she was worried I might be about to stroke out. “There’s no one in his office then?” I mumbled.
“Oh, that should be locked,” said the docent, but she was catching up and looking all concerned. “Were you there? In Mr. Temblaine Paulson’s office? Did someone take you there?”
And here, embarrassed and out of it yet suddenly aware of my own behavior, I was saying things like I’m confused, I think, apologies, you don’t remember who I am do you? and backing out of the lobby. With the docent oozing concerned utterances in my general direction, I fled through Mangakāhia’s rhizomes and caught a ferry back to the sliver of shipping container I’d reserved on the Marion Barry Inlet (of course I didn’t tell my mom I was in town, fuck’s sake). Wrote the article, cut off the part marked HAZARD PERSONAL SHIT, sent the other chunk to Ellen, fell asleep for three hours, woke up, wrote Ellen an email saying the article was shit, and then she said no it wasn’t but yeah she couldn’t run it, and then spent the rest of the night listening to the arrhythmic thud of water against the container hull and hating myself.
I tried to clear my head this morning by heading up to Air and Space. I know, I know you fucking hate that place, but my childhood nostalgia still beats out my discomfort at imperialist propaganda. It’s one of the last places in this city where I can actually space out.
You’ll be shocked to hear this is directly related to Ciarnán taking me there routinely as a key part of Big Brother Babysitting. Specifically, the museum’s second floor, where an exposed platform lets you look down on various high points of colonialist engineering. There’s a glass partition that I’d press against, as if there was nothing between me and the immense sun-drenched lacuna beneath us, Ciarnán at the ready just in case the glass shattered under the stress of my little form.
For just a minute, fingers dragging the smudging glass, now knee-height, looking down at the overlit off-season emptiness, I felt like I just might fall, like I just might be pulled back.
When I returned to the world somewhere around the Drone Wing, my phone buzzed insistently with one of FBUS’ all-hands alerts. Automatically I obeyed and was rewarded with not-John Temblaine Paulson’s face enclosed in a little blue box. “Ashburn Institute staffer found dead in Potomac.” As my eyes blurred the images and my upper back instinctively scrunched into a defensive hunch, my hand curled around the USB stick still shoved in my pocket, fingernail scouring it again and again as if that might reveal whatever was stored inside.
So: can I come visit? Whatever this guy wanted me to see was apparently important enough to fake his way into the Smithsonian, and if I hand the USB to the case workers I’ll probably never find out what’s on it. You, on the other hand, have an oracular way with the dead tech, and who knows, maybe it’ll have some fun dirt on our New Algorithmic Society we can send to a real journalist or whatever. I mean, it’s probably not real spooky ops shit. But if it is, it’ll at least be interesting, right?
A
Working with Microsoft Azure for 20 hours and why I will not use it again
Last weekend I attended a Hackathon at Microsoft. Overall it was an awesome experience and I had a lot of fun, so this post has nothing to do with the event itself and is also not my overall opinion of Microsoft. They do awesome stuff in a lot of fields, but with Azure they are definitely underdelivering.
During the event, I started to get in touch with the Azure platform. Our project idea was to create a website where you can search for news, and then via sentiment analysis this news would be sorted by "happiness". The news search and sentiment analysis are offered via Azure's so-called Cognitive Services, which abstract the ML models away so you can simply use an API to access those services...so far so good. With these preconditions, most of you coders out there will have the thought: "This sounds too easy to fill 24h of programming". Exactly what I thought...I was already thinking about also coding an Alexa skill and so on to fill the time. With two experienced developers, we thought the backend would be done in about 4h (a conservative calculation), as it would only be stitching together three APIs and delivering that info to a JSON REST API for our frontend team. To keep the fun up and get more learnings out of the project, we decided to do the backend as a serverless function. But then Azure got in our way...
In the end, it took us ~9h to develop the backend as a serverless function that is mainly a 40-line JavaScript file. We had to develop it in the in-browser "editor" that Azure offers, as all the other approaches we tried didn't work out and we ended up abandoning them. Once again: 9 hours for 40 lines of JS code stitching together three APIs...that is insane. (Btw. at 3 am we decided to switch to GCP (Google Cloud Platform) and that did the job in about 45 minutes)
So for sure we did things wrong and it could have been done faster, but this blog post is about the hard onboarding and overall bad structure of Azure. Please also keep in mind that Azure is still in a more-or-less early stage and not everything in there is wrong. In the following, I will walk you through the timeline of this disaster and the suggestions I have in mind for fixing some of the most confusing steps. Actually, I will try to avoid these mistakes in my own future projects, so thanks, Microsoft, for showing me a way how not to do things xD
Just a bit more background: My partner in the backend had some experience with GCP and I do most of my current projects with AWS, so we did know how stuff works there...couldn't be too hard to transfer that knowledge to the Azure platform.
Start of the project
So first of all, creating a new Azure account: that is not that hard, and after entering credit card info you get 100$ of free credit. I actually like how Microsoft solved that here: you have two plans. You start with the 100$ free tier, and if you spend all of that money you manually have to change to the pay-as-you-go plan. That protects you from opening up an account, doing some testing, forgetting about it and then getting a huge bill a month later (happened to me with AWS). So that is nice for protecting new users who are just starting to test the system. Good job here, Microsoft!
After setting up the account I created a new project and added some of the resources we needed. Creating a serverless function, I noticed the tag "(Preview)" on the function I created but didn't think more about it...but actually, that tag should read something like Experimental/Do not use/Will most likely not work properly. We created a Python serverless function (apparently Python functions are still beta here) and tried to get some code in there.
There are three ways to get code into an azure function:
Web "editor"
Azure CLI
VS Code
...for full-featured functions. As we selected the experimental/beta/preview Python runtime, we only had the latter two options. Not that bad, as it is the same for AWS, and I am used to deploying my code via the AWS CLI...shouldn't be much harder with Azure.
My suggestion: do not publish functionality that is obviously not ready yet. Do internal testing instead of using your users for that task.
Azure plugins for VS code
Microsoft offers a wide range of VS Code plugins for Azure. As that is my main editor anyway, I wanted to give them a try. For the serverless functions functionality you need the Functions plugin and about 9 other mandatory ones that form some sort of base plugins. 500mb and three VS Code crashes later, the required plugins were finally installed properly. The recommended login method did not work and I had to choose the method via the browser. Not that big of a deal, but as they recommend the inline method, I would expect that one to work. (It didn't work for the other folks on my team either...so it had nothing to do with my particular machine)
You would think that 500mb should be enough for finally being able to deploy some code...but you still need 200mb more for the Azure cli that is required for the plugins to work properly.
Finally having installed all of it, you can see all your Azure functions and resources in VS Code. I started to get a bit excited, as it looked like the development from now on would be straightforward and easier than what I am used to from AWS.
But that 700mb of code did not work properly...the most important function, "deploy", failed without any in-depth error message...AAAAAAARRRG. Why do I have to install all that crap when it can't do the simplest task it exists for: getting my code into their cloud.
Keep your tooling modular and try to do fewer things, but do them right
Code templates
A nice idea is, on creating a new serverless function Azure greets you with a basic boilerplate code example showing you how to handle the basic data interfaces.
Maybe also because we selected the alpha functionality "Python", we didn't get Python code here but JavaScript. So your function is prepopulated with code that is not able to run because it is in the wrong programming language. We were lucky and recognized that right away, but you could get really confusing error messages here if you started developing in JS while actually having a Python runtime.
Better no boilerplate code than one in the wrong programming language
But at least it is colorful
So, next try with the Azure CLI. The first thing you notice is that the CLI has all sorts of different colors...but that does not help if you are annoyed and want to get things done.
That is a thing you also notice in the Azure web interface...it has quite some UX issues, but it does have more than five color themes you can choose from for styling the UI...not sure you set your priorities right here, Microsoft ;)
Also, the CLI did not get us where we wanted....either of our own incompetence or the CLI, no clue. Either way, I would blame Azure as it is their job to help developers onboarding and at least get basic tasks (we still only want to deploy a simple "hello world") done in an acceptable time.
Focus less on making your UI shine in every color of the rainbow and try to improve documentation and onboarding examples
Full ownership of a resource still does not give you full privileges
After finally being able to deploy at least the "hello world", we wanted to go a step further: working concurrently on the project. Yes, until now we had mainly done pair programming on a single machine.
As the owner of the resource, I wanted to give my teammate full access to it so he could work on the resource and add functions if required. I granted him "Owner" access rights (the highest available), but he still was not able to work properly with that function. In the web UI it more or less worked, but in VS Code there was again no chance to do anything (adding a function or deploying it). I ended up doing something that goes against everything I have learned about security: I logged in with my credentials on his machine.
So imagine yourself sitting in front of your laptop for about 4 1/2 hours by now, without having managed to do any of your real work.
Ditching Azure Functions and switching to GCP
That was the moment we ditched the idea of building the backend as an Azure Function. We switched to GCP and started all over again there. As I had never worked with that platform either, I expected a start as hard as the last few hours with Azure had been. Instead, about 25 minutes later we had achieved more on GCP than on Azure until then.
One thing both Azure and GCP do better than AWS: they show the logs of a serverless function in the same window as the function itself. AWS takes a different approach here, and you have to switch to the cloud logs when you want information about your function and how it worked. Props to both Google and Microsoft for solving this a lot better!
An actual hint for AWS: give your users all control and info in a single place
Cognitive services
The prizes you could win at the hackathon were tied to using Azure, so we stuck with the Cognitive Services for the news search and the sentiment analysis. Overall the API is straightforward: send your data and get the results back.
One thing we were told in a presentation, and that you should keep in mind when using the Cognitive Services: you do not control the model, and it could change at any moment. So if you use the Cognitive Services in production, you should continuously check that the API hasn't changed its behavior in a way that affects your product negatively. But most of the time it is still a lot cheaper and better than building the model yourself.
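In practice, that continuous check can be as small as periodically re-scoring a fixed set of reference sentences against the hosted sentiment API and diffing the results against a stored baseline. A minimal sketch, with hypothetical data shapes and no real Azure calls:

```javascript
// Sketch of a drift check for a hosted sentiment model (names hypothetical).
// Re-score a fixed reference set, then compare against a stored baseline;
// a large deviation suggests the provider changed the model underneath you.
function detectDrift(baseline, latest, tolerance = 0.15) {
  // baseline/latest: arrays of { id, score } with scores in [0, 1]
  const byId = new Map(latest.map((r) => [r.id, r.score]));
  return baseline
    .filter((b) => {
      const score = byId.get(b.id);
      // A missing result counts as drift, too.
      return score === undefined || Math.abs(score - b.score) > tolerance;
    })
    .map((b) => b.id);
}

// Sentence "s2" moved from clearly positive (0.9) to borderline (0.4): flag it.
const baseline = [{ id: "s1", score: 0.1 }, { id: "s2", score: 0.9 }];
const latest = [{ id: "s1", score: 0.12 }, { id: "s2", score: 0.4 }];
console.log(detectDrift(baseline, latest)); // → [ "s2" ]
```

Wired to a scheduler, a non-empty result would trigger an alert before customers notice the behavior change.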
The problems we had with the services were again authentication issues. Quite confusingly, some of the Cognitive Services (e.g. the sentiment analysis) have different API base URLs depending on where you register the service, and others do not. I assume they need that manual selection of data centers for a particular (unknown to me) reason. Still, I would propose binding all the Cognitive Services to a location consistently.
The news search, for example, is not bound to a location, so we had two different API base URL behaviors in our short and simple application:
One URL for all locations.
Only a certain location is valid for your resource. If you point to the wrong API location, you get an "unauthorized" response.
Pointing to the wrong location is pure incompetence on the developer's side, but it would help a lot if there were a distinct error code/message for that scenario.
Have the same base URL behavior for all cognitive services
Return some sort of 'wrong location'-error if you have a valid API token but you are pointing to the wrong location
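To illustrate the two behaviors, a small client-side helper could at least centralize the base-URL decision per service and fail loudly when a region is missing. The host names below are placeholders for illustration, not the real Cognitive Services endpoints:

```javascript
// Hypothetical sketch: some services use one global host, others embed the
// resource's region. Host names are placeholders, not official Azure URLs.
const GLOBAL_SERVICES = new Set(["news-search"]);

function baseUrl(service, region) {
  if (GLOBAL_SERVICES.has(service)) {
    return `https://api.cognitive.example.com/${service}`;
  }
  if (!region) {
    // Fail loudly instead of letting the API answer "unauthorized".
    throw new Error(`${service} is region-bound; a region is required`);
  }
  return `https://${region}.api.cognitive.example.com/${service}`;
}

console.log(baseUrl("news-search"));             // one URL for all locations
console.log(baseUrl("sentiment", "westeurope")); // region-bound URL
```

This does not fix the inconsistency on Azure's side, but it turns a confusing remote "unauthorized" into an immediate, descriptive local error.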
Insufficiently documented SDKs
Azure offers SDKs for using their services. We gave the JS SDK for the Cognitive Services a try. Here we stumbled upon two sides of the same coin: first, props to the developers building these SDKs; they are straightforward and do what they should. Even the code itself looks good...but why the hell do I have to read the SDK's source to find all the options its functions offer? If you stick to the documentation provided via the GitHub readme or npm, you only get a fraction of the functionality. We were confused that Microsoft's own SDKs seemed not to be API-complete. Looking into the code, we saw that they actually are API-complete and offer far more options than documented.
Please Microsoft: Properly document your functionalities!
IMO there must be deep problems with the internal release processes at Azure. It is not acceptable that an IT company that has been in the industry this long allows itself such a basic mistake. You should not release your products (and I count the SDKs as such) without proper documentation.
"Code Examples"
During our trial-and-error period trying to get the JS SDK running, we stumbled upon the quickstart guide for the Cognitive Services: "Quickstart: Analyze a remote image using the REST API with Node.js in Computer Vision"
Instead of using their own SDK and explaining how to use it, they show you how to manually build an HTTP request in JS. Sure, that can be helpful for new JS coders, but if you have an SDK for that exact purpose...why are you not using it? Looks like the left hand doesn't know what the right hand is doing.
Stick to one way of doing things. If you have an SDK, also use it in your quickstart guides for consistency
Conclusion
In the end, we ported the code back from GCP to an Azure Function (again ~1h of work). We chose JS instead of Python and coded entirely in the web UI...that worked. I now know how real Microsoft business developers do their daily business...never leave the web UI and just accept that life is hard.
Microsoft failed to deliver a good enough experience here and lost me as a potential customer. How can it be that I was able to do the same things in a fraction of the time on GCP? (And keep in mind: it was already 3 a.m., I was super tired, and I had never worked with GCP before either.)
None of the three major players is perfect, and sure, I understand it is hard to deliver fast while keeping quality high in this highly competitive market. But maybe actually going the extra step will help them win in the end.
Once again: This is me only rating the onboarding experience of Azure in particular! No general opinion on Microsoft.
Last one: The Azure web UI didn't work in Chrome. So if you have issues with that, Firefox did the trick for us ;)
Text
VS Code: An In-Depth Review for WordPress Developers
Microsoft gets a bad rap. Over the years, they’ve gotten a reputation as being a bit behind the times and less-than-user-friendly. Years ago, they might have even deserved it. But not anymore. Microsoft’s latest ventures are cross-platform, intuitive, and push the boundaries of the tech. That’s where Visual Studio Code comes in. Since its release in 2015, VS Code has become the de facto code editor for many developers, nudging out Sublime Text and Atom as the top choices. And that’s saying a lot. So let’s walk through why VS Code is so great and how Microsoft regained all our trust.
Visual Studio Code: Open-Source and Loving It
On the surface, Visual Studio Code looks like most other editors out there. Syntax highlighting, dark theme, extensions, etc. But when you dig a bit deeper, you see that unlike many other editors and IDEs out there, the experience you get in VS Code is just smooth and — pardon the pun — sublime.
The biggest positive that VS Code has going for it is that it’s open source. But then again, so is Atom (and it’s technically owned by Microsoft, too, since they acquired GitHub). More than that, Microsoft has released it under the MIT license, the most lenient and open of the open-source licenses. For a company that has historically been pretty tight on patents and their intellectual property, this is a huge step.
Because of this licensing, VS Code commands a die-hard community of developers who not only use the software in their professional lives but also contribute to the editor itself or to some of the many extensions and plugins that are available to customize it. There is some debate as to the breadth of the MIT license for Visual Studio Code, but that should not affect users nor the majority of developers.
An All-In-One IDE?
Here’s the question of the hour: Is VS Code a code editor or an IDE? It has built-in Git integration, terminal access and bash, a debugging console, and a special kind of syntax highlighting and code completion called IntelliSense.
All of that comes straight out of the download, with no extensions or customization at all. Usually, features like these, built in and updated by the official development team, are only available in premium apps like PhpStorm. But with VS Code…that line gets blurred. It offers a lot of IDE-like features.
But in the end, it’s not quite a full IDE. You don’t get code refactoring by default, official language-specific updates and future-proofing, or the other heavyweight machinery that an IDE can plow through. That said, there is a Visual Studio IDE. It is a separate, premium product that Microsoft has made for years, and Visual Studio Code is just another member of the Visual Studio family. So if you are looking for a full, heavyweight IDE, you can get one from MS. But this is a pretty close second, honestly.
Also, the existence of the Visual Studio IDE is why devs refer to it as VS Code or simply Code rather than Visual Studio. It would just be too confusing otherwise.
VS Code: Out of the Box
If you haven’t guessed by now, there are a lot of parts that make up VS Code. Let’s start out by looking at the basics and how the whole thing works right out of the box, before adding any kind of extension or customizing anything.
When you open the editor for the first time, you will notice two things:
The layout and design are similar to other editors, and therefore familiar to most folks
It loads up quicker than most other code editors (Atom, we’re looking in your direction)
When you’re finished being amazed at how responsive it is, we can move to the left sidebar. This is where the majority of the additional tools that come with VS Code will live.
The default icons to the side will each open up a new column when clicked that can be resized and customized.
1. Explorer
Your default view in VS Code will be the Explorer tab. In it, you will see a section called Open Editors, which is VS Code slang for documents. Each file you have open is considered a new Editor. So if you have 8 .css files open, you will see a list of 8 editors.
Then you have the list of open programs that might create files to be edited with VS Code. In this example, the only one I have open in the background is Snagit. Beneath that is the Outline, which displays the skeleton of the current file. When you have a gigantic file and need a top-down view of the entire structure, the Outline view actually works a little more smoothly than even the minimap to the right of the screen.
2. Search
The Search feature in VS Code is phenomenal. It’s not that it’s more powerful than other editors (because I have to be honest here: I adore Find/Replace in Sublime Text). It’s that it’s easier and more transparent than other editors.
When you perform the search, each instance of your search term is found at the bottom of the right column. You can then click on a single instance to highlight the search term’s location within the file. (If you CTRL/CMD-Click, it will open up a second instance of the file highlighting the newly chosen line.)
If you choose to replace the term in the second field, the results will show a red, crossed-out version of the search term and a green-tinted replacement in the results. When you click on a find/replace in the results, a comparative diff will appear to preview the changes. This feature is so useful that you will wonder how you ever lived without it.
3. Git
I am going to start by saying that I am probably biased in my approach to Git. I tend to be a command line/bash user, and graphical clients for Git have never really felt right to me. So a lot of the Git integration in other editors and IDEs hasn’t been my cup of tea. However, VS Code’s implementation is a hybrid between the command line and a GUI, and it works surprisingly well no matter which version of Git you prefer. Get it…version of Git?
The great part about the Git integration in VS Code is that it just works. The left column that appears when you click the Git icon is a visual indicator of the status of your repo. You can click the ellipsis (…) to see the Git commands that would normally have to be typed very precisely. You can add, commit, push, and even amend your staged files and work on various branches via the context menu.
Additionally, you do have the option of opening up a bash terminal in the editor itself. There’s a Terminal menu in the navigation bar, and the one inside VS Code is fast, clean, smooth, and pretty useable without having to customize it. You can split into multiple columns if necessary, and keep various directories open in different terminals that you can switch between via dropdown.
The terminal isn’t Git-specific, either. It just works so well with the feature, it felt natural to include it here.
4. Debug Console
The Debug Console is also one of the default features in VS Code that makes it stand out from other code editors. As of this writing, there are 171 debugging environments available to install within VS Code. They don’t provide a count, but I wanted to know and figured you did, too, so I counted manually.
Within the results, you can find debuggers for everything imaginable. JavaScript, CoffeeScript, Java…all the caffeine-branded languages, really. You get Lua environments, Python and Ruby, Docker, PHP, Sass, Less, and…everything. Of all the obscure and/or dead programming languages I tried to find a debugger for, QBasic was the only one that didn’t come up with any results. And no one has used it in a very long time. I really think you’d be hard-pressed to find anything in modern use that isn’t available on the Extensions Marketplace.
5. Extensions Marketplace
All that said, a deeper look into the Extensions Marketplace gives you an idea of the kind of tools you can expect out of the editor’s development community. You can see in the screenshot above at the millions of installations that some extensions have, and if you’re not sure where to begin, sorting by Installation or by Popularity may be your best bet.
You can sort and search by keyword using the @sort parameter. But you can also click the ellipsis (2) for a dropdown with all of your options. The options for managing your own installed extensions live here, too.
Once you find something that you want to install, it’s very simple to do: click the green Install button.
You will then need to Reload the VS Code editor to finish the installation.
That’s it. Once that is complete, your extension is ready to use. Though, you may want to return to the Details tab occasionally because various issues are covered there, often through updated and color-coded tags.
Being able to check dependency and vulnerability status is great, and you can see any open issues with the extensions and how long it generally takes to address them. Not every extension will display all the information, but when they do, it’s incredibly useful.
Keyboard Shortcuts and Keymaps
Maybe the most important part of a code editor is the keyboard shortcuts and keymaps. All of the stuff we’ve already talked about is great, and it’s integral to the success of the editor and the project. But once you get used to a keymap and your fingers use it via muscle memory, swapping to a new one is nearly impossible.
At best, swapping will slow down your project schedule and reduce your efficiency; at worst, your bumbling fingers will commit some catastrophic injection to the codebase.
No matter where you’re coming from when you migrate to VS Code, the community has you covered. Whether it’s from VIM, Emacs, Sublime Text, or even Notepad++, you can keep the shortcuts and keymaps that you’re used to. You can either search the Extensions Marketplace with @recommended:keymaps or go into File – Preferences – Keymaps to bring up the list of available extensions.
And if you have no preference for shortcuts, that’s fine, too. If you feel the need to customize anything (or just want a rundown of what keyboard shortcuts are available in VS Code by default), you can go to File – Preferences – Keyboard Shortcuts.
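As a sketch of what such a customization might look like, user-level overrides live in a keybindings.json file (VS Code tolerates comments in it). The command IDs below are real VS Code commands; the key combinations are just examples, not a recommendation:

```json
// keybindings.json — user-level shortcut overrides.
// The key combinations here are arbitrary examples.
[
  {
    "key": "ctrl+alt+t",
    "command": "workbench.action.terminal.toggleTerminal"
  },
  {
    "key": "ctrl+alt+z",
    "command": "workbench.action.toggleZenMode"
  }
]
```

Anything you bind here takes precedence over the defaults and over keymap extensions, so it is also a good escape hatch when a migrated keymap gets one or two shortcuts wrong.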
Misc. Features You Should Know About
As a general overview, you should be able to see at this point most of what VS Code can offer as a text and code editor. That said, there are a handful of useful things you should know about.
1. The Selection Menu
This is useful no matter what level of dev you are, but it is especially useful if you are new to editors in general. The Selection menu has a number of functions that you will find invaluable.
Specifically, being able to use Add Cursors to Line Ends with a single click is nice, as is being able to go into the menu and Select All Occurrences of a highlighted word, phrase, or snippet within the current file. Most editors have these as shortcuts, but not all have them as clearly labeled or as easily available as VS Code does. It was refreshing to see them so up front, since they are some of the most valuable and prominent commands you’ll use.
2. The Terminal Menu
Just because you work in a code editor doesn’t mean you’re a command line wizard. In fact, you might have looked at the command line and terminal section above and thought you’d never use it.
But take a look at the Terminal menu. Even if you don’t do a lot with it, you will see some basic commands that you can execute from the menu that might help your development along quite a bit.
Just having access to these via a menu instead of needing to know shell commands opens up the terminal and bash and command line in a way that a lot of apps just miss. It is small touches like these that make VS Code really appeal to everyone, not just veteran coders who are coming from VIM or Emacs.
3. Zen Mode
Under the View menu, you will find a submenu called Appearance that contains a Toggle Zen Mode option. The other options under View are worth checking out and experimenting with, but I want to call your attention to Zen Mode because I expect many people have never tried it.
Different editors may call it by different names, but the general idea is that you fill up your entire screen with only the document you’re currently editing. It’s different from a full-screen mode in that you don’t maximize the app, but the document.
It’s hard to showcase the mode with a screenshot because a screenshot can’t really show that the entire screen is covered by the VS Code editor, even the Windows taskbar and the macOS dock. Every pixel of screen real estate is taken up by your current project so that you can focus on it and nothing else.
And if it’s not for you, just hit ESC, and you’re back to your old view.
It may not seem like much, and I used to be a skeptic. But after using a similar feature in Scrivener to write fiction, I am a convert. You can more easily get into a flow state and really get things done. So many kudos to VS Code for implementing Zen Mode so that we can plug in our earbuds and work distraction-free whenever we want (or as much as we can).
Wrapping Up
All things considered, you’d be remiss not to download Visual Studio Code and give it a try. Microsoft has put out what might be the most stable, most supported, quickest, and proportionately lightweight/feature-heavy editor out there. New coders, seasoned developers, or hobbyists who want to find the right tools…VS Code has been made with you in mind. That’s not an easy feat to achieve, but since it has, VS Code is worth the bits and bytes on your hard drive. And maybe even another look at Microsoft if you’d previously written them off.
What are your favorite aspects of VS Code? Have you made the switch?
The post VS Code: An In-Depth Review for WordPress Developers appeared first on Elegant Themes Blog.
Text
What I Like About Craft CMS
Looking at the CMS scene today, there are upwards of 150 options to choose from — and that’s not including whatever home-grown custom alternatives people might be running. The term “Content Management System” is broad and most site builders fit into the CMS model. Craft CMS, a relatively new choice in this field (launched in 2013) stands out to me.
My team and I have been using Craft CMS for the past two years to develop and maintain a couple of websites. I would like to share my experience using this system with you.
Note that this review is focused on our experience with using Craft and as such, no attempt has been made to compare it to other available options. For us, using Craft has been a very positive experience and we leave it up to you, the reader, to compare it to other experiences that you may have had.
First, a quick introduction to Craft
Craft is the creation of Pixel & Tonic, a small software development company based out of Oregon, founded by Brandon Kelly, who is known for premium ExpressionEngine add-ons. While developing some of the most-used add-ons, Pixel & Tonic set out to build their own CMS, initially known as "Blocks." That was all the way back in 2010; during development, the name was changed to Craft CMS.
Looking at the market, we can see that Craft is well adopted. At the time of writing this article, around 70,000 websites use Craft.
A chart showing Craft's market growth over the five-year period.
Craft set out to make life enjoyable for developers and content managers. In 2015, Craft proved this by winning the Best CMS for Developers award from CMSCritics. Over the years, Craft has won multiple awards that show it is on the right path.
When I am asked where Craft fits in the overall CMS landscape, I say it's geared toward small-to-medium-sized businesses with a staff of content managers who don't require a completely custom solution.
At the heart of things, Craft is a CMS in the same vein as WordPress and other traditional offerings — just with a different flavor and approach to content management that makes it stand out from others, which is what we're covering next.
Craft's requirements
Server requirements for a Craft setup are simple and standard. Craft requires the following:
PHP 7.0+
MySQL 5.5+ with InnoDB, MariaDB 5.5+, or PostgreSQL 9.5+
At least 256MB of memory allocated to PHP
At least 200MB of free disk space
Out of the box, you can get Craft up and running fast. You don’t need an extensive PHP or Database background to get started. Hell, you can get away with little-to-no PHP knowledge at all. That makes both the barrier to entry and the time from installation to development extremely small.
It’s both simple and complex at the same time
Craft is unique in that it is both a simple and a complex CMS.
You can use Craft to design and develop complex sites that are built with and rely heavily on PHP, databases, and query optimizations.
However, you can also use Craft to design and develop simple sites where you do none of those things.
This was one of the main selling points for me. It’s simple to get up and running with very little, but if you need to do something more complex, you can. And it never feels like you are “hacking” it to do anything it wasn’t meant to.
Craft abstracts all the field creation and setup into the admin panel. You only need to point it at the right Twig template and then use the fields you connected. Furthermore, it provides localization and multi-site management out of the box, with no need for plugins. This is essentially what makes it different from other content management systems: you can create the structure, the fields, and all the forms without ever touching any code.
Some CMSs like to make a lot of decisions for you, and sometimes that leads to unnecessary bloat. Front- and back-end performance is super important to me and, as such, I appreciate that Craft doesn’t pile on that bloat, instead leaving those decisions up to me should I need them. It provides a full customization experience that supports beginners right out of the box but doesn’t constrain folks at the professional level.
Craft’s templating engine
Some developers are not keen on this, but Craft uses Twig as its template engine. The word “uses” should be read as a requirement, as there is no option to write raw PHP anywhere inside a template. Here are my thoughts on that:
It is standardized in a way that, when I look at my team’s pull requests, I don’t expect to see 100 lines of custom PHP that make no sense. I only see the code related to templating.
Twig is already powerful enough that it will cover nearly all use cases while being extensible for anything else.
Let’s say you’re not digging Twig, or you would rather use one of the latest technologies (hello, static site generators!). Craft’s templating system isn’t the only way to get content out of Craft. As of Craft 3.3, it provides a “headless” mode, and GraphQL is built in with Craft’s Pro features. That means you can use tools like Gatsby or Gridsome to build static sites with the comfort of Craft CMS. That brings Craft in line with the likes of WordPress, which provides its own REST API for fetching content to use elsewhere.
There's a fully functional GraphQL editor built right inside the Craft UI.
Speaking of REST, there is even an option for that in Craft if, say, you are not a fan of GraphQL. The Element API is a REST read-only API that is available via the first-party Element API plugin. Again, Craft comes with exactly what you need at a minimum and can be extended to do more.
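As a rough sketch of the headless route, a static-site build step could fetch entries over GraphQL with plain JavaScript. The query shape follows Craft's entries query; the endpoint URL, token placeholder, and section handle are assumptions for illustration:

```javascript
// Sketch of querying a headless Craft (3.3+) install over GraphQL.
// Endpoint, token, and the "blog" section handle are illustrative assumptions.
const query = `
  query Posts($section: [String]) {
    entries(section: $section) {
      title
      slug
    }
  }
`;

// Build the fetch options for a GraphQL POST request.
function buildRequest(section, token) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ query, variables: { section: [section] } }),
  };
}

// Usage (endpoint path depends on your Craft routing config):
// fetch("https://example.com/api", buildRequest("blog", process.env.CRAFT_TOKEN))
//   .then((res) => res.json())
//   .then(({ data }) => console.log(data.entries));
```

The same request body can be tried interactively first in the GraphQL editor built into the Craft UI, then dropped into a Gatsby or Gridsome source step.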
Craft’s extensibility
This brings me to my next point: Craft CMS is super extensible. It is built on Yii, a well-known PHP framework that is robust and fast. This matters because all extensibility happens through modules or plugins written against the Yii and Craft APIs. Modules are a concept inherited from Yii modules; they provide a way to extend core functionality without changing the source. Plugins, on the other hand, are a Craft concept that does the same thing as modules but can be installed, disabled, and removed. If you would like to read more about this, you can find it in Craft’s docs.
Both modules and plugins have full access to Craft and Yii’s API. This is a huge bonus, as you can benefit from Yii’s community and documentation. Once you get used to Yii, writing plugins is easy and enjoyable. My team has built multiple custom plugins and modules over the last two years, like a Pardot form integration, a Google reCAPTCHA integration, custom search behavior, and others. Essentially, the sky is the limit.
Writing plugins and modules is covered in the docs, but I think this is where Craft’s documentation has room to grow. I would recommend opening a well-known plugin on GitHub to get a sense of how it’s done, because I’ve found that to be much more helpful than the docs.
Initially, you may find this aspect of the system difficult, but once you understand the structure it gets easier, because the code structure essentially consists of models, views, and controllers. It is like building a small MVC app inside your CMS. Here is an example of a plugin structure I’ve worked with:
. ├── assetbundles ├── controllers ├── migrations ├── models ├── records ├── services ├── templates │ ├── _layouts │ └── _macros ├── translations │ └── en ├── variables ├── icon-mask.svg ├── icon.svg └── Plugin.php
If you don’t feel like writing PHP and tinkering with Yii/Craft, you can always download plugins from the official Craft plugin store. There is a wide variety of plugins, from image optimization to extensions that build on the WYSIWYG editor. One of many things Craft got right is that you can try paid plugins in development mode for as long as you like, rather than having to make a purchase first.
The Craft plugins screen.
Over the course of two years, we have tried multiple plugins; there are a few that I not only recommend but find myself using on almost every project.
ImageOptimize - This is a must for performance enthusiasts as it provides a way to automatically transform uploaded images to responsive images with compression and convert to more modern formats.
Navigation - Craft doesn’t come with navigation management built right in, even though you technically can do it with custom fields. But Verbb did an awesome job with this simple plugin and for us it’s one of the very first plugins we reach for on any given project.
Seomatic - This is to Craft what the Yoast SEO plugin is to WordPress: an out-of-the-box solution for all your SEO needs.
Redactor - This is a must and should be included in every project. Craft doesn’t come with a WYSIWYG editor out of the box but, with Redactor, you get a Redactor field that includes one.
Super Table - This powerful plugin gives you an option to create repeatable fields. You can use built-in Craft field types to create a structure (table) and the content manager creates rows of content. It reminds me of ACF Repeater for WordPress.
Craft’s author experience
While we’ve covered the development experience so far, the thing that Craft got extremely right — to the point of blowing other CMSs out of the water, in my view — is the author experience. A CMS can do all kinds of wonderful things, but at the end of the day it has to be nice to write in.
Craft provides a straightforward set of options to configure the site right in the admin.
The whole concept of the CMS is that it is built from two simple things, Fields and Sections: fields are added to sections, and entries are created within them by content managers.
Craft's default post editor is simple and geared toward blog posts. Need more fields or sections? Those are available to configure in the site settings, making for a more open-ended tool for any type of content publishing.
One of the neatest author features is version control. "Wait, what?" you ask. Yes, all content is version controlled in a way that lets authors track changes and roll back to previous versions for any reason at all.
Craft shows revisions for each entry.
At any point in time, you can go back to any revision and use it as the current one. You don't know how much you need this feature until you've tried it. For me, it brings a sense of security that you can't lose someone's edits or changes, the same way Git does for developers.
The fun doesn't stop here, because Craft nailed one of the hardest things (in my opinion) about content management, and that is localization. People still find this hard in 2020 and usually give up, because it is difficult both to implement and to present properly to authors in the UI.
You can create as many sites as you want.
Oh, and you can host multiple websites in a single Craft 3 instance. You can define one or more sites at different domains, with different versions of the entry content and different sets of templates. Like everything in Craft, it is made so simple and open-ended (in a good way) that what the other sites become is up to you. You can create a site with the same language but different content, or create a site in another language, solving the localization problem.
All the features above are built into Craft, which for me is a must for a good author experience. As soon as you start patching essential author functionality with plugins, a great author experience is lost, because there are usually multiple plugins (ways) to add a given piece of functionality, which leads to different author experiences across instances of the same platform.
Craft’s community
It’s worth underscoring the importance of having a community of people you can turn to. To some degree, you’re probably reading this post because you enjoy learning from others in the development community. It’s no different with CMSs, and Craft has an awesome community of developers.
Craft's Stack Exchange is full of questions and answers, but a lot of the information needs to be updated to reflect Craft 3.
At the same time, the community is still small (compared to, say, WordPress) and doesn’t have a long track record — though there are many folks in the community who have been around quite a while, having come from ExpressionEngine. It’s not just because Craft itself is relatively new to the market. It’s also because not everyone posts on the Craft CMS Stack Exchange, to the extent that many of the older answers haven’t even been updated for Craft 3. You’ll actually find most of the community hanging out on Discord, where even the creators of Craft, Pixel & Tonic, are active and willing to answer questions. It is also very helpful to see Craft core members and big plugin creators, like Andrew from nystudio107 (shout out to a great performance freak), there to assist almost 24/7.
Craft's discord has always someone to help you. Even the core team responds often.
One thing I also want to touch on is the limited learning resources available but, then again, you hardly need them. As I said earlier, the combination of Craft and Twig is simple enough that you won’t need a full course on how to build a blog.
Craft's conference, Dot All, is a great resource all its own. Chris attended last year with a great presentation, which is available to the public.
And, lastly, Craft uses and enforces open source. For me, open source is always a good thing because you expose your code to more people (developers). Craft did this right. The whole platform and also plugins are open source.
Pricing
This is the elephant in the room because there are mixed feelings about charging for content management systems. And yes, Craft has a simple pricing model:
It’s free for a single user account, small website.
It’s $299 per project for the first year of updates. It’s only $59 each year after that, but they don't force you to pay for updates and you can enable license updates for an additional year at any time at the same price.
Craft's Solo version is totally capable of meeting the needs of many websites, but the paid Pro option is a cost-effective upgrade for advanced features and use cases.
I consider this pricing model fair and not particularly expensive — at least to the point of being prohibitive. Craft offers a free license for a small website you can build for a friend or a family member. On the other hand, Craft is more of a professional platform that is used to build mid-size business websites and as such their license cost is negligible. In most cases, developers (or agencies) will eat up the initial cost so that clients don’t have to worry about this pricing.
Oh, and kudos to Craft for providing an option to try the Pro version for free on a local domain. This also applies to all plugins.
Conclusion
To conclude, I would like to thank Craft CMS and the Pixel & Tonic team for an awesome and fun ride. Craft has satisfied almost all our needs and we will continue to use it for future projects. It’s flexible enough to fit each project and feel like a CMS built for that exact use case.
It boils down to this: Craft, for me, is a content management framework. Out of the box, it is nothing more than nuts and bolts that need to be assembled to the user's needs. This is the thing that makes Craft stand out, and why it provides both a great author and a great developer experience.
As you saw in the licensing model, it’s free for a single user account, so try it out and leave your feedback in the comments.
The post What I Like About Craft CMS appeared first on CSS-Tricks.
0 notes
Text
Microsoft's patent move: Giant leap forward or business as usual?
New Post has been published on http://tradewithoutfear.com/microsofts-patent-move-giant-leap-forward-or-business-as-usual/
When Microsoft surprised everyone by releasing its entire 60,000 patent portfolio to the open-source community, someone asked me if I thought the move would finally convince everyone Microsoft is truly an open-source friendly company.
“Oh no,” I replied.
Must read: Microsoft open-sources its patent portfolio
Sure enough, some folks are still convinced that Microsoft is intending to “embrace, extend, and extinguish” open source. Many others believe, however, that Microsoft has truly evolved and has become an open-source company.
Is it a trap?
On the purely positive side, we have Jim Zemlin, The Linux Foundation‘s executive director:
“We were thrilled to welcome Microsoft as a platinum member of the Linux Foundation in 2016 and we are absolutely delighted to see their continuing evolution into a full-fledged supporter of the entire Linux ecosystem and open-source community.”
Patrick McBride, Red Hat‘s senior director of patents added, “What a milestone moment for open source and OIN! Microsoft is joining a unique shared effort that Red Hat has helped lead to bring patent peace to the Linux community. Developers and customers will be the beneficiaries. Now is a perfect time for others to join as well.”
On the haters’ side, there is Florian Mueller, editor of the FOSSPatents blog, who thinks:
“‘Microsoft loves Linux’ is a lie. And now Microsoft wants us to think that Microsoft battles patent trolls. This too is a Microsoft lie.”
He also said joining the OIN, which Mueller considers a pro-patent IBM front group, “imposes no actual new constraints on them.” This is just a cynical PR move from Mueller’s viewpoint.
Also: Open source: Why it’s time to be more open
Other anti-Microsoft die-hards on Reddit, Twitter, and other social networks also insist that this new Microsoft is the same as the old Microsoft. Or, as one person, harking back to Star Wars, remarked: “It’s a trap!”
Microsoft finally gets open source
At Microsoft, the company insists that it has been changing its open-source ways for years. In a recent Open Source Virtual Conference keynote, John Gossman, a distinguished Microsoft Azure team engineer, described former Microsoft CEO Steve Ballmer’s 2001 comment that Linux was “a cancer” as being “a fundamental misunderstanding of open source.”
Also: Open-source licensing war: Commons Clause
With Satya Nadella as CEO, Microsoft finally gets open source.
What the patent experts are saying…
But it’s not just Microsoft staffers who are saying Microsoft’s attitude toward open-source has evolved. Andrew “Andy” Updegrove, patent expert and founding partner at the Boston-area law-firm Gesmer Updegrove, said:
“While this may seem surprising to those who have not followed Microsoft’s evolution in recent years, it is in fact more a formal recognition of where they, and the realities of the IT environment are today.”
Daniel Ravicher, executive director of the Public Patent Foundation (PUBPAT), whose work was once used by Ballmer against Linux, wasn’t surprised by this move:
“With the acquisition of GitHub and other things the company has done, they’ve really changed their tune in the past 15 years. They also hired as an in-house attorney a former staff attorney of the Software Freedom Law Center (SFLC). It may be like the Korean War that doesn’t have a formal end date, but I think now Microsoft and open-source software are on the same page and working together.”
Prominent open-source attorney and Columbia University law professor, Eben Moglen, also sees this as a move towards patent peace. Moglen remarked:
“Microsoft’s decision signals the transition from the period of patent war to the making of industry-wide patent peace for free and open-source (FOSS) software. Microsoft’s participation in the OIN licensing structure will be the tent pole for the extension of OIN’s big tent across the world of IT. For SFLC and other parties whose job it is to secure the interests of individual FOSS programmers and their non-profit projects, this is also the moment of opportunity to ensure their safety and respect for their mode of development across the entire industry, including by companies who continue to engage in patenting their own R&D.”
Also: Open source is 20: How it changed programming
Why is Microsoft doing this when it makes money from patents?
Scott Guthrie, Microsoft’s executive vice president of the cloud and enterprise group, described the decision as a “fundamental philosophical change” — resulting from an understanding that open-source is inherently more valuable to Microsoft than patent profits.
John Ferrell, chair at the Silicon Valley technology law firm Carr & Ferrell, thinks there may be a more pragmatic reason behind Microsoft’s move:
“Microsoft’s gesture to donate 60,000 patents to the OIN is indeed a philosophical change for this giant, but the change likely is rooted in the realization that the Company is much better suited to fight in the marketplace rather than to fight in the courtroom. Virtually every patent-owning company that gets into a patent battle with Microsoft is fighting from a position of asymmetrical advantage. Where damages are based on a percent of sales, Microsoft almost always has more to lose. Especially companies that leverage open-source software, these companies tend to be small and patent infringement for Microsoft is difficult and expensive to police.”
Ferrell, the litigator, continued:
“From a defensive standpoint, small companies with one or two patents arguably infringed by Microsoft are especially annoying and potentially damaging to this goliath. Microsoft is a huge target and is constantly barraged with patent lawsuits by small and large companies trying to gain a foothold or monetize their development efforts at the expense of Microsoft’s deep pockets.”
An additional reason for Microsoft’s change of heart, according to Rafael Laguna, CEO of Open-Xchange, an open-source network services company, is:
“Microsoft boss Nadella wants to buy new credit in the open-source industry, distancing the company from the business model and practises of his predecessors, i.e. Gates’ and Ballmer’s sincere dislike of open source developers.” Nadella, however, “recognizes that Microsoft’s future revenue will come from providing cloud services, rather than selling operating system licenses. And for cloud services, Linux is now the operating system of choice – underpinned by the fact that already half of the Microsoft Azure services are based on Linux today.”
Also: Open-source vulnerabilities which will not die: Who is to blame?
Will this bring peace to our time?
Bradley Kuhn, president of the Software Freedom Conservancy (SFC), appreciates Microsoft joining the OIN patent non-aggression pact, noting: “Perhaps it will bring peace in our time regarding Microsoft’s historical patent aggression.”
Microsoft needs to do more, Kuhn added, “We call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community.”
Specifically, he said, “We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later.”
Exfat, a file system, was open-sourced by Samsung with the SFC’s help in 2013. But Kuhn said, “Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.”
It should be noted that, when asked about FAT-related patents, Erich Andersen, Microsoft’s corporate vice president and chief intellectual property (IP) counsel, said:
“We’re licensing all patents we own that read on the ‘Linux system.'” And, in addition, all of Microsoft’s 60,000 granted patents relating to the Linux system are covered by the OIN’s requirements.
In a subsequent e-mail Kuhn noted, “Ultimately, the OIN license agreement is quite narrowly confined to the ‘OIN Linux System Definition’ and therefore doesn’t assure that patent aggression must stop immediately; rather, Microsoft is only required to stop for those patents that read on technologies in the OIN Linux System Definition.”
So, for example, BSD specific code, wouldn’t necessarily be covered.
Therefore, Kuhn suggested:
“Expanding the ‘Linux System Definition’ would be a useful way to solve this problem through OIN.”
Historically, OIN has been expanding the Linux System Definition.
Kuhn concluded:
“More importantly, Microsoft can help solve it unilaterally by submitting patches that implement technology from their patents into upstream projects that are already contained in the Linux System Definition. I suggest they start with upstreaming exfat in Linux themselves.”
Also: Hollywood goes open source
Conclusion
So, while there are a few people who think Microsoft is up to no good, the experts agree that this is a laudable move by Microsoft to show its open-source bona fides. That’s not to say some still want to see more proof of Microsoft’s intentions, but overall, people agree this is a major step forward for Microsoft, Linux, and open-source intellectual property law regulation.
Related stories:
0 notes
Text
Help! I can’t reproduce a machine learning project!
Have you ever sat down with the code and data for an existing machine learning project, trained the same model, checked your results… and found that they were different from the original results?
Not being able to reproduce someone else’s results is super frustrating. Not being able to reproduce your own results is frustrating and embarrassing. And tracking down the exact reason that you aren’t able to reproduce results can take ages; it took me a solid week to reproduce this NLP paper, even with the original authors’ exact code and data.
But there's good news. Reproducibility breaks down in three main places: the code, the data, and the environment. I’ve put together this guide to help you narrow down where your reproducibility problems are, so you can focus on fixing them. Let’s go through the three potential offenders one by one, talk about what kinds of problems arise, and then see how to fix them.
Non-deterministic code
I’ve called this section “non-deterministic code” rather than “differences in code” because in a lot of machine learning or statistical applications you can end up with completely different results from the same code. This is because many machine learning and statistical algorithms are non-deterministic: randomness is an essential part of how they work.
If you come from a more traditional computer science or software engineering background, this can be pretty surprising: imagine if a sorting algorithm intentionally returned inputs in a slightly different order every time you ran it! In machine learning and statistical computing, however, randomness shows up in many places, including:
Bagging, where you train many small models on different randomly-sampled subsets of your dataset; and boosting, where only the first data subset is completely random and the rest are dependent on it
Initializing weights in a neural network, where the initial weights are sampled from a specific distribution (generally using the method proposed by He et al, 2015 for networks using ReLU activation)
Splitting data into testing, training and validation subsets
Any methods that rely on sampling, like Markov chain Monte Carlo-based methods used for Bayesian inference or Gaussian mixture models
Pretty much any method that talks about “random”, “stochastic”, “sample”, “normal” or "distribution" somewhere in the documentation is likely to have some random element in it
Randomness is your friend when it comes to things like escaping local minima, but it can throw a wrench in the works when you’re trying to reproduce those same results later. In order to make machine learning results reproducible, you need to make sure that the random elements are the same every time you run your code. In other words, you need to make sure you “randomly” generate the same set of random numbers. You can do this by making sure to set every random seed that your code relies on.
Random seed: The number used by a pseudorandom generator to determine the order in which numbers are generated. If the same generator is given the same seed, it will generate the same sequence every time it restarts.
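As a quick sanity check, the seed-determines-sequence property is easy to demonstrate with Python's built-in `random` module:

```python
import random

# Two independent generators seeded with the same value produce
# identical "random" sequences.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.randint(0, 100) for _ in range(5)]
seq_b = [gen_b.randint(0, 100) for _ in range(5)]
assert seq_a == seq_b

# Re-seeding a generator with the same seed replays the sequence too.
gen_a.seed(42)
assert [gen_a.randint(0, 100) for _ in range(5)] == seq_a
```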
Unfortunately, if you’re working from a project where the random seed was never set in the first place, you probably won’t be able to get those same results again. In that case, your best bet is to retrain a new model that does have a seed set. Here’s a quick primer on how to do that:
In R: Most packages depend on the global random seed, which you can set using `set.seed()`. The exceptions are packages that are actually wrappers around software written in other languages, like XGBoost or some of the packages that rely heavily on rcpp.
In Python: You can set the global random seed in Python using `seed()` from the `random` module. Unfortunately, most packages in the Python data science ecosystem tend to have their own internal random seed. My best piece of advice is to quickly search the documentation for each package you’re using and see if it has a function for setting the random seed. (I’ve compiled a list of the methods for some of the most common packages in the “Controlling randomness” section of this notebook.)
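Here is a minimal sketch of that advice. The `set_global_seeds` helper name is mine, and the suggestions in its docstring (e.g. `numpy.random.seed`, `torch.manual_seed`) are the calls you would swap in for whatever packages your own project actually uses:

```python
import os
import random

def set_global_seeds(seed: int) -> None:
    """Set every random seed this script relies on.

    Extend this with the seed calls for whichever packages you use,
    e.g. numpy.random.seed(seed) or torch.manual_seed(seed).
    """
    random.seed(seed)  # Python's global pseudorandom generator
    # Note: PYTHONHASHSEED only affects hash randomization if it is set
    # before the interpreter starts; setting it here documents intent.
    os.environ["PYTHONHASHSEED"] = str(seed)

# Running the same "experiment" twice now yields identical draws.
set_global_seeds(0)
first_run = random.sample(range(1000), 5)

set_global_seeds(0)
second_run = random.sample(range(1000), 5)
assert first_run == second_run
```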
Differences in Data
Another thing that can potentially break reproducibility is differences in the data. While this happens less often, it can be especially difficult to pin down. (This is one of the reasons Kaggle datasets have versioning and why we’ve recently added notes on what’s changed between versions. This dataset is a good example: scroll down to the “History” section.)
You might not be lucky enough to be working from a versioned dataset, however. If you have access to the original data files, there are some ways you can check that you’re working with the same data the original project used:
You can use `cmp` in Bash to make sure that all the bytes of two files are exactly the same.
You can hash the files and then compare the hashes. You can do this with the hashlib library in Python, or with either the UNF package (for tabular data) or the md5sum() function in R. (Do note that there’s a small chance that two different files might produce the same hash.)
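A minimal sketch of the hashing approach in Python (chunked reads keep memory use flat even for large data files; the file names here are throwaway examples):

```python
import hashlib
import tempfile
from pathlib import Path

def file_md5(path) -> str:
    """Return the MD5 hex digest of a file, reading it in 8 KB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with two throwaway copies of the "same" dataset.
tmp = Path(tempfile.mkdtemp())
(tmp / "original.csv").write_bytes(b"id,value\n1,10\n2,20\n")
(tmp / "copy.csv").write_bytes(b"id,value\n1,10\n2,20\n")
assert file_md5(tmp / "original.csv") == file_md5(tmp / "copy.csv")

# A single changed byte produces a completely different digest.
(tmp / "copy.csv").write_bytes(b"id,value\n1,10\n2,21\n")
assert file_md5(tmp / "original.csv") != file_md5(tmp / "copy.csv")
```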
Another helpful thing is knowing what can introduce differences in your data files. By far the biggest culprit here is opening data in word processing or spreadsheet software. I’ve personally been bitten in the butt by this one more than once. A lot of the nice, helpful changes made to improve the experience of working with data files for humans can be just enough to break reproducibility. Here are two of the biggest sneaky problem areas.
Automatic date formatting: This is actually a huge problem for scientific researchers. One study found that gene names have been automatically converted to dates in one-fifth of published genomics papers. In addition to changing non-dates into dates like that, the format of your dates will sometimes be edited to be more in line with your computer locale, like 6/4/2018 being changed to 4/6/2018.
Character encodings: This is an especially sneaky one because a lot of text editors will open files with different character encodings with no problems… but then save them using whatever your system default character encoding is. That means that your text might not look any different in the editor, but all the underlying bytes have been completely changed. Of course if you always use UTF-8 this isn’t generally a problem, but that’s not always an option.
Because of these problems, I strongly recommend that you don’t check or edit your data files in word processors or spreadsheet software. Or, if you do, do it with a second copy of your data that you can discard later. I tend to use a text editor to check out datasets instead. (I like Gedit or Notepad++, but I know better than to wade into an argument about which text editor is better than the other. 😉) If you’re comfortable working in the command line, you can also check your data there.
Differences in environments
So you’ve triple-double-checked your code and data, and you’re 100% sure that they aren’t accounting for differences between your runs. What’s left? The answer is the computational environment. This includes everything needed to run the code, including things like what packages and package versions were used to run the code and, if you reference them, file directory structures.
Getting a lot of “File not found” errors? They’re probably showing up because you’re trying to run code that references a specific directory structure in a directory with a different structure. Problems related to the file directory structures are pretty easy to fix: just make sure you’re using relative rather than absolute paths and configure your . (If you’re unsure what that means, this notebook goes into more details.) You may have to go back and redo some things by hand, but it’s a pretty easy fix once you know what you’re looking for.
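One way to apply the relative-path advice in Python (the directory and file names here are hypothetical):

```python
from pathlib import Path

# Brittle: an absolute path that only exists on the original machine.
# train = open("/Users/someone/project/data/train.csv")

# Portable: resolve everything relative to this script's own location,
# so the project can be moved anywhere as long as its internal layout
# stays the same. (__file__ is undefined in notebooks, hence the fallback.)
PROJECT_ROOT = (
    Path(__file__).resolve().parent if "__file__" in globals() else Path.cwd()
)
DATA_DIR = PROJECT_ROOT / "data"

train_path = DATA_DIR / "train.csv"
print(train_path)
```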
It’s much harder to fix problems that show up because of dependency mismatches. If you’re trying to reproduce your own work, hopefully you still have access to the environment you originally ran your code in. If so, check out the “Computing environment” section of this notebook to learn how you can quickly get information on what packages (and which versions) you used to run the code. You can then use that list to make sure you’re using the same package versions in your current environment.
Pro tip: Make sure you check the language version too! While major versions will definitely break reproducibility (looking at you, Python 2 vs. Python 3), even subversion updates can introduce problems. In particular, differences in the subversion can make it difficult or impossible to load serialized data formats, like pickles in Python or .RData files in R.
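A small sketch for capturing that environment information from inside Python itself (assumes Python 3.8+, where `importlib.metadata` is in the standard library):

```python
import platform
from importlib import metadata

# Record the interpreter version down to the patch level; even a
# subversion difference can make pickles or .RData files unloadable.
print("python", platform.python_version())

# Record the exact version of every installed package, pip-freeze style.
for dist in sorted(
    metadata.distributions(),
    key=lambda d: (d.metadata["Name"] or "").lower(),
):
    print(f"{dist.metadata['Name']}=={dist.version}")
```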
The amount of information you have about the environment used to run the code will determine how difficult it is to reproduce. You might have...
Information on what was in the environment using something like a requirements or init file. This takes the most work to reproduce, since you need to handle getting the environment set up yourself.
A complete environment using a container or virtual machine. These bundle together all the necessary information, and you just need to get it set up and run it.
A complete hosted runnable environment (like, say, a Kaggle Kernel ;). These are the easiest to use; you just need to point your browser at the address and run the code.
(This taxonomy is discussed in depth in this paper, if you’re curious.)
But what if you don’t already have access to any information about the original environment? If it’s your own project and you don’t have access to it anymore because you dropped it into a lake or something, you may be out of luck. If you’re trying to reproduce someone else’s work, though, the best advice I can give you is to reach out to the person whose work you’re trying to reproduce. I’ve actually had a fair amount of success reaching out to researchers on this front, especially if I’m very polite and specific about what I’m asking.
_____
In the majority of cases, you should be able to track down problems in reproducibility to one of these three places: the code, data or environment. It can take a while to laser in on the exact problems with your project, but having a rough guide should help you narrow down your search.
That said, there are a small number of cases where it’s literally impossible to reproduce modeling results. Perhaps the best example is deep-learning projects that rely on the cuDNN library, which is part of the NVIDIA Deep Learning SDK. Some key methods used for CNNs and bi-directional LSTMs are currently non-deterministic. (But check the documentation for up-to-date information.) My advice is to consider not using CUDA if reproducibility is a priority. The tradeoff is that without CUDA your models might take much longer to train, so if you’re prioritizing speed instead, this will be an important consideration for you.
Reproducibility is one of those areas where the whole “an ounce of prevention is worth a pound of cure” idea is very applicable. The easiest way to avoid running into problems with reproducibility is for everyone to make sure their work is reproducible from the get-go. If this is something you’re willing to invest some time in, I’ve written a notebook that walks through best practices for reproducible research here to get you started.
0 notes
Text
On Page SEO with Jeffrey Smith
On-page SEO is the optimization done on your actual web pages to help them rank better in the search engines. On each page of your site, this includes making sure the page uses schema code, a page title, a meta description for the search snippet, headline tags, and more.
It also includes the structures of navigation on your site, how your pages link to each other, and other under the hood aspects.
Jeffrey Smith is one of the best in the world at this and a mentor to Moon & Owl Marketing. He is the founder of the SEO Bootcamp course, which focuses entirely on on-page SEO, and the developer of SEO Ultimate, a super powerful plugin we now use instead of the Yoast SEO or All in One SEO plugins. It is much more robust and powerful.
He recently did a webinar with our buddies at Semantic Mastery. It’s quite good. Even if you are a business owner and don’t want to do your own SEO, this can be quite informative about what your SEO agency should be doing. Of course, if you hire Moon & Owl for your search engine optimization, you can be sure this is being done in the amazing manner he covers in the video below.
youtube
Here is a transcript of the video:
okay so here we are on attempt number 00:04 two sorry guys I don’t know what 00:07 happened there was a breakdown in Maya 00:09 with my partner’s somehow someway we got 00:12 set up on webinarjam for this webinar 00:13 which I don’t know why I I thought I was 00:16 really clear about not wanting to ever 00:18 use it again but somehow someway it got 00:21 set up on their webinarjam and I didn’t 00:23 realize that until just a moment ago so 00:24 I apologize for that guys Geoffrey 00:27 should be coming on here in just a few 00:28 moments so just bear with me while we 00:30 get this sorted out it should be on any 00:44 moment guys bear with me I’m trying to 00:46 chat with him on skype we’ve got quite a 00:49 presentation for you guys today this 00:51 dude is a as a freaking beast when it 00:53 comes to on-page SEO stuff it’s 00:55 absolutely incredible I’ll just give you 00:56 a little bit of backstory while we’re 00:57 waiting on him to come over I started 01:02 using the SEO ultimate plug-in recently 01:04 and he’s the developer behind that and 01:06 uh let’s make sure this is live sorry 01:09 guys I’m just making sure that we’re 01:11 good yes we’re live okay anyways I 01:16 started using the SEO ultimate plugin 01:17 and I started going through his training 01:20 the SEO ultimate plus plugin which is 01:22 the premium plug-in I started going 01:23 through the training on how to use it 01:25 because it’s pretty advanced there’s a 01:26 lot of stuff a lot of features it’s way 01:28 better than Yoast in my opinion and and 01:31 I started going through his training 01:32 that he has like on his YouTube channel 01:33 and I was just absolutely blown away by 01:36 the level of detail this guy knows 01:38 on-page architecture like silo 01:40 architecture and keyword clustering and 01:42 stuff like that better than anybody I 01:43 know and I know people have been asking 01:46 us to do an on-page SEO course for years 01:48 and we haven’t done it because we’ve 01:50 
covered it still can’t get in the 01:52 hangout ah sorry guys 01:58 close Chrome 02:04 entirely they reopen and try again 02:13 anyway so let me continue on so if we’re 02:16 about to use people have been asking us 02:17 to do an on-page SEO course and we 02:19 haven’t done it only because we’ve 02:20 covered it in various trainings the 02:22 mastermind master class stuff like that 02:25 where it’s been part of another course 02:26 where we you know cover how to build out 02:29 the sights with silo architecture 02:30 internal linking keyword research and 02:33 that kind of thing 02:34 but when you know we didn’t have a 02:35 specific products you know set up just 02:38 for on-page and we’ve had it on the list 02:43 to do of like to-do items for you know 02:46 training products for a long time but we 02:49 just never got to it because of all the 02:50 other projects we got going on and so 02:51 when I started going through SEO 02:53 ultimate plus training about just three 02:56 or four weeks ago because I’m switching 02:57 over am using that plug-in now instead 02:59 of Yoast for all my SEO but for all my 03:01 WordPress sites 03:02 why don’t we when I started looking at 03:04 that I was like oh wow this is really 03:05 good so I reached out to Geoffrey I I 03:06 knew him from years ago and and he you 03:11 know he said oh man by the way I got 03:12 this new course coming out for SEO 03:14 bootcamp in its I’d like for you to take 03:17 a look at it and I took a look at and I 03:19 was absolutely floored with how good it 03:20 was and there’s no reason for us to 03:22 produce training when there’s something 03:24 that’s good out there so I fully endorse 03:25 it and that’s what he’s gonna be coming 03:27 on to give a presentation about he’s 03:28 gonna be talking about on-page keyword 03:30 clustering all that kind of stuff so 03:31 you’ll get to see some of that 03:32 throughout the presentation and 03:33 obviously there is gonna be an offer at 03:34 the end plus 
we’re gonna have some 03:36 bonuses thrown in and that kind of stuff 03:38 so stick around if we can get it to work 03:40 again I apologize for this guy’s if he 03:43 uh if he gets back in here in just a 03:45 moment he’s I’m assuming he’s closing 03:47 chrome right now and trying to get back 03:48 on if not then we’ll just have to 03:50 reschedule here he comes now okay guys 03:52 how you doing I’m doing 03:54 much better now are you sweating are you 04:00 sweating over there Geoffrey yes I am 04:04 alright cool now I’m like the total I’m 04:07 a total noob when it comes to Google 04:09 Hangouts – yeah well listen man I’m 04:12 gonna take full ownership of the fact 04:14 that the first 15 minutes or 10 minutes 04:16 of this has been my fault or 04:18 our fault because we had it set up on 04:20 the wrong platform so it’s not you it’s 04:22 us all right so that’s not let’s not 04:25 waste any more time man I want you to 04:27 get I’ve already kind of given an 04:29 introduction to who you are Jeffrey and 04:30 um I kind of gave a little bit of a 04:32 backstory as to how I came about knowing 04:35 which you’re going to be presenting 04:36 today and how I asked you specifically 04:38 to put this together for for our 04:39 audience because I think they’re going 04:41 to eat it up 04:41 so I think I built you up big enough all 04:45 right man let me uh go ahead and make 04:47 sure I’ve got the right screen up that’s 04:49 one of those things you got to do let me 04:52 see let me go ahead and try to do my 04:56 screen shooter sure we’ll make sure it’s 04:58 the right one all right 05:02 okay now I’m seeing your Google screen 05:05 okay and I’m gonna go ahead and start my 05:07 presentation at this point there you go 05:11 I’m you see the SEO Buchan yes all right 05:16 okay so I realize all right so work 05:19 we’re at it we’re on into it now guys 05:21 I’m gonna turn the floor over to Jeffrey 05:23 Jeffrey it’s yours I’ll be here if you 05:25 need me for anything 
otherwise I'll just be answering questions in the background as required.

All right, sounds good, sounds good. First and foremost I want to say thank you for allowing me to be here; really, really pleased to actually speak to the audience. Let me make sure I do one last thing: I need to turn off my Skype, otherwise it's going to drive me absolutely nuts, so just give me one second. All right.

So, most of you are completely familiar with on-page SEO basics, but really this is about fundamentals, and some of the things I'm going to be sharing today is how to rank your website using topical relevance. The main thing about topical relevance is that it's one of the first things you should do before you even think about doing any kind of off-page. Let me jump back to my slides here. Okay, first of all, can you guys hear me?

We can hear you fine, yes.

Okay, perfect, perfect. All right, so the objective is, we're really going to show you how to rank your website...

Let me stop you, man; we're still seeing just the opening slide, SEO Bootcamp.

Yeah, I'm having an issue here. It's that Murphy's Law thing.

Well, then select screen share again and look under the Application Window tab. If you're in Chrome you should see two options, Your Entire Screen and Application Window, and from the Application Window tab you should be able to select just the PowerPoint presentation.

Okay, now I'm going to share this screen. All right. Yeah man, this is definitely taxing.

Still seeing just that SEO Bootcamp slide.

All right, let's try this again. Okay, I'm back on the screen share. Let's go for screen one.
There, now I'm seeing it in editor mode in PowerPoint, so that's a step in the right direction.

All right. Okay.

You can leave it just like that if you want, buddy; I don't think our audience cares.

All right. So basically the goal is really to change the way that you think about SEO. I've been doing this for about 20 years, and typically what I've found is that the foundation, the fundamentals, is the most important thing you do. Eighty percent of the time it's really how you choose your keywords, and we've found that you can really leverage that. So I'd really like to start with that.

That's fine, whatever makes you comfortable, man.

Okay. All right, so for whatever reason, guys... man, I'm really sorry, I'm really fumbling over here with the technology at this point.

Not a problem.

So basically I'm going to show you our process. I've been doing SEO for over 20 years, and we've just learned a lot along the way. If you stay till the end, what I'm going to show you is a number of our clients we were actually able to get ranked without links. In a couple of case studies we're going to show you one with 500-plus keywords ranking, all done from on-page SEO, and it was in a very competitive space: real estate in Los Angeles. At this time the domain authority is only thirteen, and they're up against people like Zillow and Curbed and huge companies like that. So if you stay to the end, I'm definitely going to share that with you. One of the things we're really going to get into, what makes this different, is that it's not about short-term gains; it's really about the space between your ears and how you approach your market
differently. So we'll be sharing a lot of battle-tested strategies, no fly-by-night fairy dust. Basically we're going to focus on some fundamentals and show you how you can create things of that nature. So to summarize...

I'm really flipping this up, guys; this screen is killing me. I really want to show you the other screen here.

So you want to... I mean, dude, hit the presenter button and then go select that window.

All right, let's try it again.

I can edit all this other crap out, so I apologize for all of you watching live; you've just got to bear with us for a moment, or wait till the replay.

So, in the slideshow, is there another way I can just show this other window? That's really the problem. As I said, there's a lot of material, and I have another window open because I'm going to do some over-the-shoulder stuff. All right, let me... unbelievable.

This is what happens live sometimes. So let's see, how do you do the slideshow without it showing your entire desktop? That's the thing. I always put stuff in Google Slides when I'm doing a presentation, because it just seems to work better, but you didn't know that. Well, the problem is right now, and this is definitely what we tried to walk through before, but we had some issues with the other platform, so let me go back and see if I can just change the screen that I'm presenting on, because that would actually solve it. Change the screen...

Yeah, I'm not familiar with PowerPoint, man, so I can't give you any pro guidance on how to fix it. If it was Slides, Google Slides, I could tell you what to do. The issue is just that it's not refreshing over
there; it's sort of sticking with that one screen.

Yeah, that's the problem.

Yeah. All right, time for a beer.

Oh... yeah, it's Miller time.

Okay, so basically I'll just give you guys the backstory again. I started off in 1995. I actually invented a product called the Drive Time aromatherapy diffuser, and I spent a lot of time monkeying around with different types of web applications trying to get it to work and to get some ranking traction with it. We're talking Dogpile and the other early engines; Google wasn't even in inception yet. And since the company name sounded like "Aeron," we were fighting it out with the Aeron chair. So I learned how to do SEO back then, literally took over the first three pages, and was able to get that company some real traction; we ended up distributing to about 17 countries as a result of the on-page SEO efforts we created. So I was able to retire from that company for about four years and take some time off, and then in 2006 I came back and created a company called SEO Design Solutions, started blogging my behind off, created about a thousand in-depth articles on all things SEO, and essentially got ranked for the keyword "SEO" in the number four position. Now, we were also trying to really expedite the process for our on-page SEO, so along the way we created two products that you may be familiar with. The first one is SEO Ultimate, and at the time the only real plugin out there doing this was the All in One SEO Pack. We just wanted to make things streamlined, so we started building our own back at that time, just to expedite the process for our teams to set things up. Then we ended up sharing that plugin with the
WordPress community and, lo and behold, we got two million downloads. So that was one of them. The other one is the SEO Design Framework. It was a way that we could, without having to go into a client's site, because a lot of customers had different CMS systems, use WordPress to build a site that looks just like theirs and then automate all the SEO like a beacon from that site. That's really the way we created those two products. So like I said, I started as just a guy who had an idea and needed to get a company ranked, figured out a lot of stuff along the way, and lo and behold we ended up getting into the SEO space for clients.

Before we go any further, it's really important to think about why. If you think about Neo and Morpheus and Trinity when they go to meet the Merovingian in The Matrix part two, they went without a why, and as he so eloquently states, the only real reason and source of power is why. With that being said, let me tell you why I created this course. Now...

You know, we're still just seeing the "how to rank your website using topical relevance" slide.

Oh my goodness.

That's what I'm saying, man; you're going to have to show it in editor view or else we're not going to be able to see any changes. Somebody commented on the event page: if you press F5 it should advance the slide. I don't know, I've never used it.

Now try this. Let's try this... It hasn't done anything.

So yeah, I think you're going to have to do the editor view, buddy. I'm sorry.

All right, not a problem; you guys get to read my notes.

Can you see the screen?

Yes sir.

All right. So, I was sort of making the analogy, talking about the Merovingian, and how essentially
Trinity, Neo, and Morpheus approached him without a why. So, to give you a backstory on why I created this presentation and why I put this program together: the reality is, if you've got an idea and traction, what I found is that in most instances SEO is just made way too complicated. So I wanted to demystify it and explain it in a way anybody can understand. It's not always about having the right tools; it's about knowing how to use them and why you're using them; that's what matters. So we created a process that's very stackable and simple, where we essentially built five modules, which I'll share with you shortly.

It starts with keyword research and competitor analysis. After that we focus on site architecture and how to integrate those keywords into your site architecture. The third thing we focus on is content structure: we show you how to disambiguate your content, how to use competitors' cues to find the right modifiers and topics, and how to integrate that into your site. After that we go into on-page SEO ops, which is really going to show you how to implement technical SEO. Many of you are familiar with that, but there are always a lot of tips and tricks, and it's one of those things where you're never really done learning; if you think you are, you're sort of fooling yourself. There's always something more to learn. Then after that we get into some off-page SEO ops, which you're already familiar with if you're using the syndication networks.

So, I already explained that I was able to rank for the actual keyword "SEO" in the number four position, and that took about two
years to do. We did that with all on-page SEO, and that's really where I came to the realization that topical relevance is paramount. What we found is that with any site you create, in the beginning you're appealing to the bots and the search engines to rank you. But after you cross that tipping point, I could actually publish an article and within eleven minutes it's ranking on the first page for something with four million competing pages, something that would normally take months to rank for. So the objective at that point is: stop focusing on off-page, get your on-page right, and also develop topical depth and topical breadth. That was the main thing.

I did a lot of things wrong, too. Back when we started, we had about 700 test sites. I used to buy a lot of domains; we had our own PBN network set up, and you sort of figure out what happens when you leave a footprint. We got smacked, and figured out penalty recovery and all that stuff. Also, on the topic of learning how to do a lot of things wrong: I learned how to wreck a motorcycle at 120 miles an hour and survived, so I was actually out for a couple of years trying to recover from that, and that's why you were all wondering why the plugin wasn't being updated or something like that. But long story short, that's another story. Any time you're going to learn, you're going to take some lumps, and that's what I was getting at here.

Another thing: a lot of people have this notion about Google that you have to run from it every time there's an algorithm change or a shift. When you
have topical authority, when you develop that, you really don't have to worry so much about algorithm shifts. As long as your subject matter is really dialed in and you're hitting topical depth and topical breadth, that's going to insulate you from a lot of those different types of penalties, which we'll get into shortly. The fact of the matter is that everybody has to start somewhere, and rather than focusing on short-term things, we're going to share some battle-tested strategies with you, and we really hope you enjoy them.

Sure, you could go out there, light your beard on fire, and start swinging an axe and killing everything, but that's really not the best way to approach your market. Ideally, this is the approach we'd like you to take; it's far more effective for results. At this point you're in one of two places: you're either starting from scratch with a new site, or you already have a website and need to optimize it. Either way you're going to expend energy, so which would you rather be, a flashlight or a laser? If you're just getting started, it's no secret you should target things within reach: you start with your long tails and work your way up to your mid-tail keywords, which we're going to show you in a bit with all those different tactics, and you also need to know what types of content you can leverage to build it out. If you have an existing website, we're going to show you a number of different strategies you can use to either augment or consolidate your themes, so you can really be more powerful, like the laser-beam analogy we mentioned above. Once we get through these slides, guys, like I said, we're
going to go and do some over-the-shoulder training; that's going to be the fun stuff.

I don't know about you, but I'm a chess player, so let me just share the analogy of how chess really relates to this. Much like chess, one wrong move in the opening can cost you the mid-game or the endgame, and as you know, when you get to the first page, that's really where the challenge begins. It's not too difficult to get to the first page with many keywords, unless you're targeting things with several million competing pages, but once you get to the bottom of that first page, that's where the real battle begins, and that's where the power of your on-page and your topicality comes in. Just like in chess, you have to focus on many fronts; you have to make sure you have all the pieces working together toward the common goal. Now, you see this black knight right here fighting an entire army? It's not going to work out really well for that guy, so you don't want to be this guy.

You also have to be realistic about where you're starting and how you're trying to outrank your competition. If I don't know who you're up against, what kind of domain authority they have, how many pages of content they've dedicated to each of these keywords or topics, or what the quality of their content is... you have to use all of this to form the proper strategy. This sort of leads into the next slide, and this is really one of the main things, particularly when you have clients: they all want to rank for that big juicy keyword, but for every one of those phrases there are a lot of things
you have to do to get ranked, so it's all about targeting things within reach. If you're starting out, for example, it's better to stick with something at three hundred thousand competing pages or less instead of targeting the larger phrases, because you need to build up the proper structure to create that referential integrity, and you can do that with website silo architecture, which we're going to be sharing tactics for. The consideration is: the more rankings you want, the more topical you need to be. Find the pain in the market and find those modifiers people are using, and you can leverage the right pages to rank for questions and answers, for example. Every time you create a ranking that way, it parlays into more authority for the site. So don't delay; you can capture dozens of long-tail phrases rather easily, and as a result it sends a very clear signal. Then you can scale after that; you can basically use your site like a dynamo and start ranking other pages.

The analogy we used before with chess: it's all about attacking on multiple keyword fronts, and you have to treat your keywords and your categories like pawns; you have to encapsulate the enemy, or anyone who's trying to outrank you. Depending on how you target that pain, you want to find the best transactional keywords, know the difference between educational and transactional intent, target all the modifiers, and leverage the questions and answers people are asking to drive traffic. I'll show some tactics out of our training to showcase that. Then we'll move along to one of the last
slides here, which is: before you attack, you need to get this right. This is the way it goes down in the South, as you all know. Eighty percent of your time should literally be spent on keyword research, because if you're targeting the wrong phrases, nothing else really matters. And site architecture: we've found it's one of the most effective things you can do to streamline ranking factors. I know everybody likes to focus on off-page; I've been doing this for a long time, and we really like to get that on-page dialed in. We've found that just by creating that structure properly, you can create rankings even without links. Any questions so far?

I don't think so... hold on a minute. Nope, just some comments, no questions yet.

All right. I know it's pretty basic while we're going through the slides here, but just to give you an idea of what the training is about: we're going to show people how to find the best keywords, how to structure the site architecture, how to write optimized content that ranks (excuse me, guys), and after that I'm going to show you how to create schema structured markup and things of that nature, and internal linking. I'd prefer to just go ahead; I sort of jumped ahead, but the idea here is that Market Recon is the first module. It's really about finding the best keywords, which lets you rank on multiple fronts. Some of the things we teach, for example: how to find the most relevant keywords and modifiers, related searches, how to leverage the power of the knowledge graph, and how to use questions found in the knowledge graph inside your content. So let's jump over here. Go ahead.
says 24:44 pretty much where it all starts and just 24:45 think of a key where we’re gonna look 24:46 for so it’s using the word modern homes 24:49 for example know nothing about basically 24:56 mining for relevant keywords here so 24:59 this one the first in jail we suggest 25:00 basically try to find phrases that have 25:03 some type of commercial value and so 25:08 this is see here actually I should 25:14 prefer to use others phrase I think in 25:16 the training I was outlining something 25:17 about insurance so this is use on an 25:21 insurance people outside of auto 25:24 insurance you’re gonna find related 25:26 searches but you also want to find 25:27 things that have commercial intent so 25:29 chief auto insurance quotes for example 25:31 would be one of those phrases so from 25:33 here we dig in a little deeper 25:34 we’d also look at these the questions 25:38 that people also ask and so there’s ways 25:40 that you can actually implement 25:41 question-and-answer schema markup where 25:44 you’re going to actually use questions 25:46 in your content like this and then when 25:48 you add schema markup around those 25:50 questions it actually increases your 25:52 chances of surfacing in the featured 25:54 snippets and the Google results so one 25:56 of the first exercises we do is we ask 25:58 people to go ahead collect all these 26:00 different modifiers and then continue to 26:01 drill into each and every one of these 26:03 making sure that you write them down and 26:07 sort of collect as much data as possible 26:08 in addition to the questions that person 26:10 would be using to find or mind the pain 26:13 in the market that’s what you would do 26:17 is you would actually go to the the 26:19 first competitor see what page they’re 26:26 ranking for 26:27 there’s also a number of ways you can do 26:29 that so let’s let this open up here they 26:33 got a terribly slow loading page man 26:35 yeah okay so without getting into all 26:40 that the top go relevance 
which we’re 26:42 gonna get into a little later if they’ve 26:45 got sixty nine deep links to the page 26:46 but here’s an example one of things I 26:47 want to show you you know for them to 26:49 get ranked for this this is great 26:50 because they’ve actually done their 26:52 homework they built dropper deep links 26:54 but this is actually showing up this is 26:56 a commercial keyword that’s showing up 26:57 in a question that’s highly sought after 27:00 which is who has the cheapest car 27:02 insurance but one of the things like you 27:04 want to look at is what did it take them 27:06 to get right for that keyword so we just 27:09 go ahead and create a new tab colon 27:12 command I mean doc however it is then 27:17 you can actually look at the phrase by 27:18 packing the keyword that you’re looking 27:19 at so not rocket science but this gives 27:23 you the topical depth so you need to put 27:27 a space here right now you can see that 27:31 business of modifiers occurring this is 27:33 a very you know deep authority site 27:36 they’ve got 50 100 instances of this 27:38 keyword now this could also be 27:40 navigation but typically its gonna show 27:41 you in rank order here the different 27:44 types of things are there they’re doing 27:46 within their site so you can see all the 27:47 different modifiers they have here the 27:50 gut stuff this is one of the phrase 27:51 that’s what I thought was funny as I’ll 27:52 show you this a little later but you 27:53 know car insurance for college students 27:56 this is a perfect example of a longtail 27:57 you can see that DUI car insurance so 28:00 they’ve got numbers of instances that 28:01 are also using geographic modifiers 28:03 they’re targeting that route phrase by 28:05 ball rolling all those different 28:07 keywords so right now I mean this is a 28:09 really steep spike if you wanted to find 28:11 out what their entire site was like 28:13 obviously just using the site colon from 28:15 a 28:18 eight thousand nine 
hundred pages been 28:19 50-100 of those pages are in some way 28:21 shape or form incorporating that shingle 28:23 chief car insurance and so it’s like 28:25 sixty percent or more yeah that’s crazy 28:27 precisely so that’s just how they’re 28:29 essentially using this this type of 28:31 on-page relevancy so that’s one of the 28:33 first things we suggest but you can go 28:35 back this is a very simple exercise you 28:38 start with a root phrase you find 28:40 something that has commercial intent you 28:42 can go through and at that point find 28:45 the top-ranking person for that keyword 28:48 use that that keyword using the Select 28:50 colon command put a space after it to 28:52 find out how many shingles or how many 28:54 pages they’ve dedicated to that topic so 28:56 that’s just one little simple technique 28:58 we go back let’s go here I want to show 29:03 you a couple other techniques we use in 29:05 market recon a lot of you know that 29:11 questions and answers are really big so 29:14 one of the tools if you haven’t seen it 29:15 before it’s called answering the public 29:18 great tool because it allows you to 29:20 simply ask questions and this is the 29:23 kind of content if you know the 29:24 thresholds that you’re looking at this 29:26 is the kind of thing that’s really going 29:27 to show you a way that you can just show 29:31 you this earthquake 29:37 it’s pretty straightforward your keyword 29:43 use your location and as a result of 29:46 that’s gonna show you all these 29:48 different types of questions that people 29:49 are asking now this is where you 29:50 essentially get your content that you’re 29:53 going to use to build your supporting 29:54 articles your blog posts your different 29:57 types of social content etc so it’s 30:01 pretty straightforward you can see how 30:02 they’ve broken things into you who what 30:05 how why where when and if you are 30:07 familiar with semantic entities and 30:10 things of that nature it’s pretty much 
And if you're familiar with semantic entities and things of that nature, it's pretty much the same idea: who, what, where, places, etc. This also gives you a lot of really cool ideas: "can auto insurance be deducted on taxes," "can auto insurance companies deny coverage." So you can see, just in the "can" or in the "will" buckets... and if you go down, it actually shows you a number of different modifiers you can use. In this case you can see the prepositions and the questions, and there's a way to just download it all to CSV, which is really straightforward. This is just a great place to mine new ideas.

So let's jump over here. The objective, and you can see the questions and the prepositions here, is that this is something you could actually write a blog post on if you're going after the phrase "auto insurance premiums," for example. There are a couple of ways you can find out how competitive that is. If I just wrap it in quotation marks in Google and press return, you find out how many competing pages you're up against, and you can see that's not very competitive. The way that we do this, there are competitive keyword thresholds you want to take into consideration. Anything that's really a million competing pages or more should technically be a silo, a primary target or topic that you would then use to build out a site segment, a subdomain, or a subfolder, if you will. In this case we're coming in well under that; 50,000 competing pages or less is typically something you'd use for blog post ideas, and 50,000 to 300,000 competing pages you won't rank for right away unless you have some authority, but if you're starting from scratch, that's where you can actually start to build relevance and authority. So this would be the category, if you will: "auto insurance premiums" under "auto insurance."
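The who/what/how/why buckets just described (the same wheel Answer the Public draws, and the shape of its CSV export) are easy to reproduce once you have the mined questions as plain strings. A sketch, with made-up example questions standing in for real export rows:

```python
from collections import defaultdict

QUESTION_WORDS = ("who", "what", "how", "why", "where", "when", "can", "will")

def bucket_questions(questions):
    """Group mined keyword questions by their leading interrogative."""
    buckets = defaultdict(list)
    for q in questions:
        first = q.lower().split()[0]
        key = first if first in QUESTION_WORDS else "other"
        buckets[key].append(q)
    return dict(buckets)

# Hypothetical rows, as they might come out of a question-mining export
mined = [
    "can auto insurance be deducted on taxes",
    "can auto insurance companies deny coverage",
    "who has the cheapest car insurance",
    "auto insurance premiums by state",
]
for word, qs in bucket_questions(mined).items():
    print(word, len(qs))
```

Each bucket then becomes a candidate pool for the supporting articles and blog posts the speaker mentions.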
premiums under auto insurance 32:08 but if I were to find out you know I was 32:10 asking about that one particular 32:11 question and where would it fit in the 32:12 site architecture you can really quickly 32:14 determine that by just pasting that in 32:18 in quotation marks you know this is 32:20 really basic stuff it’s not too 32:22 complicated but once it gets really 32:23 about how you put it together and 32:26 understanding where are things fit in 32:27 the site architecture this will really 32:28 help you so you can see that this is a 32:30 keyword that has four thousand competing 32:31 pages in Google this is literally 32:34 something that you could topple you 32:35 could write this you could use this kind 32:38 of a question on a page you can make 32:41 this the topic of the article or use it 32:44 in sequence using something that’s a 32:46 little more preferred now which would be 32:47 using in-depth content so if you looked 32:50 at this as finding 10 questions about 32:52 your primary topic that’s about auto 32:54 insurance premiums which you could 32:56 easily do if you just simply just mined 32:58 the data from looking at the different 33:00 questions that we saw here this also 33:02 could be another thing where you’re 33:04 going to look for auto insurance 33:06 premiums and then you’re gonna use that 33:07 first technique that we showed you and 33:11 then you’re going to go back and drill 33:14 into that alone so let me go ahead and 33:15 just type in auto chest premiums again 33:17 and you’re gonna simply use Google to 33:21 data mine itself so right here you’ve 33:22 got additional questions that you can 33:24 use to leverage and content you can look 33:27 at more related phrases here and you can 33:30 also use things like buzzsumo if you’re 33:32 interested in finding excuse me what ii 33:40 can use things like buzzsumo 33:46 some keyword now my keyboard hit one of 33:56 these crazy keys duh yeah that’s 33:59 just going in like narrow region 
mode no 34:05 pardon my french it’s right min I don’t 34:10 know how to get out of this are you 34:14 serious I literally cannot use my 34:19 keyboard oh my god 34:21 ha yeah it’s like some kind of uh 34:24 anybody know how to get out of narration 34:27 mode 34:27 perhaps I’m gonna have to go google it I 34:32 wish I could I can hit the back button 34:33 yeah right how do you recover from 34:37 narration mode oh you know what though 34:48 the stars weren’t lined up right today 34:50 buddy I can tell you that that’s all 34:52 right Murphy’s Law but a control panel 34:56 how to disable narrator is that what it 34:58 is yeah it’s some kind of it’s uh crap 35:04 oh here’s windows keyboard shortcuts for 35:10 accessibility maybe that’s what you’re 35:11 looking for right yes yes that’s 35:14 accident so open narrator settings 35:21 windows key + ctrl + n okay gotta love a 35:29 live webinar yeah that’s the only thing 35:44 that’s giving me on Windows 10 is that 35:45 one Windows key ctrl + n 35:55 here’s another one let me find this one 36:01 windows plus f-type narrator in the 36:05 Start menu search box if you could type 36:09 yeah we go thank you thank you that was 36:12 it alright okay so alright no I’d like 36:20 to go back to where I was I was going to 36:21 buzzsumo alright so I just want to show 36:23 you guys oh now this isn’t working on 36:27 the screen 36:30 he just greg dreamer just said press 36:33 caps lock + on f our windows caps lock 36:39 caps lock excuse me 36:40 caps lock + escape alright let’s try 36:45 this 36:45 Oh dice I’ve never had that problem 36:58 thank God now that I know about it it’s 37:00 gonna happen though I don’t even know 37:02 what I type it’s not working guys over 37:19 here I’ve got it yes go back to Firefox 37:25 it’s not allowing it can you open Chrome 37:29 instead or was that not possible 37:31 let’s try I hate to make you move switch 37:36 again but that’s alright let’s just try 37:38 this it’s working there buddy alright 
37:42 Firefox is doesn’t like you so there 37:44 must be a narrator setting in Firefox 37:46 yeah I think so 37:50 something else is going on yeah look 37:54 something else my friend it’s not even I 38:02 working well we could reschedule for 38:08 another date man if we have to looks 38:10 like I think we might have two guys I’m 38:12 not just coming up with something I 38:14 literally I’m locked out of my system 38:15 all right well guys he’s locked out and 38:19 we apologize for the terrible mishap 38:21 there’s more today with this webinar 38:23 obviously and definitely don’t feel bad 38:26 man we screwed up to start with so it’s 38:27 really our fault to begin with so don’t 38:29 don’t sweat it we can reschedule do you 38:32 want to talk just about the offer now 38:34 for those that are still on and I mean 38:36 we can go ahead and make I know some 38:37 people are going to want to buy the 38:39 product or anything so let’s just talk 38:41 about that if you don’t mind okay well 38:43 hopefully I can still get to my slides 38:45 over here so if nothing else so 38:47 basically today what I’ve done for you 38:49 guys is typically the course goes for 38:51 about a thousand I did do a pre-launch 38:53 before what I’m just gonna do is 38:55 actually throw in the five-week online 38:58 training which is the course itself and 38:59 I want to throw in a twenty site license 39:01 of SEO ultimate plus and so if you’re an 39:03 agency you can actually get started with 39:05 your clients using SEO ultimate plus we 39:07 actually have seventeen step setup guide 39:09 that’s part of that whole site which is 39:13 SK ultimate plus com 39:15 I’m gonna throw in the SEO design 39:16 framework which also has a silo training 39:18 course in there 28 different skins 39:20 that’s so you can actually use the 39:22 framework it’s an unlimited license to 39:23 build out as many sites as you’d like 39:24 for the first ten people is going to 39:27 throw in an hour consultation so if you 
got stuck somewhere along the way, you can technically call me up, or schedule a time, and we could just walk through and make sure that you've got your settings set up the way you want — talk about site architecture, some on-page tips, getting some semantic stuff, whatever you wanted. Dude, that's worth the price of admission right there. And you know, that was the idea. That's crazy. Now, I've got it set up over at ultimate SEO bootcamp dot com forward slash semantic mastery, and then you can use support at ultimate SEO bootcamp dot com, and I'll be standing by there. I just really wish that this sort of scenario — well, we're gonna have you on again, we're gonna do a second attempt at this, and next time we'll let you know — like, if you need some help or whatever, I'll help you get the PowerPoint into Google Slides and all that, so it'll work much more smoothly. So that's okay — this was a practice run, and we apologize to any of you guys that are on; you'll either have to come back and watch it when we do the encore presentation, which will be much better, I promise, or just wait until we send out the replay. But I also want to talk about a quick bonus that we're gonna throw in — and Jeff, I'll let you go through the last couple of slides too — but I just wanted to mention this first of all: the SEO Design Framework, guys. I mentioned at the beginning, while we were waiting for Jeffrey to come on, that I switched over to using the SEO Ultimate Plus plugin from now on, for all my WordPress sites. No longer am I going to be using Yoast; it's a much better plugin. Also, going forward, all my sites are gonna be built on the SEO Design Framework
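As a concrete aside, the siloed site structure being described throughout this webinar — theme-tight sections with supporting pages nested under a silo landing page — can be sketched in a few lines. The silo names, page slugs, and domain below are hypothetical, purely for illustration; they are not taken from the course:

```python
# Minimal sketch of a "physical" silo structure: each supporting page lives
# under its silo's landing page, so URLs and internal links reinforce one theme.
silos = {
    "auto-insurance": [
        "how-are-premiums-calculated",
        "what-raises-your-premium",
        "premium-discounts",
    ],
    "home-insurance": [
        "replacement-cost-vs-market-value",
    ],
}

def silo_urls(silos, base="https://example.com"):
    """Return every URL in the silo tree: landing pages plus supporting pages."""
    urls = []
    for silo, pages in silos.items():
        urls.append(f"{base}/{silo}/")                      # silo landing page
        urls.extend(f"{base}/{silo}/{page}/" for page in pages)  # children
    return urls

for url in silo_urls(silos):
    print(url)
```

The point of the nesting is that every supporting page's URL, breadcrumb, and internal links stay inside one theme, which is the "tight silo" idea repeated later in the session.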
— in fact, I'm working on building a new agency right now, as some of you are probably already aware, and our new agency site is being built on that framework. All subsequent sites, for potentially any clients that come in, are gonna be built on that same framework, because it's so customizable, and it's just so great for setting up siloed sites — it's really, really powerful. There's a bit of a learning curve, at least for me, because I'm not a web-designer type guy, but again, it's such a powerful framework that I can't see wanting to use anything else — that's my point. So — let's see if I can actually jump over, since I was going to walk through some of this stuff. You can see here — I won't be able to click anything, but I'll just show you — this is how the inside of the framework looks. We've actually broken things up into modules, much like the plugin. So this is where you'd actually implement your website silo architecture, for example. Many of you are familiar with the Network Empire silo plugin — that actually was our design, and we shared it with them, and then they ended up doing some other things with it, like the video silo plugin, but this originally came from our framework. So it's really straightforward and quick, and they're essentially compatible: if you build things out with Network Empire's plugin, you could actually use it here as well; it will recognize that, because it's the same core. You can see that you literally have the ability to implement silo architecture quickly and easily. In the SEO tab we put all the settings for SEO Ultimate, so essentially it's seamless how it works together. All right, you have options — for example, let's just load this up so you can see how this works. You have all your modules available from here; you could jump into any one of your modules, if you haven't used the plugin, and really customize things. The framework also has a global and page-level settings format. You have global layout settings — this is the global area, so whatever you do here, unless you change the settings on the front end, you're essentially going to have on every page or post. But you also have the ability — say, for example, you want a left sidebar, both sidebars, no sidebar, etc. on certain pages — to override any of the settings that you create, with local page-level settings. This is where you can really go crazy with your styles. Instead of having to implement things in your style sheet — which you can do as well, because you can actually add code to your header or footer, etc. — you have all the options here. Now, our developer is extremely granular. It's built on Bootstrap, so it has all of the Bootstrap variables inside, so if you really want to get in there and tweak every little thing, you can do that too, or you can simply leave the defaults in place, go to the next section, and really customize that. You can create child themes with this — I mean, it's really ridiculous. You can import all of your different — you can use your Google font families, set up your default typography (I'm gonna give you a couple of examples too), and you can completely customize your navigation structures and your menu styling. You have different types of menus, from sticky, default, overlay, fixed-to-top, etc. — you can really get in there, extremely granular. There are like a thousand options inside of the admin, and that's what he was saying: there is a learning curve, but once you get it, it's literally whatever you want to do — and if you don't want to, you don't have to touch a line of code to implement it. Yeah, that's what I like about it, because, again, there's a bit of a learning curve, but it's one of those things, guys: once you've learned the framework and how to customize it, you can repeat that over and over again. And you know, I've always used one particular theme developer for the last several years, because it was just the one that I'd known, and I was comfortable and could build sites quickly with it, but it was very limited in its capability, its functionality, whereas the SEO Design Framework is almost unlimited — it's so customizable, it's crazy. It's awesome, and I appreciate it. A couple of guys have already purchased. And you know, I told you we're gonna get some lumps every now and then — you learn the hard way not to set up your slides and all that stuff at events; that was definitely me. But yeah, just going back to the tools: this literally came from trial and error, from basically trying to save time, so we didn't have to worry about what our clients' sites could or couldn't do. Another thing that's really cool, which I was gonna show you guys when we get to that component of this, is that you really want to eliminate a lot of duplicity in your site architecture. Things like, you know, your sidebars, or similar footprints in the header or footer, can really leave a lot of — they can really —
it's about creating more disambiguation. What I mean by that is getting rid of all the noise. You know, green widgets should only have things about green widgets, or things that are related to widgets — they shouldn't have everything. So in the framework you have the ability to create as many navigation structures as you want, and then, on a page-by-page level, you can actually show that navigation structure in any section that you're in. And if you think about that — if you look at Amazon, what I did is I dug into electronics, the electronics silo, and you see that they have deals, bestsellers, and things of that nature, but they also have all these computers, routers, etc. And if I go into computers and routers, then dynamically the navigation structure is going to change and become even more refined. That's what you could do if you implemented that type of granularity. So you're talking about, essentially, silo-specific menus that you can custom-design at the page or post level? Precisely. And — I won't be able to click, which creates a challenge in and of itself, but I can still show you what it would look like if I were to do that. I would simply come down to — and this is the SEO Ultimate page-level settings — you can do a lot of cool stuff here. Like, you can do rel prev/next — I don't know if you guys are familiar with that, but there's a distinction between canonical and rel prev/next. Rel prev/next takes a series of documents — if you've seen a paginated article, where it might be spread across five pages, you can actually use the rel prev/next option to consolidate that article. So this is one of the tactics that we talked about: you could create a, you know, two-thousand-, five-thousand-, eight-thousand-word article across an entire segment of your site, and then, instead of treating each page like a different document, you can use the rel prev/next inside of SEO Ultimate Plus to essentially daisy-chain these articles together. The way it works is, wherever you start, you put the next URL in, and then you work down the chain. So if I started at number one, I'd say that the next one is number two, and on the next page I'd say the previous page is one and the next page is three, and so on and so forth. We have tutorials on how to set that type of markup up, but this is one of the things you can do. The distinction between this and the canonical tag is: when you use a canonical tag, you're essentially telling a search engine that you don't want any of these other pages to rank — you want to pass all the referential integrity to the one URL that you want. But if you're using a variety of different long-tails — say, for example, using questions and answers like we were talking about before — you can get all those questions and answers to rank, so that each one of those pages is still indexed, and Google would then determine whether it wants to rank that individual page or the first page, the one that's considered the head of the article, if you will. So this is a really powerful function, and that's inside of the plugin. But let's go down to the framework, so you can see how these things work together. If I went over to header structure — excuse me — I have the ability to turn my navigation off; I can pull from global, for example; but if I wanted custom navigation on this page, then I just would choose which
custom navigation I would want for this page only. So, extremely powerful stuff. You could also display a custom header, which means I could take away or put things where I wanted — set it with the slider, simply drag it down here, grab what I want on this page, and add another header block — and in my header block two I can use things like hide-on or show-on. So it's got really robust functionality. Mm-hmm. So, if we were to look at the widget areas — let's just do that real quick — you could set up widgets that only show on certain pages. Let's take that Amazon example a little further: I'm in a custom site segment, I've got my custom navigation, but say that I want custom sidebars. I could either create unlimited sidebars and put one wherever I want on the page, I could use it in shortcode format, or I could literally put it on the page inside the builder. Or I could go over here — now that we've shown you there's header block two — add something, and, since you have the hide-on or show-on option, say: only show that on a certain page. So this is what I'm talking about with granular options. It's overkill in most instances, but if you want to get something done, and you really want to dial it in to make sure that you're sculpting on-page elements properly, this is the theme you can actually do it with. So that's super powerful, because you can prevent bleeding your theme that way, by cutting your silos really tight — and like he mentions, that includes header navigation, sidebars, and footer navigation, all those things that typically bleed your theme, guys. Think about it. So it's actually very important to try to compartmentalize that stuff, and that's how you really reinforce the relevancy of an entire silo. Precisely, precisely. And then, on top of that, you have a WYSIWYG drag-and-drop builder — basically, you know, it's pretty straightforward: you click the plus sign and you add what you want. So you can see here we've got text blocks, video embeds — and if I wanted to add a custom widget, like one I created on another page, I would choose it from here; it would show up in my custom widgets. So I could create a custom widget for a custom section, and in that custom widget show only the navigation that I wanted. So really cool stuff like that — extremely granular. Yes, sir. Oh, I forgot to mention the bonus that I was going to throw in, guys: anybody that ends up picking this up, from the webinar today or from the replay or whatever, just reach out to us at support at semantic mastery dot com, and we're gonna throw in Content Kingpin, which is a Semantic Mastery product — it's a $300 product — and it talks about how to curate content and how to outsource content curation. Which is perfect, because I think it's a perfect complement for this: one of the things you're going to want to do is start building out your content, adding depth to your silos, which is what he was talking about, right? So, go wide or go deep — well, using curated content, you can add as much depth as you want to a particular silo. So the Content Kingpin course, we're gonna throw that in as well. Powerful. And we didn't even get to the on-page and off-page stuff, but yeah — syndication networks work hand-in-hand with that; you can even use that to boost your
syndication network content, and so get those on autopilot, and rinse and repeat. Yes, sir. So, also, for those of you who purchase, I'll be going through, and I'll send you some additional links to get the actual SEO Ultimate license, as well as sign you up for the framework. So I look forward to trying this again when we don't have the Murphy's Law thing going on — and I'll probably just do it strictly over the shoulder, since, you know, the intro is what it is. I'm not really a slide guy — it took me like days to put the slides together — I'd rather just show you some over-the-shoulder stuff and get into it like that. So that's fine, whatever you're comfortable with, man — I'd rather you be comfortable and give a good presentation, you know what I mean. All right, well, like I said, guys, I really appreciate the time. Wish it would have been a little different as far as the technology, but really grateful nonetheless, and looking forward to the next one. Just a couple of questions real quick, Jeffrey, before we wrap it up. And again, guys, all of you that signed up originally through WebinarJam, we're gonna send out a notification when we schedule the encore — and actually, we probably won't send this version out as a replay; we'll wait until we have the encore, since it would take me too long to edit all this stuff out anyway. Mohammed had a question — he said: does the SEO Design Framework work with any theme, or is it a theme in itself? It's a theme itself, yeah — so you can do things with it: you can build your own child themes with it, or you can just use the global settings to pretty much customize what you want. And there's also an export function: so you create a template that's perfect for you — well, now you just export that. It creates a little base64 export, and you can copy and paste that into your next framework install, and all of your settings will be there — meaning the look, feel, navigation, the way that you set up your global settings, will all be there. Yep. And Kendrick had a question: can you clarify the difference between what has the twenty-site license and what has the unlimited build option? The framework has the unlimited build option, and the — I'm sorry — the SEO Ultimate plugin has the twenty-site license. There is a license for the plugin; the framework you can use on as many sites as you'd like. Very good. "We can still benefit if we don't use the theme, right?" Yes, Mohammed — obviously the theme works hand in hand, but you can still apply the SEO bootcamp training to any sort of theme that you want; it doesn't have to be the SEO Design Framework. Correct — we're just really showing you ways that you can leverage it. The framework was built in such a way that it makes it extremely easy to implement the stuff that we're going to show you, but it can be done with anything — there's always a way. Yeah, and you know what, Jeff, you're right, it can be done with anything, but trust me — I know as well as you do, I'm sure — when you try to patch together advanced silo architecture into themes that aren't designed to support that kind of thing, it becomes a nightmare, trying to set up custom menus using, like, Widget Logic and that kind of stuff. It's just a real pain in the ass. So if you're gonna be doing advanced silo architecture, Mohammed, I would suggest that you learn the framework if you can, or hire somebody to do it. So yeah, it's
gonna save you a lot of time. And then, if you harvest questions and answers — you know, there are tools like Answer The Public; my favorite is Power Suggest Pro — you can use those with different modifiers and wildcards and literally tackle the market with hundreds of questions. At that point, pop it in there, put it into a silo, and use the Deep Link Juggernaut inside of SEO Ultimate. I was going to show you some of that stuff. Basically, you have the automatic linking option — a lot of people say, oh, don't automatically link — well, you do have the option, and I'll show you this really quick since we're over here inside of SEO Ultimate, to rein it in, if you will. You don't have to use every instance. In the content link settings, what you can do is use a variety of different modifiers. So if you know that you're gonna be targeting, say, "auto insurance," or the word "quotes," or "near me," or something like that, and you integrate that into your content structure when you're building out your content, you can simply say: don't use the same anchor text more than one time per page or post, and don't link to the same destination more than one time. And then what happens is, when you're creating content over here — we call this feature Instant Post Propulsion — say, for example, "auto insurance premiums" was my page, and I wanted this to be the destination any time the word "premium" was mentioned. I would go down to the Links tab right here and simply type in the keywords — the various modifiers, if you will — that I want this page to rank for, and as a result, any time that keyword were mentioned somewhere else, it would then link to this page. So if you know that you're only gonna be writing about certain topics in certain areas, you can use this to sculpt ranking factors as well. What I like to do is at least do the root phrase, the plural variation, and maybe a couple of modifiers. By doing that, and using it in tandem with "don't link the same anchor text more than one time per page or post," you don't have multiple instances linking, but you also have that topical variety, where you're not using the same anchor text over and over and over again. Does that make sense? Yeah. I can also go over here, and you can actually globally decrease settings — you can add a dampener, basically, next to each keyword. So if I save this and go back to the main settings: say there's a keyword that I don't want to have over-optimized — I can literally set it like so. You'll notice in my content link settings you have this dampener; basically, if I were to set it to 90 percent, then it would only link 10 percent of the times that it would have before. So I can dial it in so that it doesn't over-optimize, or link too many times internally within the site. That's another thing you can do. It's another strategy I'm going to share, where at least you use some of your modifiers, or your root phrases, or your plural variation, or something, so that no matter what, somewhere in your content you get some link flow back to the pages that really need more emphasis. Yeah, and that's really powerful too, because, like you said, the pages that you end up ranking on page one — you can use those very strategically to link with
internal links to other pages that you need to push some power — or, you know, link equity — over to, and you can even rank other pages just from internal link structure, done correctly. Absolutely — and I can even show you a case in point. I wish I could do more, but I still can't use the — let me see if I can find this. This neighborhoods site is one that we actually built out, and this is somebody who's in an extremely competitive space — real estate for Los Angeles. There were various nodes that we were looking at: modern homes, for example; they were looking at things to do, and all these different types of things. Well, we settled on building out the mid-century modern homes silo, really, and one of the things that we did, as you can see here, is that we also focused on the different architects, and we started to rank for keywords related to those architects — like the Wong house, the different places. Now you'll see, at the top of page two, that all these keywords that really matter to the business — because they were considered within that cluster — are now bubbling up, and this is a site that has a domain authority of 13. If I go back to the site here — I did include this; I was gonna progressively move across it; see if I can find it — here they are. So you'll notice that we kept a really strict silo structure, where we used "mid-century modern homes for sale," "architecture and decor," "furniture" — these were really the ones that we focused on. Another thing that we did is build out neighborhood pages for each one of them, with in-depth articles on each of these pages. Those are called geo pages, right? Yeah — geo for the neighborhood, in this case, because, you know, Los Angeles is a big place. And the third thing that we did is we actually used WordLift, which is a very powerful plugin that allows you to create your own triples — essentially, it leverages the knowledge graph and linked data. For example, if you see anything like this, it actually has the ability to curate this content from linked data — you know, every one of these things. So you're essentially creating a Wikipedia effect within your site, where you're actually creating triples; and then you edit that content — the person actually added to it — so that you're minimizing the footprint of duplicity and it's considered unique. And as a result, any time other phrases were mentioned, you have the ability to determine which other phrases you want to have marked up. So you can see that "case study house" is one of the keywords it's ranking for — that's because we chose it, and in this case the case study houses content was actually curated. As a result, every one of them has its own page. In addition to that, we actually optimized the MLS. You can see the difference between the curated content here and things that have links: if we go back to mid-century modern homes for sale, or any one of the neighborhood pages, for example, you'd see — I'd go here — the MLS actually has hyperlinks, and so we used the topical breadth. Every time you see any of these different properties over here — if I were to just find it, let me go ahead and go to a neighborhood, and then, say, look for things for sale in that neighborhood.
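To make the "triples" idea above concrete: linked-data plugins like WordLift emit schema.org markup (commonly as JSON-LD) so that an entity page describes itself to the knowledge graph. Here is a minimal hand-rolled sketch for a neighborhood geo page — the helper, URL, and data are hypothetical illustrations, and WordLift's real output is much richer:

```python
import json

def neighborhood_jsonld(name, city, url, same_as=None):
    """Build minimal schema.org JSON-LD for a neighborhood ("geo") page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Place",
        "name": name,
        "url": url,
        # ties the neighborhood entity to its parent city entity
        "containedInPlace": {"@type": "City", "name": city},
    }
    if same_as:
        # sameAs points at linked-data sources, e.g. a Wikipedia entry,
        # which is what creates the "Wikipedia effect" described above
        data["sameAs"] = same_as
    return json.dumps(data, indent=2)

print(neighborhood_jsonld(
    "Atwater Village",
    "Los Angeles",
    "https://example.com/neighborhoods/atwater-village/",
    same_as=["https://en.wikipedia.org/wiki/Atwater_Village,_Los_Angeles"],
))
```

Each `name`/`containedInPlace`/`sameAs` statement is effectively one subject-predicate-object triple about the neighborhood entity.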
So — Atwater Village real estate. We used relevant anchor text to go to the right place, and then we followed that up using canonical tags, and really built up silo phrases for the neighborhoods — you can see "mid-century modern architecture," etc. — which resulted in these types of pages literally ranking by virtue of their own internal link structures. Sounds crazy, but that's where you're seeing all this stuff — and this is huge, because these are all really great phrases, and they're neighborhood-based. So all of this happens basically from exactly what you were saying, Bradley: from how you leverage those term weights and how you leverage the on-page aspect. We like to use an internal link ratio of one link for every hundred words — so if you've got a thousand words, you can have up to ten internal links on that page to somewhere else — but if you've got a lot of link flow coming into that page, you can literally funnel it wherever you want. You know, removing links from a page makes each remaining link stronger, so you want to make sure that you're sculpting where that flow moves. And I've seen examples of websites, particularly in the legal industry back in the day — in case you're not familiar with it, PDFs pool a lot of link flow — so if you have PDFs on your site, one of the quickest ways to get those things to rank, or to funnel that ranking factor somewhere else, is to go back, edit them, and make sure that you put internal links back to the pages that you want. That's where you hit your silo pages with links that you can place in PDFs. You can also do some stuff like the noindex, follow — and hit those hard with links — and it doesn't leave a footprint, because it's not technically indexed in Google, but it's like a sponge that just helps you rank. Google knows it's there, guys — they just don't display it in the index. Precisely. So, you know, with off-page we've actually experimented with noindex PBNs, and it still works. Oh yeah, absolutely, absolutely — and you can beat them up, and they don't really leave a signal, because they're not trying to rank; therefore they're not in the index, hence, you know, there's no penalty. So just keep that one under wraps, kind of thing. But yeah, powerful stuff, man. So this is really just, you know, leveraging link flow in the site the right way. We did end up putting a syndication network on it — like what you guys did on this site — just so that it can create some signals. I think there are about a hundred seventy links that came back to the site, but this was literally just a result of the site itself — you know, keeping the silos tight, using the neighborhoods — and then she was actually doing some blogging on top of that. So I suspect that this should hit page one for mid-century modern architecture, as well as homes for sale and all that — it's on the move, and this is the time now. Wow — it's been online for about a year. We had to start from scratch; it was really jacked up. There was some stuff going on where somebody had used the silo plugin wrong; it was just a mess — there were like 60 or 70 different plugins — so we had to scrap the site, start from scratch, rebuild the site, and be super specific on what silos we allowed and how it was
all 65:03 gonna work put it all together and then 65:06 just you know let the site get indexed 65:08 and played the waiting game but it’s 65:09 like I said spent about nine months to a 65:11 year now and there hasn’t been any off 65:13 page link building and it’s like I said 65:15 it’s pushing up if you were to google 65:17 the Wong house or anything I’m sure do 65:21 it yet let me just see if I can go back 65:23 to krump let’s try this just to show you 65:28 try this 65:37 60 or 70 plugins what’s that look like 65:40 man I don’t even want to know like holy 65:42 crap yeah it’s still not letting me not 65:45 still not yeah the letter H is weird the 65:51 letter H is gone so the wrong house 65:52 right that’s what I was getting at 65:54 you’ve got things like la curb but 65:56 here’s this here’s the client site so 65:57 here’s what I’m getting at these are all 65:59 you know mass was realtor.com they’re 66:02 out ranking realtor PopSugar really 66:04 strong sites take sunset these guys are 66:06 killing it but here’s an example just 66:09 this stuff really works and so that’s 66:11 all I was getting at and we’ve done this 66:13 numerous times 500 plus keywords no 66:15 backlinks so it’s really that’s the kind 66:18 of thing that we’re teaching it’s a 66:19 linear process that we lay out instead 66:21 of the SEO bootcamp training showing you 66:23 how to look for all the different 66:25 metrics like I said it doesn’t have to 66:27 be ridiculously complicated but it just 66:28 shows you the stackable process start 66:30 here do this create your in-depth 66:32 content pay attention to you like I said 66:34 this is really like going back to site 66:36 architecture you know we spent a lot of 66:38 time dialing this in but now that you’ve 66:41 done that you can really compete with 66:42 the 800-pound gorilla in the space so 66:44 yeah because here’s the thing guys this 66:46 is what this I mean this is why I really 66:48 and fully endorsed this course because I 66:50 
couldn't create it; we couldn't create anything as in-depth as this. And Jeffrey is absolutely right: if your on-site, your on-page is done right, your structure's tight, your internal linking, your theme clustering or keyword clustering is done correctly, then you can rank with just a fraction of the off-page that your competitors have to do. Literally. I mean, yeah, it's incredible.

Oh, and sometimes you don't even need it, you know; it could just be from content marketing and syndication networks, and especially if you add like a drive stack to it. That kind of stuff, guys, it's crazy.

Yeah, I mean, I've always loved what you guys are doing, man; you guys are on the cutting edge. I just had my head down for the last few years, focused on healing after that motorcycle wreck, but I'm sort of the little nerd who's behind the scenes paying attention to things like this so that I can create tools that will just allow us to automate a lot of stuff even quicker, so that's really the focus. You know, I'm looking forward to rolling out some new things in SEO Ultimate Plus that are gonna allow for some really cool questions-and-answers schema markup that you can implement on the fly for your pages. That's the next thing. It's all about surfacing in those featured snippets, because they're technically at the top of any search result; if you can take that over, that's really gonna prevent a lot of people from having to worry about the 6-pack or anything else.

Yeah, you're right there. Not only that, but once you get those featured snippets you start getting a ton of traffic, and that traffic alone will keep you ranked.

Exactly, that's a signal. And you can do this once again targeting phrases that have very low competition. When we look back at Answer the Public... I'll just go back to that. This is an interesting challenge; it's like, hey, by the way, how would you do a webinar if you couldn't type? Well, thank God I have some tabs open here. It's interesting; so far I guess we're just winging it, but you can see my point that these are all those types of questions, and we know that people are asking this stuff: who are the largest auto insurance companies, you know. So there's a lot of opportunity here, and if you're pairing this with structured markup that's got the itemprop that's saying, you know, suggested question, suggested answer; and you're doing an in-depth article; you're linking out to the other authorities in the space to create co-citation (it's sort of like the "me too": hey, by the way, we're an authority too, so therefore we're considered in the same realm of influence); if you're using some kind of embedded content, either videos or tweets, things like that, that creates another strong on-page relevancy signal; and you're using the internal link structures, you know that if you've already selected the top of the silo term for that keyword, like auto insurance, you're gonna have all the modifiers rise up there. Once you start to rank for these long tails and mid tails, these are all equity, and all that equity bubbles up, so they're all gonna start ranking each other, and before you know it you've just created a ranking juggernaut by the way that you've built your site. So yeah, extremely powerful stuff.

I've got a couple questions here for you, man, and we're gonna wrap it up, and then we'll schedule with you separately, Jeffrey, for another encore presentation where we get it right.

Let's see, first one. This is from David; he says, does SEO Ultimate automate the generation of schema?

No... oh, yes, I'm sorry, yes.

Wow.

It doesn't auto-generate, but you can do it, so I'll show you how that works really quick. I should be able to go to a page or a post. What you'll notice is that over here under the general settings you have an option where you can show rich snippet type, and then if you have, like, article, then all of these elements will appear. So you have local business, for example; you can create local business markup that validates by simply adding all the fields here.

That's great. You know what, I didn't even realize... I just started using your framework again, man, and honestly I didn't realize that option was there. Organization...

So organization you want to use on your actual, like, who-we-are about page; for local businesses you would want to use your local business markup on your home page. If you're talking about the person who owns the business, you could actually include this, so if somebody googled their name, you have all the different metadata that's relevant to them, so you can show the right images, and you can share all their social properties. If you google, like, Obama for example, you'd see all of his information; that's the same type of markup they're using for person. So you can do that yourself; you can mark up your products, you can do recipes, reviews, software, etc. So we've got 13 types of structured markup that you can
implement that does validate in Google, from just simply adding that to this tab. You can also customize the output for your Open Graph data for Facebook or Twitter or Google+ independently, so you can actually split-test different variants and calls to action, and then use the image that you want. And once you set up your global settings over here inside of the Open Graph Plus module, you don't have to continue to add that at the page level; this will actually be your fallback option. You can even determine what Open Graph type you want for your business, so if you want to use website or product, or if it's a blog or whatever, you can do that here for your different post types. You could say, you know, my ads are this, my products are this type of markup, etc., and you can actually put your global defaults here as well for the different social properties such as Twitter and Google+. And then if you want, you can also go here and you can mass-edit up to a hundred at a time. We use an auto-update feature, so if I started to type something in, which I can't at this time, it would find the appropriate image; you can actually add your Open Graph data here and your title, and then it would allow you to essentially view or edit this page. So instead of doing it one at a time, you can actually mass-edit. So we're like that.

Very cool. All right, the next...

Yeah, that's enough to walk you guys through; I think we'll just go for the over-the-shoulder stuff on the next one.

I think that would be good for the next one. So just a couple more questions.

Sure. Don says, does the SEO Ultimate plugin and theme work on multisite WordPress?

For the plugin you'd have to use multiple licenses, one for each; we haven't really dialed that in yet, because the APIs are an issue. For example, if you have a multi-site installation, each installation essentially would have its own license. And I haven't really tried to use the framework on MU yet, so I can't say.

Yeah, I've never even used multisite WordPress, so I don't know; I can't speak on that at all. Paul says, can you get into the off-page work you do? Paul, we'll save that one for the encore when we do the new version with Jeffrey, so just keep that question, and since you're on the list for this one, you'll get the replay once we record the second one. Okay, and David, last one... well, this is just more of a comment; he says, by the way, I have a neighborhood index by state and city for the US, scraped from Zillow illegally, if anyone is interested. Thanks, David, appreciate that; we'll be outranking it in no time anyways.

Heck yeah. So that's it for questions, guys. Like I said, go check out the offer if you're ready for that now; if not, wait and we'll send out the notification when we schedule the next webinar, where you can see more of what Jeffrey has to offer. I can tell you we fully endorse his training and these products; they're really, really good. And again, we couldn't have created any better of an SEO course than he's created, that's no doubt; I was blown away with the level of detail, so I highly encourage you to pick it up. And we're gonna throw in... well, you saw the bonuses that he threw in, which is crazy; an hour of his time for the first ten is just amazing, that in itself is worth the cost. But also we're gonna throw in Content Kingpin, because you're gonna need content once you build these badass silos.

Well, thank you so much, guys, and I really appreciate your time. I guess with Murphy's Law, if it can happen, it will, so we learned the hard way; I will be on that call a half an hour in advance with everything ready to go, man.

Well, we appreciate you, man, and definitely we'll hook it up with you and we'll get it straight in the next week or so.

All right, thank you so much.

All right, guys, thanks, everybody. Bye, take care. Goodbye.
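For reference, the two on-page techniques discussed above map to standard markup. The "noindex, follow" page is an ordinary robots meta directive, and the suggested-question/suggested-answer markup the speakers describe corresponds to the schema.org Question/Answer vocabulary. The speakers mention the `itemprop` (microdata) form; the JSON-LD form below is an equivalent, commonly used serialization, and the question and answer text are placeholders taken from the example phrases in the webinar, not real content:

```html
<!-- A page that passes link equity but stays out of the index -->
<meta name="robots" content="noindex,follow">

<!-- Q&A markup in the schema.org vocabulary (JSON-LD); values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Who are the largest auto insurance companies?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Example answer text goes here."
    }
  }]
}
</script>
```

Whether any given page is eligible for rich results is ultimately up to the search engine, so treat this as a sketch of the markup shape rather than a guarantee of a featured snippet.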
The blog post On Page SEO with Jeffrey Smith is courtesy of: https://www.moonandowl.com/
13 Best Ways to Learn CSS Grid
CSS Grid is an increasingly popular method for creating complex responsive web design layouts that render more consistently across browsers. Now is the time to familiarize yourself with CSS Grid, so we’ve collected 13 of the best ways to get started learning today.
Compared with old-school methods such as tables or the box model, CSS Grid allows you to create more asymmetrical layouts and more standardized code that is cross-browser compatible. Most major web browsers already support CSS Grid, and it is a W3C Candidate Recommendation, which would formalize it as a standard practice. It’s widely believed that CSS Grid is the future of website layouts.
1. MDN: CSS Grid Layout

Mozilla has great resources in the MDN Web Docs guides, providing simple explanations of how things work and code examples to get you started.
Here’s what MDN says about CSS grid:
CSS Grid Layout excels at dividing a page into major regions, or defining the relationship in terms of size, position, and layer, between parts of a control built from HTML primitives.
Like tables, grid layout enables an author to align elements into columns and rows. However, many more layouts are either possible or easier with CSS grid than they were with tables. For example, a grid container’s child elements could position themselves so they actually overlap and layer, similar to CSS positioned elements.
The documentation offers code and examples plus elements that you can open and play with on your own in CodePen or JSFiddle. This might be the best starting place in terms of thinking about CSS grid.
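To make the MDN description above concrete, here is a minimal, self-contained sketch of a grid that aligns items into columns and rows and lets two items overlap and layer. The class names are ours, purely illustrative:

```html
<!-- A 3-column, 2-row grid; .overlay is placed on top of .hero to show layering -->
<style>
  .wrapper {
    display: grid;
    grid-template-columns: 1fr 1fr 1fr; /* three equal columns */
    grid-template-rows: 100px 100px;    /* two fixed rows */
    gap: 10px;
  }
  /* both items occupy row 1 and share column 2, so they overlap */
  .hero    { grid-column: 1 / 3; grid-row: 1; background: #cde; }
  .overlay { grid-column: 2 / 4; grid-row: 1; background: rgba(250, 200, 0, .6); z-index: 1; }
</style>
<div class="wrapper">
  <div class="hero">Hero</div>
  <div class="overlay">Overlaps the hero</div>
  <div>Item 3</div>
  <div>Item 4</div>
</div>
```

Paste this into CodePen or JSFiddle, as the MDN docs suggest, and experiment with the line numbers in `grid-column` to see the overlap move around.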
2. Learn CSS Grid

Learn CSS Grid is a guide to learning the technique from Jonathan Suh, based on the method in which he learned it. The guide is well-organized and starts with a table of contents that allows you to jump from section to section.
This guide is easy to follow – some coding knowledge required – and is a great resource for CSS grid beginners. Each element comes with a visual example, which might be the key piece in bringing all the information together.
3. Tuts+ Guide to CSS Grids
Tuts+ has built a complete guide to help you learn CSS Grid, whether you’re just getting started with the basics or you want to explore more advanced CSS. It’s done through a series of clear and thorough tutorials, with practical examples throughout.
4. Codecademy: Introduction to Grids
Codecademy has long been one of the best places to learn coding skills in a practical format. While you do have to create an account to access the tutorials, it is a great way to learn using a lesson-plan based format.
Here’s the description of the Introduction to Grids course:
In this lesson, we introduce a new, powerful tool called CSS grid. The grid can be used to lay out entire web pages. Whereas Flexbox is mostly useful for positioning items in a one-dimensional layout, CSS grid is most useful for two-dimensional layouts, providing many tools for aligning and moving elements across both rows and columns.
Codecademy courses can be taken in sequence – start at the very beginning if you are completely new to web design – or on their own. It’s free to create an account and take many of the courses.
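The one-dimensional vs. two-dimensional distinction in that course description is easiest to see side by side. A minimal sketch (class names are ours):

```html
<style>
  /* Flexbox: items flow along a single axis (one dimension) */
  .flex-row { display: flex; gap: 8px; }

  /* Grid: items are placed across rows AND columns (two dimensions) */
  .grid-2d {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-template-rows: repeat(2, 80px);
    gap: 8px;
  }
</style>
<ul class="flex-row"><li>1</li><li>2</li><li>3</li></ul>
<div class="grid-2d">
  <div>A</div><div>B</div><div>C</div>
  <div>D</div><div>E</div><div>F</div>
</div>
```

The flex list only ever flows along its row; the grid places items into a fixed row-and-column structure.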
5. FreeCodeCamp: Learn CSS Grid in 5 Minutes

Want to get what CSS Grid is about but only have a few minutes to dive into it? This quick-start tutorial from FreeCodeCamp will help you get familiar with it in just five minutes. (Granted, you need to know some basics already.)
Here’s the takeaway: “The two core ingredients of a CSS Grid are the wrapper (parent) and the items (children). The wrapper is the actual grid and the items are the content inside the grid.”
The 5-minute guide also includes relevant markup.
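That takeaway translates directly into markup. A minimal sketch of the wrapper/items idea (class names are ours):

```html
<style>
  /* the wrapper (parent) is the actual grid */
  .wrapper {
    display: grid;
    grid-template-columns: 200px 1fr; /* fixed sidebar column + flexible column */
  }
</style>
<div class="wrapper">
  <div>item (child)</div>
  <div>item (child)</div>
</div>
```

Everything else in CSS Grid — placement, gaps, areas — builds on this parent/child relationship.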
6. The CSS Layout Workshop
The CSS Layout Workshop is a set of paid courses from Rachel Andrew, one of the leaders in CSS grid work. The courses are an online, self-study program that is ideal if you like more structure when learning something new.
To see if this course is right for you, the first part is free. It focuses on CSS basics and explains all the basics you need to go deeper into the material. The good thing about the complete set of courses is there aren’t any additional costs; you just need a web browser and text editor to get started.
7. Game: Grid Garden

Grid Garden is a game that uses CSS to grow a successful carrot garden. It’s a good primer on how CSS properties work to get you in the right mindset to think about CSS grid.
And well, it’s a lot of fun. See if you can make it through all 28 levels.
8. Grid by Example

Grid by Example shows how different CSS grid configurations will look in supporting browsers. Each grid configuration includes a visual example with links to pages with more information about the technique and the code.
There’s a fun bonus as well with dummy page layouts so you can see how different CSS grid examples look with real content applied to them.
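Dummy page layouts like those typically rest on `grid-template-areas`, which lets you name regions of the page and draw the layout as ASCII art in the stylesheet. A sketch of a classic header/sidebar/content/footer page (the area names are ours):

```html
<style>
  .page {
    display: grid;
    grid-template-areas:
      "header  header"
      "sidebar content"
      "footer  footer";
    grid-template-columns: 220px 1fr; /* fixed sidebar, flexible content */
    grid-template-rows: auto 1fr auto;
    min-height: 100vh;
  }
  header { grid-area: header; }
  nav    { grid-area: sidebar; }
  main   { grid-area: content; }
  footer { grid-area: footer; }
</style>
<div class="page">
  <header>Header</header>
  <nav>Sidebar</nav>
  <main>Content</main>
  <footer>Footer</footer>
</div>
```

Changing the layout is then just a matter of redrawing the `grid-template-areas` strings, without touching the HTML.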
9. Video: Learn the CSS Grid

If learning via seeing someone do something is more up your alley, watch the Learn the CSS Grid video. (It also has accompanying text.)
The 18-minute video is a quick starter tutorial that the maker hopes will “facilitate your eagerness to explore the full potential of the CSS grid.”
The video takes you through setting up a project, defining the HTML, defining some basic rules, defining grids, nesting the CSS grid and template areas and a few responsive tricks. The video and text include screenshots of all the markup.
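The video's exact markup isn't reproduced here, but one widely used responsive trick of the kind it covers is `repeat(auto-fit, minmax())`, which reflows columns as the viewport changes without any media queries. A hedged sketch (class names are ours):

```html
<style>
  .cards {
    display: grid;
    /* fit as many 250px-minimum columns as the container allows;
       each column stretches to share the leftover space */
    grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
    gap: 16px;
  }
</style>
<div class="cards">
  <article>Card 1</article>
  <article>Card 2</article>
  <article>Card 3</article>
</div>
```

Resize the browser and the cards wrap from three columns to two to one automatically.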
10. CSS-Tricks: A Complete Guide to Grid

CSS-Tricks has long been one of the go-to places to learn about coding. The Complete Guide to Grid is no exception. The guide, which was published in November 2017, is an up-to-date primer on the 2D system.
This guide is fantastic and broken into digestible sections.
But the best part might be the gallery of CSS grid in action. Make sure to spend some time in the gallery for inspiration.
11. CSS Grid Ask Me Anything
Have questions about CSS grid? Ask an expert.
This Git includes simple questions and answers from Rachel Andrew. While she only answers reduced questions, it is still pretty helpful. Just make sure to follow the ground rules.
12. Video: Progressing Our Layouts

Jen Simmons shares her talk from the 2016 Enhance Conference. The 30-minute video walks through examples of CSS grid in action and she touches on how to write code in a time of transitioning techniques.
Watch the video and then you can also go through the slide deck as well. The pair is pretty useful for thinking about CSS grid in broader terms.
13. CSS Grid Playground

Ready to test all those CSS grid ideas and skills? The CSS Grid Playground includes grid container and grid items locations so you can start coding and see what happens in real time.
It’s a good way to get comfortable with the basics and see changes as you make them. (It almost feels like a game.)
Conclusion
Stop procrastinating if you haven’t taken the leap into at least familiarizing yourself with CSS grid. Make it a goal to get comfortable with best practices so that you can better understand how it works and how to use it.
Here’s a shout out to all the folks out there who are providing great resources about CSS grid. You have to admit the web design and development community is pretty amazing when it comes to knowledge sharing. If you use one of these lessons or tutorials and get a benefit from it, make sure to share the love with the author and let him or her know they’ve helped you.
ART OF THE CUT – editing “Lost City of Z”
Editor John Axelrad, ACE, has been in post-production since 1991, including a stint as Oscar® winner Anne Coates’ assistant on Out of Sight and Erin Brockovich (I interviewed Coates previously here). He’s edited dozens of features including Slither, We Own the Night, Two Lovers, Crazy Heart, Something Borrowed, Miles Ahead, and Krampus. Editor Lee Haugen started as an apprentice under Axelrad and then cut several films including Repentance, Dope, and Miss Stevens before joining Axelrad again to cut Amazon Studios’ new theatrical release, The Lost City of Z.
L-R: John Axelrad, ACE; 1st AE Scott Morris; and Lee Haugen.
HULLFISH: Tell me a little bit about working on a movie with two editors. How did you guys work that collaboration out between the two of you?
AXELRAD: Well it was kind of a natural process to have the two of us cutting. I was hired as the editor and I had been co-editing with another one of my former assistants, Kayla M. Emter, who’s now cutting on her own. It was such a liberating experience for me to be co-editing with somebody. We did Miles Ahead together and we did James Gray’s previous movie, The Immigrant, and I just found that experience to be creatively fluid. It allowed us to really explore the film from different points of view and to really enhance the creative process of putting the film together, because when you’re cutting with someone and you’re bouncing ideas off each other, I think it is to the benefit of the movie. It’s not just one perspective of how something can be done. And my favorite part of the process personally is working with the director and trying out ideas: trying to look at the entire structure of the movie; making sure that character arcs are working; that the film itself is well paced; that the characters are alive and vibrant. Lee was also a former assistant of mine, and I saw an opportunity when Kayla was unable to work with us on The Lost City of Z. Lee actually started the film first, because I was finishing up Krampus, so I couldn’t start on the dailies right away. So Lee actually went to Northern Ireland and started the assembly process.
HAUGEN: It was my first time working in the two editor system as well. John has been so gracious to allow creative contributions from all positions. He was the first person to have given me an opportunity in feature films in 2008 on James Gray’s Two Lovers. Our paths just didn’t line up after that. Two Lovers was such an eye opening experience of how to make a film. Working with James and seeing his process and working with John and how they interact together. It was the best learning experience I could ever have and it was fun to be reunited again eight years later.
AXELRAD: It definitely was.
HAUGEN: To have the whole crew, except for Tom… Tom was busy.
AXELRAD: Tom Cross was another assistant of mine. We did five films together. I was not in a career position yet to share editing credit with him, although he did get additional editing credit on everything. But I really wanted to make sure Tom got the exposure working with directors and being able to sit in the chair and collaborate. I like having a team. I’ve got three guys around me that we can bounce ideas off of. It just speeds up the process of editing because different ideas are flying back and forth at a much faster rate. Lee edited the movie Dope, which won him the Best Editing award at Sundance. That was two years ago?
HAUGEN: Yeah, two years ago.
HULLFISH: Congratulations
HAUGEN: Thank you
AXELRAD: So it wasn’t that Lee just fell off a truck. He established himself as a successful editor.
HULLFISH: I didn’t think that was the case. So I’m assuming you’re cutting in AVID. Are you using a Nexis, or how are you collaborating?
AXELRAD: Yeah, Avid shared storage on Nexis. We have a good shorthand we developed together. If one of us does changes or does something, we use markers in Avid so I know what he’s done and he knows what I’ve done. And we just kind of develop a process as we go. So much obviously depends on how the director likes to work.
HULLFISH: Tell me a little bit about versioning. I talk to editors all the time about organization. There’s all the organization of doing versions which is compounded when you have two editors. Do you have naming conventions that you’re using? Or bin conventions to keep that stuff straight?
AXELRAD: If it’s dailies we simply divide up the scenes. Lee would be editing certain scenes and I would be editing certain scenes. Sometimes, I’ll take Lee’s scene that he’s edited and I’ll work on it after he’s done. I’ll give him a scene that I did and let him play with it and come up with something else. Then we cut into the reel the version of the scene that we feel is best. We’re obviously editing out of order, so it’s always a guessing game when you’re cutting along with dailies.
Avid works very well with shared projects and shared storage. So only one of us has access to a bin at one time, and if we’re dividing a show into reels in the case of The Lost City of Z or in the case in the film we’re editing now, which is Papillon. At one point we had seven reels. We would just call back and forth to each other, like “Hey can I get reel 1? Are you working on it?” It really just comes down to communication. Before we attack something, we get a series of notes from the director and we do a little pow-wow. And we come up with a gameplan of “okay you’ll work on this and I’ll work on this.” Then after that we show it to each other and give each other notes on the edits.
HAUGEN: John is probably the most organized editor I’ve ever met.
AXELRAD: I’m a little anal retentive. I think that is what he’s trying to say.
HAUGEN: Organized is a nicer word.
HULLFISH: Do you both organize your bins differently so you had to use different bins? Or did you come up with a mutually agreed bin layout and organization?
AXELRAD: I confess that I made him conform to how I like to do it.
HAUGEN: That’s the way I learned to organize the bins from John back on Two Lovers. I adapted that into my own workflow as an editor after that. So coming back to it wasn’t really that much of a stretch. We organize each dailies bin by scene, in script order and frame view. And when we do the first cut, we leave that editor’s version in the bin. And if we continue to do multiple editor versions, we leave them in the scene bins and then we’ll also add them to the reel as we start to build the show. That way we always have a back up of what we originally did so we can reference them.
AXELRAD: That’s an advantage of co-editing with people that have worked with me before in an assistant capacity. I can imprint a certain system so we already have a shorthand going in. I imagine it would be different for me if I joined a show – like I know Tom Cross was editing on Joy with multiple editors. These are people he hadn’t worked with before and they’re all established and I’m sure they have different methods. So I would say that that’s much more complicated if they have different ways of working.
HAUGEN: It’s a good system. I changed things I as I grew, but I think we worked out a good solution.
AXELRAD: Communication is key.
HULLFISH: Are either one of you guys selects reel guys? Or is it always working straight out of the bins – straight off the dailies of the clips themselves?
AXELRAD: It depends on the scene. I’m a huge, huge fan of ScriptSync. For dialogue scenes it is invaluable to me. Often times it is very time consuming for the assistants to do the ScriptSyncing. So if I need to edit a scene right away I’ll just dive in, but ideally I could be working on something while they’re ScriptSyncing a scene. I like to watch the dailies first, mark everything up, mark the takes within the takes, and mark moments and things that I like. Making sure the system, which is kind of a color-coded system of markers, will help my assistant do the ScriptSyncing. And once that’s done, we dive into it. Then we’re able to edit and compare performances. If it’s an action scene with action beats, then I do like to do a selects reel. I’ll go through and mark everything up and maybe break the action scene down into beats, and just cut together the beats that I think are the best moments. And then, from that selects reel, I’ll do the assembly.
HULLFISH: I will pull a selects reel but I will do the selects reel in sections, beats in other words. I don’t do every line or every exact moment. A lot of times if there’s a blocking change in a scene, I’ll say, “Okay here’s where they’re standing up against the window” and then it’s all the shots when they move to the other side of the room. And then one character moves over to the couch and sits down and then I’ll break all that stuff down. Is that kind of what you’re talking about?
AXELRAD: That’s what I mean when I say in terms of beats, and it depends if it’s an action sequence. What I usually will do is make select reels based on character or camera angle. So I’ll say, “Okay, these are all the selects from this character” and they’d be – for example – in the order of a fight scene. Therefore when I’m editing, I instinctually say, “Okay, now I feel like I need to go to this other character” and then I have all the selects for that part of the scene in a separate sequence. It’s different for a scene with dialogue, which may have less “beats,” but I think the method you’re suggesting of how you organize is the same.
HULLFISH: Then I do my bins the same way: organizing bins the way that I imagine the edit might go. So if I know that there’s a big jib shot or something that begins a scene, the director probably wants that to be the first thing. So even if that’s one setup, I’ll put that at the top of my bin. And then normally wide shots. Then all the two-shots together. I do them visually, not necessarily in the order that they were shot, but in the order that answers: if I’m looking for a specific visual, where can I find it the easiest?
This is Axelrad’s method of color coding locators; colors have different uses in the source and record monitors.
AXELRAD: And that’s exactly what we do. We do it in frame mode, and kind of in the visual order of where they occur in the scene. I think it’s easy to scroll down and say “well I’m done with the stuff up top because we’ve already been through that and the coverage doesn’t continue”.
HULLFISH: And then if you’ve got pickups or something you know that those pickups are not going to be at the top but further down or wherever they belong.
Axelrad: I feel like I can’t really get into a scene until I’ve really gone through all the footage and marked it up. It’s like I can’t make my bed until I’ve cleaned and organized everything in my bedroom first.
HULLFISH: So tell me your process, Lee.
HAUGEN: I do sit down and watch the dailies. But I first start with the wide shots so I get a layout of what the scene is going to do and where it’s going. Mentally I can prepare as I watch the rest of the dailies. I can envision how the scene is going to play out in my mind. Then, as I go through and watch all the dailies, I will sometimes pick selects of things that I really like and I’ll just put them into the sequence. And then I assemble the first rough pass very quickly, because I do want to see it as a whole. That gives me more of a grasp on where the scene is going to go and what the director is really going for. A lot of the time I will set that scene aside and revisit it the next day, because I do like to walk away, especially if I’m frustrated with a scene or it’s just not fitting together easily. Seeing it with fresh eyes is a good solution for me. That way, I don’t waste too much time fighting with myself during dailies.
AXELRAD: On The Lost City of Z, I did not participate in the assembly since I was finishing up Krampus, so Lee was in Northern Ireland during all of principal photography. A 50-day shoot or something?
HAUGEN: 55 I think.
AXELRAD: 55-day shoot, and Lee put together a 4-hour assembly, and he was caught up to camera. And I don’t know how he did that. I mean it was a good assembly; the problem is that it was 4 hours long. What I struggle with the most is going through dailies. There’s the perfectionist thing in me that I can’t even move on to the next scene until I feel like I polished it. I think that can be a handicap, because in dailies, the purpose is to make sure that there’s not a major problem that your director needs to know about or damaged film or that they don’t have a scene they think they have. So it is a handicap being a perfectionist in that phase. That’s why, for me, I’ve found so much creative energy working with another editor, because we can stay efficient together. It’s a very symbiotic thing that we have going.
HULLFISH: So it was a 55 day shoot. Then what happened? How long did you guys go from there?
AXELRAD: Well, for James Gray that’s when it all begins. Every director is different, but James is very much, “Okay, let’s start with scene one, and perfect scene one before we go on to scene two.” So, I learned very quickly, having done three films previous with James, you can put all this sweat and blood and tears into the assembly and it doesn’t matter. Because James wants to rediscover all the footage for himself with you in the room. And to his credit, he’s there with you twelve hours a day and you’re reviewing and he’s watching the paint dry as we edit. But that helps his process and he’s seeing how things go together. That’s where Avid ScriptSync comes in so handy, especially for someone like James, who’s very, very performance-based and wants to be able to compare performances. In the case of The Lost City of Z that’s where the work begins. What Lee did with his assembly was a wonderful blueprint. We were able to watch the whole thing together and kind of know ahead of time, “Okay we’re gonna have some trouble when we get to the second act.” It just puts it in the back of your head. Some directors just like to work on the overall structure first, but working with James he likes to paint the fence one panel at a time before he steps back and looks at the whole fence.
HULLFISH: Martin Scorsese has the quote you may have heard that “No movie is ever as good as the dailies, or ever as bad as the first cut.” What’s the value of doing that first cut, then? What’s the value of putting it together at four hours and saying, ”Okay now we don’t have a movie”?
HAUGEN: I think it’s very important to watch the editor’s assembly because it does lay out the blueprint. It also gives you a rough idea of things to look out for on the first pass of the director’s cut. I know in The Lost City of Z, the jungle is a large portion of the film. And it became Charlie Hunnam and Robert Pattinson for a long period of time in that four-hour cut. After viewing we realized we needed to keep Nina, Percy’s wife, more present. So when we were in the jungle we used her voice to read some poetry, which kept her character alive.
AXELRAD: There’s a reason they call it the ‘assembly.’ At first I was offended when I started out. I said, “What do you mean assembly? This is the editor’s cut.” But it really is an assembly because your responsibility is to put things together, which you have to do quickly. You have to do it out of order, based on the order they shoot the scenes. As the editor, I would say during the shooting process it’s 50% politics and 50% editing. Because it is your responsibility to let people know if there’s a problem. And you have to really dance a very fine line: knowing to speak with the right people and not get people panicked. But oftentimes there’s a problem with the film. In the case of The Lost City of Z, they shot on 35mm film and there was a lot of film damage from the humidity and the jungle heat and things like that. So these are the things that – if they simply just shot it and didn’t look at the footage until shooting was over – there would be some unpleasant surprises. So it’s kind of the purpose of an assembly to make sure they have what they need to start to work when shooting is over.
HULLFISH: Great point. The other thing with an assembly: calling it the editor’s cut is kinda funny, because it’s a cut where you have to kinda set your ego aside a little bit. Because I look at a scene and I think, “This scene’s never gonna make it into the movie, so I might as well just cut it out.” or “These three lines that start this scene are never going to make it, so let’s just start here.” But in an assembly you can’t, right? You have to do it by the script. So it’s not really the “editor’s cut” because if it was the editor’s cut, you would have cut those scenes out and you would’ve cut those lines out.
AXELRAD: Exactly. If editors had more time, I’d first deliver the “assembly,” and then I’d trim it down and deliver an “editor’s cut.” It really depends on who you’re working for, because I always felt obligated to include everything. And when I started working with James a few times he said, “You know, you don’t have to include everything I shoot.” So it depends on the needs of your director. But I’d say the general consensus among editors is that you don’t purposely remove dialogue or scenes in an assembly just because you don’t think they’ll make the final cut. You have to swallow your pride at this stage, because the intent of the assembly is not to show what you would do, but rather to have the whole film laid out for you and the director to have a starting point from which to work.
HAUGEN: It’s true even if one dialogue scene is ten minutes long, which we had a couple of those, and I thought, “Wow, that’s a long scene.” But there’s really great moments in it and you can just tell it’s going to be really great once we tighten it up.
AXELRAD: It’s also helpful to reference back to the assembly. Working with James, we’ll edit on the scene for weeks and weeks and change it every which way. And then he’ll say “you know, let’s look at the assembly for that scene again,” which basically has everything in it and the kitchen sink. And it’s just a way to hit “restart” if we’re working a scene and beating it down and it’s become too contorted to a way that’s not working. Watching the full assembly of that scene again may make a few lightbulbs go off to how to approach it differently.
HULLFISH: Tell me a little bit about your feelings on temp music?
HAUGEN: That’s an easy question for a James Gray film, because he has very specific music that he wants to put in his films. And he does so much research ahead of time that we do not temp during the assembly. And we work the scene with James first before we put in music. We want to make sure that it is working and that it is functioning without the help of music. Then as we start to build the show, we add music.
AXELRAD: Personally I love editing temp music and sound effects. I love doing that sound design as a picture editor. I always joke that I’m a frustrated music and sound editor working as a picture editor. But working with James, it’s absolutely true, sometimes music during early stages of a cut can be a band-aid to the point where it is detrimental to the structure of the scene. Among the directors I’ve worked with, I’d say maybe a third of them say absolutely no temp music at the start, because they really want to see if a scene is true to itself and not being masked by music. But it also depends on the genre. When I edited Krampus, the expectation was to thoroughly do a temporary sound design and work with temp music. For a horror-comedy it’s so critical to the genre. Obviously we’re editing first before we include the music, but I think you can really know if something is working after you’ve cut it and watched it, and then experiment with music. Sometimes I have found the structure of the music perfectly fits the edit that you did. Or sometimes it’s a little off and you might adjust the picture edit to the music, because there is a rhythm to editing. If you find the right piece of temp music, you kind of know when you’ve got that connection when the two are working together in harmony.
HULLFISH: David Woo was telling me he cut a motorboat chase scene to Steppenwolf’s “Born to Be Wild”… you know… “Get your motor runnin’, head out on the highway…” Then he stripped that completely away and the final audio track was just the sound of the motorboats. But the song gave him his rhythm and impetus.
AXELRAD: It depends on what you’re editing. With an action sequence I would say “why not edit with music?” When you’re doing a big drama scene, I always say dialogue first– dialogue should be pacing it. For something like “Crazy Heart” and “Rudderless,” I often let the music guide the edit.
HAUGEN: When I did Dope it was very music heavy. The main characters are in a band and they had four original songs. So those songs were what I pulled from to find the rhythm and the pace of the film: that energy and that pulse within the music they were creating.
HULLFISH: Talk to me a little bit about sound design on Lost City. You were just saying that you’re a frustrated sound editor dressed as a picture editor. Tell me about what you use. Is it all production tracks? Do you have a big sound effects library you like to use? Are you using a lot of like pad room tone and stuff to make things smooth over?
AXELRAD: To start with, there’s the production sound. I always like to talk with the sound recordist before the shoot starts. Make sure that we’re on the same page. I think it’s imperative that we work with the full eight tracks of the WAV file. Lee can attest that I’m very fussy with dialogue editing. And you know the first step is making sure the dialogue is smooth. And I like being able to access all eight tracks of the poly WAV file. Not that I’m using all eight tracks, but I like to be able to choose between them to find the best track for dialogue, whether it’s the lav or the boom. Or if I need ambience or fill from what he or she records. So the first step is to make sure that the production tracks, technically, are smooth and synced and no drop-outs. And then we start talking sound design.
HAUGEN: And we mostly pull from a large library that I received from John back in 2008.
AXELRAD: It’s many Terabytes big and many, many sound libraries and custom designs.
HAUGEN: We monitor LCR (left-center-right, as opposed to plain stereo). We fill out everything; every scene has stereo, even if it’s just room tone. In The Lost City of Z it was a lot of jungle, and we tried to change it up to help with change of location. Give you a different feel from scene to scene as they were traveling throughout the jungle.
HULLFISH: In Avid, how are you achieving left, center, right? As direct outs? How are you monitoring that?
AXELRAD: We do direct out. I know some people that use a digital interface to do that, I think Tom has a digital interface to do that. But we’re still accustomed to the Mackie Mixer and so we have all 8-channel direct output. So it’s analog but we designate the first four channels as mono (to the center speaker) and the second four channels as stereo (left and right speakers). Editing 3.1 is just a fuller experience, especially when you preview the movie in a theater, to have the dialogue coming out from the center speaker and then you’ve got your stereo music and effects coming out left-right. It’s funny because when I go to New York to edit they just don’t do that. They edit stereo. I think LCR is maybe more of an L.A. thing. And I even know a lot of people that are editing in 5.1 out of the Avid, which it’s very capable of doing.
HULLFISH: I just talked to somebody who was doing that.
AXELRAD: I think like on a big show, if you’re doing a Marvel movie, you want to be editing in 5.1 or 7.1. For a James Gray movie I think editing 5.1 would be overkill, at least at this stage. We’re expected to do a preliminary sound design, but we don’t have time to mix 5.1. They usually don’t hire the sound supervisor until later in the process – usually in the middle the director’s cut. You’re expected to have some sort of sound design blueprint in the movie and I do enjoy working on that—including temp music. I know it’s a sensitive union issue because a lot of music editors get upset: why is a picture editor doing music editing? And I do know a lot of picture editors who respectfully say “that’s not my job.” They don’t want to do it, because they don’t want to take away work from a music editor or from the sound designer. I do it because I enjoy doing it, and that’s always my defense when I get criticized for doing it. Once you know the edit is working, sound design is really the polish you need to really sell it.
HULLFISH: I agree. Let’s talk about the overall or story pacing of the movie.
HAUGEN: This is an epic film, and starting with a four hour assembly, it was very challenging to get it down to the run time that it is, which I think is 2:20 with credits.
AXELRAD: We first started by cutting out every other frame to get it down to two hours.
HAUGEN: For some reason it didn’t work.
AXELRAD: Nah, that didn’t work.
HAUGEN: But as we kept cutting it down and focusing the story more we found the most difficult part was the beginning of the film. Trying to get to the jungle in an efficient amount of time, because that is where the film really takes off.
Avid screenshot of the timeline and partial locator list, indicating VFX, ADR and other items.
AXELRAD: And it really depends on the director you’re working with. We’re editing Papillon right now with director Michael Noer, and he comes from more of a documentary background. So his approach is to look at the whole structure. So we constantly are watching the whole movie. We’re just working first on overall structure and arc and looking at overall pacing before we start getting into scene by scene micro-pacing. With James Gray it’s the opposite. He’s a very linear thinker and he wants to go scene by scene by scene. So each scene, with James, has its own internal pacing that works. And then once we get through that process – which takes many, many weeks – then we sit back and look at the whole movie and say “wow okay, now we have an overall pacing issue.” These individual scenes work by themselves, but now let’s look at the big picture and fix that. So, as editors, we have to be adaptable depending on how a director likes to work. For us we can’t rest until the pacing of the entire film is working, and that’s when the tough choices come about: what to cut? what to shorten? what to omit and what to put back in?
HULLFISH: That’s an interesting point: you take some stuff out and it seems to work for a while, and then you decide, “I think we need that scene back.”
AXELRAD: That happens a lot.
HULLFISH: Were there any scenes like that in this movie? Can you describe why that might have happened? Why a scene that seemed like a good idea to take out ended up going back in? What was the purpose of that?
AXELRAD: There’s two different levels to that. You’ve got experimentation where we say, “Hey, we don’t need this let’s take it out” and we kind of retroactively change the story through loop (ADR) lines and through editorial sleight of hand. And so we try to do that a lot to help streamline things, to help change the flow of the story where we feel that there’s a problem. Sometimes it doesn’t work or sometimes the narrative gap is too big and then we’ll abandon that idea and restore something.
HAUGEN: There’s one sequence where we did take out scenes and put scenes back a couple of times. It was the World War I sequence; there were three or four main scenes. And at one point we wanted to make it all about his character, Percy Fawcett, and about him being a great leader in the war. And when we sat back and watched it, we didn’t think it flowed with his story. He’s striving to get back to the jungle – to get back to what he’s really deep down inside looking for. And so we did swap out some scenes to make sure his obsession came through.
AXELRAD: I can remember over the course of the films I’ve cut, and even what we’re editing right now, we’ve taken scenes out, deciding, “We don’t need this, we can tell the story more efficiently doing it this way.” And then we screen it for people and things bump for them. Then we say, “Okay well maybe what we thought would be a better way to tell the story is causing some character interpretation problems.” Even though it’s narratively efficient, character-wise it’s confusing. So we undo something that we thought we were very clever to do, and we restored a scene that we cut out. Even though, narrative-wise, it drags a little bit, it’s essential for understanding the characters and their relationships with each other.
HULLFISH: Let’s talk about crafting performance from actors. I’m sure you’re working with very talented actors, but they’re not working in context sometimes.
HAUGEN: James is great at getting different levels of performance from the actors, which is a blessing for editors. That way we can help tell the story as best we can. There’s one scene where an actor goes from being this stern person that keeps all his emotions inside and ends up getting up and yelling at another character. We experimented with many different ways to see which performance fit better: whether he should yell and storm out of the room, or be calm and cool and just seethe, showing his anger through his expression. We stuck with the internal performance.
AXELRAD: I think it’s the adage of “less is more.” Sometimes outward emotion or yelling and screaming may be a tendency for an actor. But I think you can craft a performance around what is not said, the unspoken word, and just through nuance and facial expression. And in the case of The Lost City of Z, there are a couple of instances where we can say much more by saying less. And that really was by removing dialogue from a scene. I know in Two Lovers, which we worked on together, there were whole sequences that just weren’t resonating with people on an emotional level. So we simply removed the dialogue and made this montage out of it, set to the right music, and it really brought sympathy to the characters. A famous case of this is in Raging Bull, where there’s a scene where they’re not speaking – but if you look at it, Scorsese just simply removed their dialogue. Their mouths are still moving, yet you don’t notice it because you are emotionally invested in the scene. You could imagine that scene with dialogue being a far different scene emotionally than without. And so those are some of the things we do to craft and hone performance.
This was fun to construct, editorially speaking. The images from the church of the baptism were from a deleted scene that was to open the movie. During the editing process, we wanted the aftermath of Percy stopping the arrow with his book to be more spiritual, so we cannibalized footage from this other scene. This was a great example of building an emotional moment in the editing process that was not intended from the original shoot.
HULLFISH: Thank you so much for your time guys. Good luck on your film.
AXELRAD: Thanks so much, Steve. It was a pleasure speaking with you. We love to talk about the craft of film editing.
HAUGEN: It was great speaking with you, Steve. Thank you so much.
To read more interviews in the Art of the Cut series, check out THIS LINK and follow me on Twitter @stevehullfish
The first 50 Art of the Cut interviews have been curated into a book, “Art of the Cut: Conversations with Film and TV editors.” The book is not merely a collection of interviews, but was edited into topics that read like a massive, virtual roundtable discussion of some of the most important topics to editors everywhere: storytelling, pacing, rhythm, collaboration with directors, approach to a scene and more.
Thanks to Moviola’s Todd Peterson and Evan O’Connor for transcribing this interview.
The post ART OF THE CUT – editing “Lost City of Z” appeared first on ProVideo Coalition.