Text
Eid-el-Kabir: Governor Makinde Urges Oyo Residents to Remain Vigilant During Celebrations
Governor Seyi Makinde of Oyo State has appealed to the citizens to stay vigilant and maintain heightened security awareness, particularly during festive periods. This call was part of the governor’s Eid-el-Kabir goodwill message. The announcement was conveyed through his Chief Press Secretary, Dr. Sulaimon Olanrewaju. The statement was made available to LMSINT MEDIA on Thursday night. In the…
#Content is structured using semantic HTML headings #Inline links optimized for mobile viewing #No use of duplicate tags or bloated formatting #Readability and load performance ensured for fast AMP delivery
0 notes
Text
Fixing 'Crawled - Currently Not Indexed' Errors in Google Search Console
Dealing with pages that Google crawls but doesn't add to search results? This comprehensive guide from RapidURLIndexer explains how to diagnose and fix the 'Crawled - Currently Not Indexed' error in Google Search Console.
What Is This Error?
The 'Crawled - Currently Not Indexed' status appears in Google Search Console when Google has visited your page but decided against including it in search results. Unlike the 'Discovered - Currently Not Indexed' status (where Google has only found links to your page), this error indicates that Google has evaluated your content and found it lacking in some way.
This decision is based on various factors including content quality, uniqueness, site structure, and technical aspects of your website. When Google encounters this error, it's essentially saying, "We've seen this page, but don't think it adds enough value to include in our search index."
Why Is Google Not Indexing Your Pages?
Several common issues can trigger this indexing problem:
Content that's thin, low-quality, or doesn't provide unique value
Duplicate or highly similar content that exists elsewhere
Poor site structure that fails to signal page importance
Technical issues like improper use of noindex tags or robots.txt directives
Slow loading times that create a poor user experience
Low authority or trust signals for your website
Content that's too similar to existing indexed pages
How to Diagnose the Problem
Start your diagnosis in Google Search Console:
Navigate to the Coverage report under the Index section
Look for the 'Crawled - Currently Not Indexed' status
Examine the affected URLs to identify patterns
Use the URL Inspection Tool to check individual pages
Supplementary tools like Screaming Frog SEO Spider, Ahrefs Site Audit, or SEMrush can provide deeper insights into potential issues with these pages.
Content Quality Improvements
Content quality is often the primary factor in indexing decisions. To improve your chances of getting indexed:
Expand thin content with more comprehensive information
Add unique insights, data, or perspectives not found elsewhere
Improve readability with better formatting and structure
Include relevant multimedia elements like images, videos, or infographics
Update outdated information with current facts and statistics
Ensure proper grammar and spelling throughout
Google prioritizes content that delivers genuine value to users, so focus on creating resources that solve problems or answer questions better than existing pages.
Site Structure and Internal Linking
Your site's architecture and internal linking strategy play crucial roles in indexing:
Create a logical hierarchy that shows page relationships
Implement breadcrumb navigation for better user experience
Link to important pages from your homepage and main navigation
Use descriptive anchor text that includes relevant keywords
Create contextual links within content where appropriate
Consider implementing a 'related posts' section for blog content
A well-structured site with strategic internal linking helps Google understand which pages are most important and worthy of indexing.
Technical SEO Fixes
Technical issues can prevent indexing even when content quality is high:
Improve page speed by optimizing images and reducing script bloat
Check your robots.txt file to ensure it's not blocking important pages
Review meta robots tags to confirm they allow indexing
Optimize your XML sitemap to prioritize important content
Fix any mobile usability issues that could affect crawling
Ensure proper use of canonical tags to address duplicate content (see the sample markup after this list)
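To make the meta robots and canonical checks concrete, here is a minimal sketch of the head markup an indexable page should carry; the URL is a placeholder, not a real page from any site discussed here:
<head>
  <!-- Either omit the robots meta tag or explicitly allow indexing -->
  <meta name="robots" content="index, follow">
  <!-- Self-referencing canonical so Google treats this URL as the primary version -->
  <link rel="canonical" href="https://example.com/important-page/">
</head>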
Advanced Strategies
For persistent indexing issues, consider these advanced approaches:
Content pruning: Remove or consolidate low-quality pages that might be diluting your site's overall quality
Strategic use of the 'Request Indexing' feature for high-priority pages
Improve page authority through quality backlinks from relevant sites
Enhance user engagement metrics like time on page and bounce rate
Consider updating publication dates for refreshed content (but only when making substantial changes)
Expected Timeline for Results
After implementing fixes, be patient. Google typically takes anywhere from a few days to several weeks to recrawl and reassess pages. Higher-authority sites often see faster results than newer websites.
Monitor progress in Google Search Console by regularly checking the Coverage report and using the URL Inspection tool to track changes in status.
Common Mistakes to Avoid
Don't fall into these common traps when addressing indexing issues:
Relying solely on automated indexing tools without fixing underlying problems
Over-optimizing content with keyword stuffing or unnatural phrasing
Using black hat techniques that could result in penalties
Making isolated changes without a comprehensive approach
Expecting immediate results without allowing sufficient time for Google to reassess
How RapidURLIndexer Can Help
While addressing the root causes is essential, tools like RapidURLIndexer can complement your efforts by expediting the indexing process. Our service helps signal to Google that your content deserves reconsideration through a specialized submission network.
Unlike manual methods, RapidURLIndexer offers faster indexing, bulk submission capabilities, and detailed reporting on indexing status. With our pay-as-you-go model and credit refunds for unindexed URLs, it's a cost-effective solution for websites of all sizes.
Remember that while no tool can guarantee 100% indexing (as the final decision always rests with Google), combining on-site improvements with strategic use of indexing tools gives you the best chance of success.
https://rapidurlindexer.com/fixing-crawled-currently-not-indexed-errors/
0 notes
Text
E‑commerce Faceted Navigation SEO: How to Avoid Infinite Crawl Traps
E‑commerce stores thrive on faceted navigation—offering filters like size, color, brand, and price to enhance shopping experiences. But these filters can unintentionally generate thousands of URL combinations, leading to technical SEO nightmares like duplicate content, crawl budget wastage, and diluted link equity.
To scale profitably, brands need to treat faceted navigation as both a user-benefit and a crawl-risk. And for multi-location or rapidly growing stores, partnering with the right experts is essential—something only the Best SEO agencies in Mumbai are equipped to handle effectively.
In this guide, we'll explore how to structure faceted navigation to avoid crawl traps while still reaping the user engagement benefits.
1. The Facet Trap: How Infinite URLs Kill SEO
Creating infinite URLs isn’t futuristic—it’s happening now:
Every combination of color, size, brand, price creates a unique URL.
Google and other bots see each as a distinct page, but the content often isn’t unique.
That results in duplicate content, crawl waste, and index bloat.
A Reddit SEO expert illustrates:
“If your URL scheme allows, disallow them in robots.txt to save crawl budget… Googlebot tries to recognize the pattern of useless faceted pages and reduces the crawl ratio.”
Common symptoms:
Thousands of near-duplicate URLs flooding your index
Rapidly depleting crawl allowance without indexing value
Lost authority and reduced rankings across category pages
Partnering with Best SEO agencies in Mumbai ensures these issues are identified and remediated before they spiral.
2. Best Practices to Avoid Crawl Traps
a. Limit Facets That Generate URLs
Only allow crawling/indexing for high-value facets with both search volume and unique content impact. For low-demand or redundant combinations:
Remove <a> links in the UI
Use JavaScript or AJAX to update listings without changing the URL
Example: color filters might need indexing, but size+brand combinations usually don’t
b. Use standard URL structures & parameter conventions
Stick to ?key=value&... formats—not directory-like paths or session IDs—so search engines can parse them correctly.
Keep facet order consistent:
?category=shoes&color=red
Avoid duplicates caused by reordered parameters; a short sketch of how to enforce a fixed order follows below
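Here is a minimal sketch of enforcing a fixed facet order on the client side, assuming a JavaScript-driven filter UI; the parameter names and paths are illustrative placeholders, not taken from any specific platform:

// Build facet URLs with a fixed parameter order so ?color=red&size=10
// and ?size=10&color=red can never both exist as separate URLs.
const FACET_ORDER = ["category", "color", "size", "brand", "price"];

function buildFacetUrl(basePath, selectedFacets) {
  const params = new URLSearchParams();
  for (const key of FACET_ORDER) {
    if (selectedFacets[key]) {
      params.set(key, selectedFacets[key]);
    }
  }
  const query = params.toString();
  return query ? `${basePath}?${query}` : basePath;
}

// Both calls produce the same canonical-ordered URL: /collections/shoes/?color=red&size=10
buildFacetUrl("/collections/shoes/", { size: "10", color: "red" });
buildFacetUrl("/collections/shoes/", { color: "red", size: "10" });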
c. Self-Canonicalize or Use Canonical to Base Category
Every faceted URL should include a <link rel="canonical" href="base-category-url">. This tells search engines, “This is just a variant of the category page.”
Pragmatically, the base category page then consolidates all of the ranking signals.
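As a quick illustration (the URLs below are placeholders), the head of a filtered URL such as /collections/shoes/?color=red would carry:
<link rel="canonical" href="https://example.com/collections/shoes/">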
d. Noindex Low-Value Faceted URLs
When you can’t block crawling entirely, add <meta name="robots" content="noindex, follow"> to prevent indexing while preserving link flow.
However, don’t block these pages with robots.txt if they carry a canonical tag—Google must crawl them to see the canonical directive.
e. Block Crawling via robots.txt
For low-value parameter combinations, you may add Disallow patterns in robots.txt (e.g., Disallow: *?price=*).
But beware—blocking crawling without noindex means these pages may still appear in SERPs without content ("Indexed, though blocked by robots.txt").
f. Manage Pagination Properly
Use <link rel="prev"/><link rel="next"/> or canonicalize to a “view-all” version to consolidate authority on long category listings.
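For reference, the markup described looks like the lines below (placeholder URLs); note that Google has said it no longer uses rel="prev"/"next" as an indexing signal, so treat it mainly as a consolidation and usability hint:
<!-- On page 2 of a paginated category -->
<link rel="prev" href="https://example.com/collections/shoes/?page=1">
<link rel="next" href="https://example.com/collections/shoes/?page=3">
<!-- Or, if a view-all page exists, canonicalize every paginated URL to it -->
<link rel="canonical" href="https://example.com/collections/shoes/view-all/">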
3. Real‑Life Implementation: Tiered Facet Control
A recommended strategy involves a tiered approach, heavily supported by experts like the Best SEO agencies in Mumbai. Facet type and its SEO treatment:
High-value (color, price): crawled, canonicalized, indexed
Mid-value (size, material): noindex, follow via <meta>
Low-value (session, tracking): block via robots.txt
User-generated search facets: AJAX only (no href URL)
Redditors confirm this layered tactic:
“Separating brand, category, sale type and other facets is a good start… 1–3 filters allowed makes sense most of the time.”
4. Crawl Budget & Index Bloat Management
Crawl Budget Management
Implement smart disallows + canonical/noindex rules to cut unnecessary bot visits.
Tech-savvy firms like the Best SEO agencies in Mumbai also use log file analysis to fine-tune robots.txt and link structures.
Combatting Index Bloat
Proactively monitor Google Search Console—when you see thousands of URLs marked as “Indexed, though not in sitemap,” action is overdue.
Large retailers often reduce indexed pages by 80% through canonical + noindex + robots approaches, leading to cleaner site health and better performance.
5. When to Allow Some Facet Landing Pages
Some facets are high-traffic, high-converting—like “black leather jackets” or “NIKE running shoes.”
Options:
Create clean, dedicated, indexable SEO landing pages for them
Optimize titles, H1s, unique intros
Noindex rest through canonical or meta robots
SEO check:
“Index only facet values that point to indexable pages… focus on user intent.”
This selective indexing strategy balances discoverability and crawl control.
6. AJAX-based Faceted Navigation
Make low-value filter buttons trigger AJAX updates rather than unique URLs. Use hashes or data-href attributes instead of <a> tags so the URL doesn’t change, preserving crawl budget.
This safeguards user experience without polluting your index with filter variants.
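A minimal sketch of that pattern, assuming filter buttons carry a data attribute instead of a crawlable href and the server can return just the listing fragment; the selector, attribute, and endpoint names are illustrative:

// Filter buttons expose data-facet-href instead of <a href>, so crawlers see
// no new URL and the browser URL never changes when a filter is applied.
document.querySelectorAll("button[data-facet-href]").forEach((button) => {
  button.addEventListener("click", async () => {
    const response = await fetch(button.dataset.facetHref, {
      headers: { "X-Requested-With": "XMLHttpRequest" },
    });
    const fragment = await response.text();
    // Swap only the product grid; the page stays on the base category URL.
    document.querySelector("#product-grid").innerHTML = fragment;
  });
});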
7. Monitoring & Auditing Continuously
This isn’t set-and-forget. Faceted navigation changes over time. Here's how experts maintain control:
Monthly log-audits: Identify spike in bot requests to filter pages
GSC checkups: Ensure unwanted facet URLs are disappearing
Crawl simulations: Use tools like Screaming Frog in AJAX mode
Link audits: Remove internal links to noindexed URLs
Sitemap hygiene: Only include canonical category URLs, not all variants
The Best SEO agencies in Mumbai include this in strategic ongoing SEO plans for large e-commerce clients.
8. Business and UX Benefits
Getting faceted navigation right isn’t just SEO—it boosts conversions:
Reduced 404s, cleaner UX
Better site speed and crawl efficiency
Increased rankings for core categories
Easier indexation of valuable keyword pages
Lowered risk of content duplication penalties
By bringing user intent and SEO discipline together, brands can unlock both search visibility and sales performance.
Final Thoughts: Facets Done Right
Faceted navigation—done poorly—creates more problems than it solves. Done smart, it enhances UX and SEO. But implementing this at scale for growing e‑commerce brands requires:
Technical precision
Continuous oversight
Business-aligned content strategy
That’s why serious retailers collaborate with the Best SEO agencies in Mumbai. They have the expertise, tools, and frameworks to:
Segment facets by value
Implement canonical/noindex/robots rules
Execute AJAX-first filter UX
Monitor, audit, refine over time
If you’re scaling an e‑commerce store and want to avoid crawling yourself into an SEO hole, now is the time to partner with an expert.
0 notes
Text
13 Technical SEO Tips You Need to Implement Right Now
Let’s face it: SEO is no longer just about keywords and backlinks. These days, if your site isn’t technically sound, Google won’t even give you a second glance. Whether you're running a blog, eCommerce store, or local business website, technical SEO tips are your backstage passes to visibility, speed, and SERP success.
This isn’t just another generic checklist. We’re diving deep from the technical SEO basics to advanced technical SEO strategies. So buckle up, grab your coffee, and get ready to seriously level up your website.
1. Start with a Crawl: See What Google Sees
Before you tweak anything, see what Google sees. Use tools like Ahrefs Technical SEO Guide, Screaming Frog, or Sitebulb to run a site crawl. These will point out:
Broken links
Redirect chains
Missing metadata
Duplicate content
Crawl depth issues
It’s like doing a health check-up before hitting the gym, no use lifting weights with a sprained ankle, right?
2. Fix Crawl Errors and Broken Links Immediately
Crawl errors = blocked search bots = bad news. Head to Google Search Console’s Coverage report and fix:
404 pages
Server errors (500s)
Soft 404s
Redirect loops
Remember: broken links are like potholes on your website’s highway. They stop traffic and damage trust.
3. Optimize Your Site Speed Like It’s 1999
Okay, maybe not that fast, but you get the idea.
Speed isn't just an experience thing, it's a ranking factor. Here’s how to trim the fat:
Compress images (use WebP or AVIF formats)
Enable lazy loading
Use a CDN
Minify CSS, JS, and HTML
Avoid heavy themes or bloated plugins
This is one of the powerful technical SEO guides that Google loves. Faster site = better UX = higher rankings.
4. Make It Mobile-First or Go Home
Google’s all in on mobile-first indexing. If your site looks like a disaster on a smartphone, you’re practically invisible. Ensure:
Responsive design
Readable fonts
Tap-friendly buttons
Zero horizontal scroll
Test it on Google’s Mobile-Friendly Test. Because if mobile users bounce, so does your ranking.
5. Get Your Site Structure Spot-On
Think of your website like a library. If books (pages) aren’t organized, nobody finds what they need. Make sure:
Homepage links to key category pages
Categories link to subpages or blogs
Every page is reachable in 3 clicks max
This clean structure helps search bots crawl everything efficiently, which is a technical SEO basics win.
6. Secure Your Site with HTTPS
Still running HTTP? Yikes. Not only is it a trust-killer, but it’s also a ranking issue. Google confirmed HTTPS is a ranking signal.
Install an SSL certificate, redirect HTTP to HTTPS, and make sure there are no mixed content warnings. You’d be surprised how often folks overlook this simple technical SEO tip.
7. Use Schema Markup for Rich Snippets
Want star ratings, FAQ drops, or breadcrumbs in Google results? Use schema!
Product schema for eCommerce
Article schema for blogs
LocalBusiness schema for service providers
FAQ & How-To schemas for extra real estate in SERPs
Implement via JSON-LD (Google’s favorite) or use plugins like Rank Math or Schema Pro.
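As an example of the JSON-LD approach, here is a minimal FAQ schema block; the question and answer text are placeholders you would replace with your own content:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is technical SEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Technical SEO covers the crawling, indexing, speed, and site-structure work that helps search engines access and understand your pages."
    }
  }]
}
</script>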
8. Eliminate Duplicate Content
Duplicate content confuses search engines. Use tools like Siteliner, Copyscape, or Ahrefs to catch offenders. Then:
Set canonical tags
Use 301 redirects where needed
Consolidate thin content pages
This is especially critical for advanced technical SEO consulting, where multiple domain versions or CMS quirks cause duplicate chaos.
9. Improve Your Internal Linking Game
Internal links spread link equity, guide crawlers, and keep users browsing longer. Nail it by:
Linking from old to new content (and vice versa)
Using descriptive anchor text
Keeping links relevant
Think of internal links as signboards inside your digital shop. They tell people (and bots) where to go next.
10. Don’t Sleep on XML Sitemaps & Robots.txt
Your XML sitemap is a roadmap for bots. Your robots.txt file tells them what to ignore.
Submit sitemap in Google Search Console
Include only indexable pages
Use robots.txt wisely (don’t accidentally block JS or CSS)
Sounds geeky? Maybe. But this combo is one of the advanced technical SEO factors that separates rookies from pros.
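For reference, a minimal robots.txt for a typical WordPress site might look like the sketch below; the sitemap URL is a placeholder and the rules are an assumption about a common setup, not a universal recommendation:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap_index.xml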
11. Check Indexing Status Like a Hawk
Just because a page exists doesn’t mean Google sees it. Go to Google Search Console > Pages > “Why pages aren’t indexed” and investigate.
Watch for:
Noindex tags
Canonicalization conflicts
Blocked by robots.txt
Monitoring indexing status regularly is essential, especially when offering technical SEO services for local businesses that depend on full visibility.
12. Avoid Orphan Pages Like the Plague
Pages with no internal links = orphaned. Bots can’t reach them easily, which means no indexing, no traffic.
Find and fix these by:
Linking them from relevant blogs or service pages
Updating your navigation or sitemap
This is an often-missed on page SEO technique that can bring old pages back to life.
13. Upgrade to Core Web Vitals (Not Just PageSpeed)
It’s not just about speed anymore; Google wants smooth sailing. Enter Core Web Vitals:
LCP (Largest Contentful Paint): Measures loading
FID (First Input Delay): Measures interactivity
CLS (Cumulative Layout Shift): Measures stability
Use PageSpeed Insights or Lighthouse to test and fix. It's a must-have if you’re targeting powerful on-page SEO services results.
14. Partner with a Pro (Like Elysian Digital Services)
If your brain’s spinning from all these technical SEO tips, hey, you're not alone. Most business owners don’t have time to deep-dive into audits, schemas, redirects, and robots.txt files.
That’s where pros come in. If you’re looking for advanced technical SEO consulting or even a full stack of on page SEO techniques, Elysian Digital Services is a solid bet. Whether you're a startup or a local biz trying to crack the Google code, we've helped tons of businesses get found, fast.
Final Thoughts
There you have it: 13 technical SEO tips (and a bonus one!) that are too important to ignore. From speeding up your site to fixing crawl issues, each one plays a crucial role in helping your pages rank, convert, and grow.
The web is crowded, the competition’s fierce, and Google isn’t getting any easier to impress. But with the right tools, a bit of tech savvy, and the right support (yep, like Elysian Digital Services), you can absolutely win this game.
#advanced technical seo#advanced technical seo factors#powerful technical seo guides#advanced technical seo consulting#technical seo services for local businesses#ahrefs technical seo guide#powerful on-page seo services
0 notes
Link
What Are the Best Online Text Editors for Blog Writing?
I have a blogging platform, so why would I want to use an online text editor for blog writing? If you’re a blogger, there are some BIG advantages to using online blog editors. Learn why.
Image Source: Trix-Editor
The whole deal with bloggers is that they have to do a lot of blog writing quickly and efficiently.
Professional bloggers do this for their own sites as well as other sites. When it comes to quick and efficient writing for blogs, you’ll learn quickly that formatting content ‘the right way’ for online publishing isn’t always that easy.
While publishing content online, you don’t have time to check HTML and proper code structure. Using online text editors allows you to write, edit and publish quickly without having to worry about errors.
A text editor lets bloggers edit a blog post within seconds.
Online text editors help you submit content quickly and without mistakes.
Oh, and many online blog editors even suggest sentence structure and check for duplicate content while you write!
We’re going to cover the Pros of using an online text editor as well as what the Best online blog editors are and where to find them.
Pros of the Online Text Editor in Blog Writing
Online text editors provide the following benefits to bloggers when writing blogs:
Direct Content Uploading
The editor allows you to paste a document directly into it, without opening dedicated editing software installed on your system.
Writing Options
It provides bloggers and other content creators with some of the best writing options to quickly edit a document.
You can use online editors to make quick and desired changes in the content.
For example, you can create clean HTML output with the easiest WYSIWYG editing possible. If you’ve already started writing rich-text content, all you have to do is paste it into the online blog editor, make your adjustments, extract the HTML output from view-source mode and reuse it anywhere on the web!
Faster to Edit
Online editors make it easy to change content quickly, with features that work across platforms.
Saves Time
Online editors save your precious time and allow you to quickly edit the blog post while proofing content.
This also helps you to submit the blog or any other report within the given time. Besides, manually coding HTML can be extremely time-consuming and is rife with the potential for errors.
Provide Listings
The text editors also provide you with the best bullet and numbered listing options.
You can add these listings anywhere in the content just within a single click.
Multiple Formats
One of the best benefits of this online editor tool is that it also allows you to change the format of the document.
You can select the desired format by using the online blog editor.
For example, many bloggers begin writing posts using another editor (such as Google Docs or Microsoft Word). The problem with moving your writing from an outside editor over to WordPress or another online blog platform is that a lot can get lost in translation.
Have you ever noticed some extra code when switching between the visual editor and the text editor after copying and pasting text from outside of the WordPress environment?
Unnecessary tags can become tedious to remove manually. If you opt not to remove these extra code characters, it can actually hurt your ability to rank well in relevant search engine queries because of the “code bloat.”
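As a hypothetical example, a paragraph pasted from Word can arrive wrapped in markup like this:
<p class="MsoNormal"><span style="font-family:Calibri;mso-fareast-language:EN-US">Your sentence here.</span></p>
when all the page really needs is the clean version an online editor will give you:
<p>Your sentence here.</p>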
Other Media
You can quickly add media files, tables, and other offered features by a text editor.
Save the File
Here you have the option to directly copy the content from the editor or can easily save it in the system within a single click.
As a blogger, you don’t want to waste your time checking technical errors — you want an online blog editor that will get the job done.
That’s exactly why we’re going to discuss the best online text editors that can help you edit content quickly. There are a lot of options out there, but here are our favorite online blog editors:
5 Best Online Text Editors
1. Trix-Editor
Trix is a text editor that is widely used to edit blogs, articles, assignments, and other documents with the best features.
It provides attractively formatted text in a web application so you can quickly edit reports and blogs.
Simply paste the text into the editor and start editing.
Features:
File Uploading: This editor allows you to upload files directly from the system.
Bold & Italic: Bloggers and writers can bold text and also switch it to an italic font.
Insert a Link: You can easily insert a link into the anchor text by using this text editor.
Headings: It also allows you to make headings within a single click.
Listings: This editor has the feature to create bullet and numbered listings.
2. Prepostseo
The online text editor by Prepostseo allows you to edit the text to make a quick content submission.
This plain text editor provides excellent features to edit blogs or any kind of document.
Directly paste the text into the editor to get started.
Features:
Emoticons & Special Characters: This editor allows you to add emoticons and special characters into the plain text.
Listings: It provides numbered and bullet listing features to edit blogs.
Insert Links & Date: This online word pad allows users to insert a date and add links to anchor text.
Tables: This editor also lets you add tables to the plain text.
Find & Replace: You can quickly find and replace the specific words in the text by using this editor.
File Saving: It also allows you to save the file in PDF or DOC form.
Plagiarism & Grammar Check: You can easily check the text for plagiarism and grammar issues using Prepostseo’s other online tools.
3. Adresults
It is also one of the best text editors that can be used online to edit blogs and other documents.
It is widely used to edit plain text, which helps you submit content on time.
Simply paste the document into the editor or write it there directly.
Features:
Empty Lines: The editor removes all the empty lines in the text.
Multiple Spaces: It also removes multiple spaces in the text document.
Replace Letters: This editor is widely used to convert between lowercase and uppercase letters.
Capitalization: By using this editor, you can capitalize the first letter of each word within a single click.
Safe & Free: The editor is 100% secure and safe to use.
4. Onlinehtmleditor
The online HTML editor provides some excellent features to edit the text in the editor.
This editor can add basic text formatting to the document.
Directly paste or write the text into the online editor.
Features:
Styling and Formatting: This feature allows you to write the text in a proper format, with solid styling options as well.
Tables: The editor also provides an excellent feature to add tables in the text.
Inserting Images: You can easily insert images and other media in the text by using this editor.
Embed Media: The editor allows you to embed media resources with clean output in the text.
Accessibility: You can easily inspect the accessibility level of the content by using this editor.
Font and Size: The editor provides some features to change the font and size of the text.
5. Wordhtml
This online editor comes with some unique features to edit the plain text.
This editor comes with some of the best built-in code-cleaning features.
Simply paste the content into the editor and start editing the text.
Features:
Formatting: The editor provides excellent features to change the format of the text to look impressive.
Add Listings: The online editor allows you to add a bullet and numbered listings anywhere in the text.
Emoticons and Tables: The editor allows you to add new emoticons and tables in the document.
Insert Links: You can easily insert a link in an anchor text with the help of this editor.
Print: You can quickly print the text written in the editor within a single click.
Wrapping Up
Online text editors are widely used by professional bloggers to quickly edit blog posts and other content.
All the pros of the online text editor in blog writing have been mentioned above.
These are free online text editor tools that will improve your blog writing — don’t wait!
Give them a try and revolutionize the way you write blog content starting today.
0 notes
Text
45.99% Earnings Increase in 5 Months for a Digital Infoproduct [SEO Case Study]

You’re about to get the strategy behind one of the most challenging SEO campaigns my SEO agency has ever run.
Why was it so challenging? 3 reasons:
First, the niche is massively competitive: A make-money-online infoproduct in the financial niche. Nuff said.
Second, we only had 5 months to pull this off.
Third, just like any other client, they were extremely hungry for results and demanded quality work.
In the case study below, you’re going to learn the technical playbook, the onsite content strategy, and the link building techniques we carried out to get this 45.99% revenue growth win for this infoproduct business.
The Case Study
Our client takes advantage of the wide reach of the interwebs to teach his students how to earn money trading online. We’re talking currencies, forex, stock markets, crypto, etc.
The business’ revenue is generated solely through the sale of digital download products – in this case, trading guides in an ebook format and video trading courses.
When the owner of this profitable business (which already built some authority in the niche) approached The Search Initiative (TSI) about helping to grow their organic reach and find new students, we were excited to take on the challenge in one of the most competitive spaces there is.
There was also a catch – the campaign was planned for only 5 months, which sounded really scary in this case.
To accomplish this, the game plan was to focus hard on a quick-win strategy, while setting the stage for long term gains post-campaign.
Our strategists were certain that the value we could provide would have a considerable impact on his business’ bottom line.
How? Because…
By focusing on increasing organic traffic, we could improve sales, while allowing the client to pull back on ad spend.
Over the course of the campaign, our technically-focused SEO strategies were able to grow organic traffic by 23.46%.
But what did the best job for the client’s business was the 45.99% increase in the number of conversions comparing 1st vs last month of the campaign. Sales went up from just over 2,100 a month to 3,095 – this really bumped their monetization.
And we did it in time.
These gains were achieved within only 5 months of the client signing with TSI and our team starting the campaign.
Here’s how we did it…
The SEO Playbook for Infoproduct Websites
Phase 1: A Comprehensive Technical Audit
I’ve said this in every TSI case study we’ve published so far… and I simply cannot emphasize enough:
A comprehensive technical audit is the most crucial part of any SEO campaign.
So before you begin any campaign, always start with a full technical audit.
Starting with…
Page Speed
First, our technical SEO strategists started at the bottom of the client’s tech stack… and you should too.
This starts with you digging into the web server’s configuration, and running a series of tests to measure the site’s speed.
This enables you to ensure that the performance of the web server itself wasn’t causing a penalty or disadvantage on either desktop or mobile connections.
So, what tests do we run?
PageSpeed Insights (PSI) – this should be everyone’s go-to tool and shouldn’t need an explanation.
In cases where transfer speeds or latency are too high, we advise you (and our clients) to consider migrating to faster servers, upgrading to better hosting or better yet, re-platforming to a CDN.
Luckily, most of the time, you can achieve most of the gains through WPRocket optimization, as was the case with this case study.
Your Golden WPRocket Settings
Cache → Enable caching for mobile devices
This option should always be on. It ensures that your mobile users are also served a cached version of your site.
Cache → Cache Lifespan
Set it depending on how often you update your site, but we find a sweet spot at around 2-7 days.
File Optimization → Basic Settings
Be careful with the first one – it may break things!
File Optimization → CSS Files
Again, this section is quite tricky and it may break things. My guys switch them on one-by-one and test if the site works fine after enabling each option.
Under Fallback critical CSS you should paste your Critical Path CSS which you can generate using CriticalCSS site.
File Optimization → Javascript
This section is the most likely to break things, so take extreme care enabling these options!!
Depending on your theme, you might be able to defer Javascript with the below:
Note that we had to use a Safe Mode for jQuery as, without this, our theme stopped working.
After playing with Javascript options, make sure you test your site thoroughly, including all contact forms, sliders, checkout, and user-related functionalities.
Media → LazyLoad
Preload → Preload
Preload → Prefetch DNS Requests
The URLs here hugely depend on your theme. Here, you should paste the domains of the external resources that your site is using.
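For example, if your theme pulls fonts and analytics from third-party hosts, the entries might look like the list below (these are common examples, not taken from the client’s site); WPRocket then outputs them as dns-prefetch hints in the page head:

//fonts.googleapis.com
//fonts.gstatic.com
//www.google-analytics.com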
Also, when you’re using Cloudflare – make sure to enable the Cloudflare Add-on in WPRocket.
Speaking of Cloudflare – the final push for our site’s performance we managed to get by using Cloudflare as the CDN provider (the client sells products worldwide).
GTMetrix
If you don’t want to use additional plugins (which I highly recommend), below is some .htaccess code I got from our resident genius and Director of SEO, Rad Paluszak – it’ll do the basic stuff like:
GZip compression
Deflate compression
Expires headers
Some cache control
So without any WordPress optimization plugins, this code, added at the top of your .htaccess file, will slightly improve your PageSpeed Insights results:
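The exact snippet isn’t reproduced in this copy of the post, but a minimal sketch covering the items above (compression, expires headers, basic cache control) would look something like this; adjust the MIME types and lifetimes to your own site before using it:

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/webp "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>
<IfModule mod_headers.c>
  <FilesMatch "\.(css|js|jpe?g|png|webp)$">
    Header set Cache-Control "public, max-age=604800"
  </FilesMatch>
</IfModule>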
Internal Redirects
You know how it goes – Google says that redirects don’t lose any link juice, but the PageRank formula and tests suggest otherwise (there’s a scientific test run on 41 million .it websites that shows PageRank’s damping factor may vary).
Whichever it is, let’s take all necessary precautions in case there is a damping factor and redirects drop a % of their link juice.
Besides, not using internal redirects is just good housekeeping. Period.
As we investigated the configuration of the server, we discovered some misapplied internal redirects, which were very easily fixed but would have a considerable effect on SEO performance – a quick win.
You can test them with a simple tool httpstatus.io and see results for individual URLs:
But that would be the long way around, right? So your best bet is to run a Sitebulb crawl and head over to the Redirects section of the crawl and look at Internal Redirected URLs:
There you will find a list of all internally redirected URLs that you should update to point at the final address in the redirect chain.
You might need to re-run the crawl multiple times to find all of them. Be relentless!
Google Index Management
Everyone knows that Google crawls and indexes websites. This is the bare foundation of how the search engine works.
It visits sites, crawling from one link to the next. It does this repetitively to keep the index up-to-date, as well as incrementally, discovering new sites, content, and information.
Over time, crawling your site, Google sees its changes, learns structure and gets to deeper and deeper parts of it.
Google stores in its index everything it finds worth keeping: everything considered useful enough for users and for Google itself.
However, sometimes it gets to the pages that you’d not want it to keep indexed. For example, pages that accidentally create issues like duplicate or thin content, stuff kept only for logged-in visitors, etc.
Google does its best to distinguish what it should and shouldn’t index, but it may sometimes get it wrong.
Now, this is where SEOs should come into play. We want to serve Google all the content on a silver platter, so it doesn’t need to algorithmically decide what to index.
We clean up what’s already indexed, but was not supposed to be. We also prevent pages from being indexed, as well as making sure that important pages are within reach of the crawlers.
I don’t see many sites that get this one right.
Why?
Most probably because it’s an ongoing job and site owners and SEOs just forget to perform it every month or so.
On the other hand, it’s also not so easy to identify index bloat.
With this campaign, to ensure that Google’s indexation of the site was optimal, we looked at these:
Site: Search
Google Search Console
In our case, we found 3 main areas that needed attention:
Indexed internal search
If you’re on a WordPress site – you have to pay attention to this one.
Most of WordPress websites offer a built-in search engine. And this search engine is usually using the same pattern: ?s={query}.
Bear in mind that ?s= is the default one for WordPress, but if your theme allows you to set this up yourself, you might end up having something else instead of the “s” param.
To check if this is also your problem, use this site: search operator
site:domain.com inurl:s=
If it comes back with any results, it means that your internal search pages are being indexed, you’re wasting Google’s crawl budget, and you want to block them.
For our client, we suggested implementing noindex tags.
If your SEO plugin doesn’t have the option to noindex search results (I know that Rankmath does, but can’t remember if Yoast offers it as I’ve been off Yoast for a long time now), you might alternatively add the following line to your robots.txt:
Disallow: *?s=*
Duplicate homepage
This is another fairly common issue in WordPress if you’re using a static page as your homepage.
You see, the CMS may generate the pagination on your homepage, even if you don’t really have it paginated.
Why does this happen? Well, usually when you have a section where you list some of your newest posts. Or (thank you WordPress!) when you used to have your homepage set up as “Latest Posts” and Google managed to index them.
This creates URLs like these:
domain.com/page/12/
domain.com/page/2/
domain.com/page/7/
domain.com/page/{number}/
The problem is caused because Google sees different content on these pagination pages – of course, the articles on page 2, 3, x are different, so the paginated list changes.
If you don’t have enough other, non-listed content on your homepage to convince Google that these pages are similar enough to obey the canonical – you have a problem.
In this case, even if you have the correct canonical tags in place, but Google finds these pages to not be identical, it might choose to ignore the canonicals. And you end up having all this stuff in the index.
It’s worth checking whether you have similar pages indexed – and you should definitely pay attention:
To find these, run another site: search:
site:domain.com/page
To solve this for our client, we set up the 301 redirects so all of these pagination pages were pointing back to the homepage and we also removed them from XML sitemap:
(If you’re wondering, this screenshot is from Rank Math, which is a great free Yoast alternative, but you can also use Redirection plugin for WordPress.)
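If you would rather skip a plugin, a single mod_alias rule in .htaccess can do the same job; this is a sketch and assumes the stray pagination lives under /page/{number}/ off the homepage:

RedirectMatch 301 ^/page/[0-9]+/?$ /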
Please note that if your homepage is set up as a blog page (see below screenshot), this is most likely NOT a problem!
Other unwanted indexed pages
In our case, we also found other pages that were indexed but shouldn’t be:
Old forum pages
Old template pages
Blog tags
Media pages (thanks again, Yoast…)
Each of them might be different in your case, so you might want to consult an agency or professional SEO.
For this client, we removed the pages and used a 410 Gone HTTP header to remove them from the index faster.
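On Apache, one way to serve that 410 is a rewrite rule with the G (Gone) flag; the paths below are placeholders for whatever sections you are retiring:

RewriteEngine On
RewriteRule ^old-forum/ - [G,L]
RewriteRule ^old-templates/ - [G,L]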
Protip: Site: search queries you need to know
site:domain.com
This one is your foundational search query and allows you to go through the entirety of what Google has indexed under your domain.
I like to run a search like this and switch to 100 results per page, by adding a num=100 parameter on Google:
https://www.google.com/search?q=site:domain.com&num=100
Then, I just click through the SERPs and inspect what’s there.
Things that are the most common issues are:
Query strings
Login/Cart/Checkout
Pagination
Tags
Anything that surprises you 🙂
Note that it doesn’t work for big sites as Google will only show you a sample of URLs.
site:domain.com/{folder}
This is just an extension of the standard site: search and allows you to find everything in a folder.
For example, on a Shopify site, you can list all category pages by running this search:
site:domain.com/collections/
Moving on…
site:domain.com inurl:{part-of-the-URL}
I love this one. It allows you to list all pages that share a common part of the URL.
For example, let’s say you want to find all pages that have “guide” in the URL:
site:domain.com inurl:guide
Voila!
site:domain.com -inurl:{part-of-the-URL}
Did you notice the little minus sign here “-inurl”? This one allows you to list all URLs that do not contain a certain string in the URL.
Let’s say you want to list all pages that do not contain “blog” in the URL.
Here’s how you’d do it:
site:domain.com -inurl:blog
The combination: site:domain.com -inurl:{part-of-the-URL} inurl:{another-URL-pattern}
Get ready for a really serious tool now! This one is a combination of “inurl” and “-inurl” (not in URL) operators and allows you to list pages that have a specific string in the URL, while don’t have another part in it.
For example, if you want to list all pages that are guides on your site, but not the buying guides – here’s how:
site:domain.com inurl:guide -inurl:buying
Make sure not to use spaces between the “:” and the string!
Also, be careful with the queries where operators cancel each other out – Google won’t return any results for these!
There are plenty of other combinations and search operators, so if any of the above is new to you, you should definitely read more about them here:
Get Your Sitemap in Order
In this case study, the team ensured that the XML sitemap was configured correctly so that Google’s crawlers and indexation engine were able to fully understand the site’s structure and present it to their users accurately.
Run a crawl with Screaming Frog to ensure that no URLs that are noindexed or missing are added to the sitemap.
First, switch to “List Mode” in Screaming Frog. Then select Upload → Download XML Sitemap. Type in the URL and let it crawl.
There should be no other pages than only the ones returning a 200 status code.
If there are, just remove them from the sitemap!
Soft 404 Errors
Soft 404 is a URL that displays a page telling the user that the page does not exist, but it returns a 200 OK (Success) instead of a 4xx HTTP status code.
This can definitely be a big problem for your site because, when it occurs, Google starts deciding on its own which pages it thinks are 404s served with an incorrect (200) HTTP response code and, let’s be honest, the algorithm often gets it wrong!
So, you’re facing an issue that good pages, which you’d rather keep in the index, are being thrown out because Google thinks they’re 404s.
Why does it think so?
Most probably there are similarities between the genuinely good and Soft 404 pages.
Unfortunately, these similarities are not obvious and, when analyzed algorithmically, they can be mistakenly taken as anything common and silly: footer, sidebar, banner ads, or whatnot.
So let me give you an example – this is what my 404 page looks like:
It returns a correct 404 status code, so everything is fine:
Now, if it was returning a 200 code – it would’ve been a soft 404. Google would figure it out and it could all be fine.
But there’s a but.
Let’s say I had a page with just a little bit of content – like this made up one:
As you can see – it has a different content, but everything else is the same: header, sidebar, footer.
When you approach it as Google does – algorithmically, it might end up being very similar to the soft 404 page example above. In fact, Google may class it the same. And this is what you don’t want. You don’t want Google to decide for you.
My rule is – don’t allow Google to make any decisions for you!
Our job, as SEOs, is to make it ridiculously easy for Google to crawl and index your site. So don’t leave anything you don’t have to for the algorithm to figure out.
In this case, we had all 404 pages set up to 301 redirect back to the homepage. It’s a common practice, but occasionally a dangerous one.
Why would it be dangerous?
Because we’ve seen cases where Google would simply treat all 301 redirects to the homepage as Soft 404s. And when it does that, it might also start treating your homepage as a Soft 404 page, because all these Soft 404s are defaulting to your homepage, right?
And what does that mean?
No homepage.
And when there’s no homepage? No rankings!
But if you’re really unlucky, Google will think that if your homepage got removed (Soft 404’d and thrown out of index), your entire domain should go out the window! And it’ll go on and de-index everything.
Sounds harsh!? It does, but we’ve seen extreme cases like this, so it’s better to be safe than sorry.
So why were we comfortable doing it?
At TSI our approach to this is simple: 404s are a natural thing on the Internet!
Therefore, we only 301 redirect the important pages, where applicable. By important, I mean pages that have external or internal links and some history.
We leave 404s where it’s a legitimate page of content that was simply removed from the site and has no value anyway.
I know what you’re thinking: What about Excluded or Errors under Index Coverage in Google Search Console?
To put it simply, in this case – Nothing! Because 404s are normal. Google will report them in GSC, but that’s fine.
Fixing Facebook Pixel Issues
Most infoproduct businesses leverage Facebook retargeting, so if you have an infoproduct (or your client does) you need to consider the following issue.
This problem was quite tricky to find a solution to, but our crawls showed that spiders can follow a pixel image:
So as you can see (or not see, because most of it is blurred) above, crawlers were accessing pages like:
domain.com/“https:/www.facebook.com/tr?id={client’s FB ID}&ev=PageView&noscript=1”
The part in quotes shouldn’t be there. As you can imagine, this was the case for every single URL on the site. Not good!
We didn’t really know how this was possible or what caused it, but the plugin generating Facebook Pixel was doing it wrong…
The problem was the backslashes “escaping” single and double quotes in the Javascript code generating the pixel:
We retired the plugin and inserted the pixel code directly in the source code (header.php file).
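For reference, the noscript half of Facebook’s standard pixel snippet, pasted without any escaped quotes, looks like this (YOUR_PIXEL_ID is a placeholder):

<noscript>
  <img height="1" width="1" style="display:none"
    src="https://www.facebook.com/tr?id=YOUR_PIXEL_ID&ev=PageView&noscript=1" />
</noscript>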
Our tech SEO guys keep complaining that there’s a plugin for literally everything in WordPress. Even for the easiest and smallest things.
So maybe next time, when you’re thinking of installing a plugin do us and yourself a favor – think if it’s really needed.
Don’t use plugins where they’re simply overkill and the same can be accomplished faster and smoother with a simple copy-paste.
Heading Structure
This one was quite simple, but also important.
This site did not use any headings other than H2s… None. At all.
I mentioned the importance of semantic headings in another case study, so I’ll just say that the fix here was to simply organize them on every page and use all headings from H1 to H5.
Simple, but important.
HTTP pages and YMYL
Non-secure webpages are quickly going out of style.
The Electronic Frontier Foundation is aggressively promoting the move toward using the secure HTTPS protocol across the entire web.
Google is also supporting the idea through flagging of non-HTTPS content as “not secure” in Chrome.
This client did indeed have the correct SSL implementation in place, but there was a big problem.
The old HTTP pages were not redirected to their HTTPS versions.
Being in the YMYL (Your Money or Your Life) niche, you shouldn’t leave any loose ends.
I mean, you shouldn’t leave any loose ends at all, but when you’re in the YMYL niche specifically, you simply must not.
You could fix it with the use of Really Simple SSL plugin, which enables the HTTP→HTTPS redirects out of the box.
But as I said above, you don’t need WP plugins for every small action.
Here’s the .htaccess code we installed to have a proper HTTP to HTTPS and www to non-www redirect in place:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^yourdomain.com [NC,OR]
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^(.*)$ https://yourdomain.com/$1 [R=301,L]
Be careful, though! Make sure you have access to your FTP server before you click “Save” in the configuration.
In some cases, it might break things and to re-gain access to your site you’ll have to manually amend the contents of your .htaccess file.
All in all, this is what you wanna see if your preferred canonical domain is https://domain.com/:
Content Taxonomy & Internal Linking
In order to improve the internal linking of our client’s numerous blog posts, we recommended a re-organization of the site’s content categorization and taxonomy.
To start with, we suggested creating more categories in WordPress and adding them to the main menu.
This sounds simple, but prior to joining TSI, this site had only 1 big category (about 300 posts): Blog.
Moreover, to save the crawl budget, someone, unfortunately, noindexed all category and pagination pages.
When guys at TSI saw it, they were like this:
We created 16 new categories in WordPress – all corresponding with the type of course offered and the area of expertise it was teaching. We ensured that there were no more than 20 posts under each category.
See what I mean here? We’re all about them quick wins.
We also removed the noindex tags from the category pages.
The final trick was to add short, topically relevant text on top of each category page (above the posts), so Google would see them as more than just a list of articles. It meant more love from the G!
Kind of like what I’ve done here for my “SEO News” category page.
Through this, we created topical clusters (silos) under each category.
To create better topical relevance, you can also ensure that the articles would in most cases internally link only within the silo (article to article and article to its root category page).
This helps to better organize the content for the user’s benefit and also made it easier for crawlers to discover the pages.
The process built more internal links to the content, indicating its importance within the site’s information architecture.
A related posts content section was also added beneath each blog post, which amplified the same gains while also helping users find more of our client’s relevant educational content and improving user metrics and click-through.
Stack those gains!
Phase 2: Creating a Winning Content Strategy
Once the server, site, taxonomy, and Google index were in advantageous positions, it was time to think about creating targeted content that both served the target demographic and would have the potential to rank for their most essential search terms.
Using Ahrefs, our technical team looked at competitor content for potential target keywords and studied metrics that indicated how difficult it would be to rank against them.
Trust me, once you have a list of keywords or topics you’re considering to go after, Ahrefs’ Keyword Explorer becomes very helpful:
And to find great keyword suggestions, from the Keyword Explorer you just need to go to Newly Discovered and you’re seeing all examples of new keywords related to your chosen one:
Another worthwhile option is Questions:
From there you can just pick keywords that appeal to you, taking into consideration their difficulty vs search volume.
But if you really want to up your content plan game, you have to check out the Content Explorer on Ahrefs:
It’s an extremely powerful tool, so I suggest you watch the below video to really take full advantage of it:
For our client, we estimated average monthly search volumes and considered the probable user intent behind each keyword vertical.
And speaking about the user intent – trust me, this is already a huge factor, but it will get even bigger in 2020.
If you would like to learn more about user intent, its types, and discovery, we had a great workshop during the Chiang Mai SEO conference this year. Here’s a video of one of TSI’s resident geniuses, Rad Paluszak, who held the presentation:
This content research process will give you the information needed to construct a strategy that focuses on creating content to serve users searching for the highest opportunity keywords.
Content Optimization & Keyword Cannibalization
The next task was to look at the existing pieces of content in 2 ways:
Run a keyword cannibalization audit.
I’ve talked about keyword cannibalization quite a bit in the past.
In fact, I think this is one of the most common, content-related on-site issues of this year.
It’s a plague on the industry, I tell you!
At TSI, we’re predicting that keyword cannibalization issues will become less of a problem with Google becoming smarter in natural language understanding (hint: Neural Matching and BERT), but it will probably remain as a hot topic and a big problem for years to come.
So in this case, we faced quite a serious case of keyword cannibalization. Out of around 300 articles indexed, 50 of them were double- or triple-ranking (cannibalizing) around positions 20-40. This was a strong signal that it needed to be solved.
This is just one of the keywords:
Since we are not experts in market trading and financial instruments, we had to ask the client for advice. We combined the list of all cannibalizing URLs and keywords, and supplied it to our client for a review.
When we received feedback regarding which pages can be merged, deleted or updated, the work began: We moved and combined the content.
And this is what you want to see:
In the meantime, we purged the pages that were not required and optimized (or deoptimized) the ones that were not preferable but had to stay within the site.
In doing so, we were able to increase the value of the existing content and get the most traffic possible from the client’s previous investment in the content.
Phase 3: An Authority Link Building Strategy
An essential part of any high-impact SEO campaign is the building of high-quality backlinks.
When this client joined us, we did the standard thing we do on every campaign, which you should do as well.
Perform a full audit on your backlink profile and you’ll likely find a mix of lower quality backlinks and some higher-quality inbound links too.
Immediately, some of the lowest quality backlinks were disavowed. You can read more about our approach to the backlink audit here.
Also, do an audit of your anchor text distribution.
In our case, we were slightly concerned about the anchor text distribution having too many exact match, partial match and compound (related to keywords, but not necessarily including the keywords directly – examples of these would be questions, sentence-long anchors, etc) anchors.
It looked like this:
And should look more like this:
With this in mind, during the first month of the campaign, we threw around 25 pillow links (we really propped up the client’s social media accounts, created a few About Author pages on the publications he’s been contributing to and posted a few Medium articles) with branded anchors into the mix.
In the next 2 months, we also took a slightly safer approach to anchor texts in our outreach. This was all to balance things out.
Our outreach team began the process of reaching out to relevant sites who were pleased to place our client’s backlinks on their domains.
In the first month, the team negotiated and built 9 strong (DR 50+) outreach backlinks to the site and were able to negotiate 5-8 high-authority links each ongoing month.
Here are some link stats of our outreach job:
This quickly grew the domain’s authority, thus driving up rankings and improving discoverability on the web.
Here’s the link growth over the course of the campaign:
Results
Through completing our campaign using the techniques described in this case study, we were able to achieve considerable tangible growth for this client.
After 5 months of TSI working on the site, the client had enjoyed a 28% growth in the top 10 position rankings in Google, up from 1,713 positions to 2,188.
Stable growth is also shown in SEMRush:
This significantly increased the education business’ organic reach within just 5 months and translated into a 23.46% increase in sessions, an 18.46% increase in users and a 45.99% increase in earnings when comparing the 1st and 5th months of the campaign.
Comparing each month against the same month of the previous year, the site reached, with our help, a 252.78% increase in organic traffic and a 263.24% increase in goal completions.
The results of this campaign speak for themselves.
After 5 months of working with TSI, our client had seen a nice return on investment, and our proven strategies will continue to bear fruit as the business continues to develop in the long-term.
Conclusion
When a client puts their trust in you, you need to look at it from their perspective.
They’re trading their hard-earned cash for your work on their business, their baby.
With this particular case study, the pressure was on with a 5-month timeline in one of the hardest niches imaginable.
But by focusing on quick wins and optimizing what the client already had, results like this are achievable.
Let’s recap… remember to focus on:
Technical SEO first – Without a strong boat, you’re not going to sail anywhere. Don’t skip anything in the tech-SEO section above.
Content optimization and strategy – This is the area you want to bank on in the coming years.
Quality Backlinks – Focused on authority and balanced anchor distribution.
As long as you’re doing the right things: fixing everything, providing value and making the site easy for Google to understand – you’re going to win.
And if you need help, you know where to find us: The Search Initiative.
Get a Free Website Consultation from The Search Initiative.
0 notes
Text
Version 380
youtube
windows
zip
exe
macOS
app
linux
tar.gz
source
tar.gz
I had a couple of difficult weeks, with illness and other IRL problems getting in the way, but I got some hopefully pretty neat work done. A new fast video and audio player is available for advanced users to test, and there are a bunch of fixes and ui improvements as well.
A user just notified me that the Duplicates page has crazy layout! I apologise. Nothing is broken, it is just sizing wrong, and part of a longer fight I am having to convert my old wx layout code to Qt. I know exactly what happened here, and I will have it fixed for 381. If you discover more UI like this (the system predicate panels have a bit of it), please let me know.
mpv
This is just for advanced users this week. It is a basic prototype that is not ready for real use. I will improve a bit before turning it on for everyone, hopefully next week.
MPV is a good free video player. One of the core benefits of moving hydrus to Qt was the potential of being able to neatly embed it into the media viewer. I am happy to report that these past two weeks have been successful, and it looks like hydrus is finally getting hardware-accelerated video playback and full native audio support!
If you are using the windows build in advanced mode, please check out the big ugly filetypes list under options->media. Under all the video and audio filetypes, you can now set 'show using mpv' as a show action. Please forgive how large this list has grown--before I turn mpv on for everyone, I am going to rework this list so you have the option to just say 'for all video: xxxxxx', rather than always having every single filetype.
Once you have it set, it should load video almost exactly the same as my native renderer, with the scanbar below for navigation. Audio files will show as a (for-now fixed-size) black box or album art. Be warned: there are no volume/mute controls yet! I have set it at 70% volume for now. MPV has an optional and configurable on-screen display, but for technical reasons I had to stick with my custom controls.
I know what features I need to add, so the feedback I am most interested in this week is in where mpv breaks. If it doesn't work for you, or if it fails to load a file, or it has bad performance, or it sizes wrong, or you get a crash after closing the media viewer, anything like that, please let me know. Performance in my tests has been excellent--rendering 4k 60fps no problem as long as you have any sort of branded GPU. I have had some audio jitter on some unusually encoded videos and some crashes (which I think I have fixed).
One annoying thing is mpv requires a hefty 57MB dll, making the already bloated hydrus download even bigger. One user suggested perhaps putting out two releases--a 'core', and an 'update', which wouldn't have the big stuff. I am still thinking about it--while I like the overall idea, the hydrus build is held together with duct tape, and having a small non-functional release will increase complexity and inevitably cause ugly syncing difficulties whenever the big stuff does need to be updated.
Users running from source will need libmpv and python-mpv to enable this. I haven't pinned down the most convenient and functional way to do this for Linux and macOS yet, so we'll have to figure this out over the coming weeks and get it into the help. Any knowledgeable feedback here would be great.
For the near future of mpv, I would like to: figure out the technical side of including mpv in the Linux and macOS builds; write volume/mute UI and shortcuts; add global volume/mute management; test mpv with gifs and apngs; explore custom shaders and configuration files; and generally polish the whole thing.
quality of life
The tag right-click menu has had a significant overhaul. More commonly used actions are moved up, less common down. Tags with siblings now provide all siblings under the 'copy' menu! System search predicates have better copy and search-action support (like 'exclude' system:archive, which will actually add 'system:inbox').
Right-clicking on the thumbnail grid's 'whitespace' no longer does a 'deselect all' action, so you can now easily right-click in an empty area and go remove->selected!
Across the program, all filetypes are now referred to with human language. Instead of 'image/jpg', it is now just 'jpeg'. Instead of 'application/x-7z-compressed', it is now just '7z'. Please let me know where it looks bad, and if you would like to edit these labels.
The 'new page chooser' dialog now dismisses itself if you click off it! So, if you accidentally middle-click in some page tab whitespace and it annoyingly pops up, just left-click, and it will go away. Also, this dialog now catches enter/return key, and will hit the 'first' button (if you imagine the nine possible button locations as a number pad, starting at top-left). Hitting enter twice will typically open a new 'my files' search page.
A user surprised me recently by asking where to find the 'pause video' shortcut--I hadn't realised there wasn't one! Now this is fixed: 'pause_media' and 'pause_play_media' are added to the 'media_viewer' shortcut set, and 'pause_play_slideshow' is added to 'media_viewer_browser' set. Previously, the slideshow pause/play was hardcoded to space bar--this is now gone, so if you used this, you'll want to set it up yourself.
the rest
I have added simple support for RealVideo, RealAudio, and TrueAudio files. These are less common formats, so if you come across some variable bitrate rmvb file or something else that doesn't work, please send it in and I will have a look. MPV couldn't handle a couple of my test .rm files' audio tracks, but the .ttas seemed great.
Ever since the CloudFlare-8chan break, I have been telling myself to figure out some new downloader objects for the bunkers everyone scattered to. This job kept being put off, long enough that 8kun came back(!), but I managed to fit in some time this morning and got 8kun and vch.moe support added for today's release. 8kun was obviously easy to convert from the old 8chan parser, and vch's engine provides an API very similar to 4chan's, so I was able to duplicate and modify the existing parser for it. I had a proper look at julay, smuglo.li, and endchan, but these use their own API formats (or no API at all), so I will have to write new parsers, which will take longer. I really would like to get them done for 381--please remind me if it looks like this job has slipped away again.
full list
basic mpv support is added. it comes with the windows build this week, and is a prototype meant for initial testing. the library is optional. users who run from source will want 'python-mpv' added via pip and libmpv available on their PATH, more details in running_from_source help
took a qt-mpv example kindly provided by a user, updated it to work with the hydrus environment, and integrated it into the client as a new choosable view type for audio/video filetypes under options->media for advanced users
reworked how the 'start paused' and 'start with embed button' media viewer options work under options->media. these are now separate checkboxes, not combined with the underlying 'show action'. existing embed/paused show actions should be converted automatically to the correct new values
unfortunately, due to some python/qt/libmpv wrapper mouse interaction issues, mpv's 'on screen controller' overlay is not available
for now, left click pauses/plays the mpv window, just like the native video window.
previous/next frame shortcuts should work for the mpv window when playing video
no volume/mute controls yet, these will come in the coming weeks, including global mute settings
updated media show and sizing code to account for mpv widgets
reworked my animation scanbar to talk to mpv, and for my mpv window to talk back to it
improved the animation scanbar to be more flexible when frame position and num_frames are not available, both in displaying info and calculating scanbar seek clicks
mpv api version added to help->about
.
new downloader objects:
thanks to a user, updated the 'pixiv artist page' url class to a new object that covers more situations. the defunct 'pixiv artist gallery page' url class is removed
added 8kun and vch.moe download support. I got started on julay, smug, and endchan, but they were a little more tricky and I couldn't finish them in time--fingers crossed, next week
.
menu quality of life:
a right-click on thumbnail whitespace will now not send a 'deselect all' event! feel free to right-click in empty space to do an easy remove->selected
reworked the tag menu layout to move less frequently used actions down:
- moved the discard/require/permit/exclude search predicate actions down
- moved 'open in a new page' below select and copy
- moved copy above select
and some misc menu layout improvement on this menu
fixed some labelling with the discard/require/permit/exclude verbs on negated tags
right-clicking on system search predicates now shows the 'copy' menu correctly
system predicates that offer easy inverse versions (like inbox/archive) should now offer the 'exclude' verb
when right-clicking on a single tag that has siblings, its siblings and those siblings' subtags will now be listed in the copy menu!
copying 'all' tags from a list menu, with or without counts, will now always copy them in the list order
across the program, all menu 'labels' (menu text items that do not have a submenu and have no associated action, like 'imported 3 years 7 months ago') will now copy their text to the clipboard. let's see how it goes
.
other ui quality of life:
across the program's UI, filetypes are now referred to with simpler terms rather than technical mimetypes. instead of 'image/jpg', it is now typically just 'jpeg'
the 'remove selected' buttons on the gallery and watcher pages are now smaller trash icon buttons
the new page chooser will now auto-dismiss if it loses focus--so if you accidentally launch it with a middle-/double-click somewhere, just click again and it'll go away
hitting enter or return on the new page chooser now picks the 'first' button, scanning from the top-left. hitting enter twice now typically opens a new 'my files' search page
added pause_media and pause_play_media shortcuts to the media_viewer shortcut set. new clients will start with space keypress performing pause_play_media
added pause_play_slideshow shortcut to the media_viewer_browser shortcut set. this shortcut is no longer hardcoded by space keypress
the six default shortcut sets now have a small description text on their edit panels
the options->media edit panels now enable/disable widgets better based on current media/preview action
added a checkbox to _options->gui pages_ to set whether middle-clicking a tag in the media viewer or a child tag manager to open a tag search page will switch to the main gui. default is false
mr bones now reports total files, total filesize, and average filesize
mr bones now loads your fate asynchronously
.
the rest:
added tentative and simple realvideo (.rm) and realaudio (.ra) support--seems to work ok, but some weirder variable bit rate formats may not, and I have collapsed the various different extensions just down to .rm or .ra
added trueaudio (.tta) audio support
fixed a bug from the recent search optimisations where a bare inbox search would not cross-reference with the file domain (so some trash could show up in a simple inbox/'my files' query)
fixed an issue with searching for known urls by url class where the class was for a third-or-higher-level domain and was not set to match subdomains (this hit 4chan file urls for a few users)
fixed the issue with 'open externally' button panel not clearing their backgrounds properly
fixed some of the new unusual stretchy layouts in the options dialog
removed overhead from subscriptions' 'separate' operation, which should stop super CPU hang when trying to split a subscription with hundreds of thousands of urls
fixed an issue where the advanced file delete dialog would not show the simple 'permanent delete' option when launched from the media viewer's right-click menu
fixed the select/remove actions for local/remote
fixed 'set_media_focus' from manage tags to correctly activate the underlying media viewer as well as set focus
stopped the 'file lookup script' status control from resizing so wide when it fetches a url
fixed a rare mouse wheel event handling bug in the media viewer
reduced db overhead of the 'loading x/y' results generation routine. this _may_ help some users who had very slow media result loading
cleaned up how the server reports a bootup-action error such as 'cannot shut down server since it is not running'--this is now a simple statement to console, not a full error with trace
improved client shutdown when a system session shutdown call arrives at the same time as a user shutdown request--the core shutdown routine should now only occur once
fixed an issue with thumbnail presentation on collections that have their contents deleted during the thumbnail generation call
misc wx->Qt layout conversion improvements
updated the github readme to reflect some new links and so on
misc code cleanup
next week
I pushed it a bit hard this week, so I am going to try to take it easy. Hopefully I will be able to get some volume/mute controls going for mpv and then launch it for all users, including on the Linux and macOS builds. Assuming there are no catastrophic problems this week, I will be overwriting user settings to move all 'show as normal' video and audio options to 'show with mpv'. Then these new imageboard parsers and some more Qt cleanup work.
0 notes
Text
How Progressive Web Apps could rule mobile ecommerce in 2018
Championed by Google since 2015, Progressive Web Apps (PWA) are set to explode as an option for brands to increase traffic and conversions on mobile.
According to a recent study, a 100-millisecond delay can cost a site 7% of its conversion rate, and Google frequently reiterates that when it comes to capturing consumer attention, every second counts.
Looking after your customers with great user experience is also rewarded with a ranking boost, with Google announcing that from July this year page speed will officially be a mobile ranking signal.
Discussions about adopting PWAs are still in their infancy among some sectors and brands, perhaps because the technology is still relatively young and PWA capability issues persist.
For instance, only now is progress being made, with Apple adapting its web browser engine to handle key PWA features. As these barriers begin to dissipate, the reluctance from web developers will also diminish – paving the way for PWAs to become a key platform for retailers to build fast, mobile-optimized websites in 2018.
What are Progressive Web Apps?
In the context of the mobile ecosystem, a Progressive Web App is best described as a website that offers a near-identical experience to a native app, but within a standard browser.
The technology applies to a range of devices, from mobile right up to desktop. No app store is required to access a PWA site.
The significant advantage of PWAs over native mobile apps is that they do not require different technologies or an in-depth understanding of how to adapt your code to run on iOS (Apple) or Android phones. Developers can build a PWA with just HTML5, CSS and JavaScript.
They offer pop-up notifications directly to the user, home screen icons for easy access, and full-screen operation. All this is bundled in with speedier page delivery.
What’s so cool about PWA?
As we’ve established, mobile speed is very important for Google, so much so that the search engine has offered any fast site a rank boost both for desktop and now mobile SERPs.
Its ethos is to deliver fast, integrated, reliable and engaging experiences on mobile devices. Progressive web apps can cater for all these factors in many ways, while their framework and caching make PWA code light and efficiently stored. Push notifications and ‘Add to Home Screen’ functionality also keep users coming back.
Here are the key areas where Progressive Web Apps can give your e-commerce site a significant advantage.
Speed
The rate at which a site responds is key, and PWAs make speedy responses possible largely thanks to the Service Worker. Its caching ability allows the browser to store the repetitive, templated elements of the layout and inject them when required, speeding up download times significantly and saving the all-important seconds that make any e-commerce site more user-friendly.
Keep page files under 1MB and make sure that the first interaction with the page happens in less than five seconds. Enable HTTP/2 to allow multiple assets to be retrieved in parallel, and enable compression to speed the site up further.
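To make the caching idea concrete, here is a minimal Service Worker sketch that pre-caches a template ‘shell’ at install time and serves it cache-first afterwards. The cache name and asset paths are illustrative assumptions, not taken from any particular site.

```javascript
// sw.js – minimal cache-first Service Worker sketch (file names are placeholders)
const CACHE_NAME = 'shell-v1';                                  // assumed cache name
const SHELL_ASSETS = ['/', '/css/app.css', '/js/app.js'];       // assumed template assets

// Pre-cache the app shell when the Service Worker is installed
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(SHELL_ASSETS))
  );
});

// Answer requests from the cache first, falling back to the network
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```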
Responsiveness
User experience is at the heart of Progressive Web Apps, so they should be built with a responsive layout. With responsive sites, the layout can accommodate screens of any size.
So no matter what device you are looking at, the experience will be tailored correctly. The PWA app manifest file allows developers to control full screen view, enhancing the experience further.
Offline functionality
The Service Worker and the Cache API are key for Progressive Web Apps. The Service Worker is essentially a program that works predominantly behind the scenes, retrieving and storing site components in the browser cache.
For example, when you are travelling on underground transport where offline functionality is most needed, the wifi connection could be working intermittently or the networks could be poor.
PWAs can bridge the gap between offline and online by at least offering basic navigation in the interim.
The only issue with Service Workers is that not all current browsers support them; Safari and Edge are still developing versions that work with this technology. Browsers are slowly catching up, and compatibility can be tracked using the caniuse site.
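Because support is uneven, registration is best feature-detected so that unsupported browsers simply fall back to normal page loads. A minimal sketch, assuming the worker script lives at /sw.js:

```javascript
// Register the Service Worker only where the browser supports it
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')                    // assumed script path
    .then(reg => console.log('Service Worker registered with scope:', reg.scope))
    .catch(err => console.warn('Service Worker registration failed:', err));
}
```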
User retention
We all know that native apps serve notifications directly to users’ screens. The great thing is that Progressive Web Apps offer the same inbuilt push notification functionality, alerting users to current site developments.
This is invaluable for e-commerce sites: you can alert users when new products come into stock, or notify them of sales and discounts. The ‘Add to Home Screen’ function allows users to place a shortcut icon on their home screens for easier access. Again, the app manifest file is used to declare these capabilities.
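As a rough sketch of the two retention hooks mentioned above – notification permission and the install prompt – something like the following runs in the page itself (the event and API names are standard; everything else here is illustrative):

```javascript
// Ask the user for notification permission before showing any push-style alerts
async function enableNotifications() {
  const permission = await Notification.requestPermission();
  if (permission === 'granted') {
    // A real push setup would now subscribe via the Service Worker's pushManager,
    // using your VAPID application server key (omitted in this sketch).
    new Notification('Thanks! We will let you know when new stock arrives.');
  }
}

// Capture the browser's 'Add to Home Screen' prompt so it can be shown at a better moment
let deferredInstallPrompt = null;
window.addEventListener('beforeinstallprompt', event => {
  event.preventDefault();             // suppress the default mini-infobar
  deferredInstallPrompt = event;      // later: deferredInstallPrompt.prompt()
});
```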
Search engine ready
Unlike native apps, there’s no need to indicate your deep linking URL by editing the source code of a PWA – instead, individual PWA URLs can be indexed by search engines. This takes away a lot of the headache that normally surrounds optimizing your apps for search.
Who’s using PWA right now?
Early PWA adoption has come from a few high-profile publishers, namely Forbes, the Washington Post and the Financial Times.
AliExpress is an ecommerce site, very similar to eBay, that allows users to order mainly from Chinese companies. A typical market version of the PWA site is found here.
The AliExpress PWA site captured some impressive metrics upon release, including a 104% increase in new users and an 82% usage increase among iOS users. Notably, despite Service Workers not yet being supported by Safari, twice as many pages were visited per user across browsers, and time spent on the site per session increased by 74%.
Ele.me is a food ordering company in China that reduced loading times by an average of 6.35% across all their pages, with time to consistently interactive dropping to 4.93 seconds for a first load on a 3G network.
Finally, George.com (ASDA) recently launched its PWA site. According to Google, this site accelerated average speeds by 3.8x and saw a fantastic 31% increase in conversion in return.
What about AMP?
There are various pros and cons to both technologies. Accelerated Mobile Pages (AMP) are quicker on the first load: measured by time to interactivity, a PWA can load in under ten seconds, while AMP provides an initial load of less than two seconds.
AMP requires validated web components to function, but PWA only needs HTML5, JS and CSS to get running.
AMP’s advantage over PWA is that they don’t require Service Worker in the initial load while benefiting from cached pages from the get-go. It is possible to combine PWA with AMP to improve the user experience further.
One such component is ‘<amp-install-service-worker>’. Landing page content can be served as an AMP page while the Service Worker is triggered by the use of an iframe, accessing and caching PWA related assets behind the scenes.
This means that AMP pages can switch quickly to PWA pages in a seamless manner without having to wait too long for the subsequent PWA pages to load. Imagine a scenario where a user enters the site quickly via an AMP page, downloading content at super-quick speed, and then switches to PWA pages with all their benefits!
In summary, it’s possible to offer a high-speed mobile experience using both AMP and PWA. The AMP landing page is effectively a quick entry point page and used to initiate service worker caching for following pages that are in PWA format.
To ensure that the user experience remains consistent, the look and feel of both AMP and PWA pages should be identical to create a seamless, uninterrupted transfer between technologies.
Are there any issues for ecommerce?
PWAs are predominantly single-page apps, which, in turn, could be problematic for Google and other search engines crawling your pages. There’s a good reason single-page apps are used for PWAs: they are lightweight and allow service workers to inject content into a handy template-based format.
There could be a need to pre-render pages server side in order to get crawled. The overall recommendation when developing PWAs is to do so iteratively, slowly building newer features and testing at different build stages. This will maintain a site development strategy of progressively enhancing for inclusive user experience.
There is an assortment of Google tools that can highlight issues as you build. Make use of Google’s Fetch and Render tool, and of the Lighthouse audit that is now part of Chrome’s developer Inspect tool, which can analyse a PWA and give it a score out of 100.
Lighthouse can provide readings on time to first meaningful paint, for instance, and show potential blockers in the loading of the DOM structure. The tool is powerful for reducing overheads and code bloat.
Fetch and Render will indicate any issues where Google would struggle to see elements on the page. Google makes it easier to develop PWA sites with this build checklist.
Do not block JavaScript and CSS from search engine crawl activity, since crawlers need to understand how the page looks in a browser. Provide clean URLs without the hash symbol, and add canonical tags pointing PWA pages back to the relevant desktop page wherever there could be duplication.
Should e-retailers be worried about security and privacy? Service Workers only run under secure conditions, so your site must be served over HTTPS.
A specific API, the Payment Request API, is supposed to make it easier for a mobile site to collect credit card and payment information securely. The API facilitates autofill and one-tap checkouts. It has recently been developed by Google and is still subject to change.
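As a rough illustration of that flow – the payment method, labels and amounts below are placeholders, and the API surface has kept changing since this was written:

```javascript
// Minimal Payment Request sketch – method data and totals are placeholders
async function payWithPaymentRequest() {
  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }],                        // assumed payment method
    { total: { label: 'Order total', amount: { currency: 'USD', value: '19.99' } } }
  );
  const response = await request.show();   // opens the browser's native payment sheet
  // ...send response.details to your payment processor for authorisation here...
  await response.complete('success');      // closes the sheet with a success state
}
```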
What the future holds for PWAs
Google is actively pushing PWA and AMP in 2018. The search engine is speaking with many partners at the moment, and one such development is PWA integration into e-commerce platforms, such as Magento.
It’s realistic to say that the technology currently meets the expectations of an ever more demanding mobile audience. Mobile users want to see content quickly, and at times offline, and PWAs offer opportunities to engage and re-engage users.
In a recent study, US users spent 5 hours a day on mobile devices, but mobile shopping apps accounted for only 5% of that time. Mobile apps are said to have been falling out of favour for some time, with users favouring the mobile web or becoming disheartened by retailers’ mobile app experiences.
With the greater impact of PWA, one can imagine how convenient it is for a user not having to switch between app and browser when conducting product research. It will certainly be interesting to see what the future of the Progressive Web App looks like!
from Search Engine Watch https://searchenginewatch.com/2018/03/06/how-progressive-web-apps-could-rule-mobile-ecommerce-in-2018/
0 notes
Photo
Five CSS Performance Tools to Speed up Your Website
This article is part of a series created in partnership with SiteGround. Thank you for supporting the partners who make SitePoint possible.
In this article, I'm going to list five CSS performance tools that will help you put your website on a CSS diet for a fast loading experience.
Why Is Page Speed so Important?
A fast loading website is crucial to the success of your website. There are research findings to support the view that users get quickly impatient if they have to wait even a few seconds for web content to appear.
When this happens on your website, their experience degrades noticeably with the consequence that your website gets fewer visits and fewer conversions, which ends up hurting your business.
There are a number of factors that affect web performance, e.g., the number of HTTP requests, bloated code, heavy media files, etc. How you write CSS and how your stylesheet gets loaded in the browser can have significant impact on a few of those precious milliseconds which make the difference between visitors staying on your website and clicking their way toward your competition.
Let's see how the tools below can help to fix your CSS for top-notch web performance.
TestMyCSS
TestMyCSS is a free online optimization tool with tons of features. It checks for redundancy, validation errors, unused CSS, and best practices.
You can start using it right away: just type the URL of a website's CSS file, and TestMyCSS spits out all the items that need improvement. Not only that, you can also find helpful hints on:
How to simplify complex selectors
Duplicated CSS properties and selectors you need to get rid of
The number of !important declarations present in the code
Unnecessary class specificity
Unnecessary IE fixes
Prefixed CSS properties which don't need vendor prefixes any more
Class or ID rules qualified with tag names, e.g., a.primary-link
Inappropriate use of universal selectors.
Stylelint
Stylelint is an ambitious CSS linter that works with PostCSS, an open source tool to write state of the art CSS. A linter is a program that goes through your code and catches any potential errors.
Stylelint can
Flag errors like typos, invalid hex colors, duplicate selectors, etc.
Enforce best practices
Enforce coding style conventions like consistent spacing in each CSS rule, etc.
Provide support for new cutting edge CSS syntax
Automatically fix some minor warnings using stylefmt, a tool to format CSS rules ...
... and more.
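As a quick illustration, here is a minimal sketch of running Stylelint from a Node script; the glob and the two rules are just examples, and option names may differ slightly between versions:

```javascript
// lint.js – run Stylelint programmatically over a project's stylesheets
const stylelint = require('stylelint');

stylelint.lint({
  files: 'src/**/*.css',                    // assumed glob for your stylesheets
  config: {
    rules: {
      'color-no-invalid-hex': true,         // flag typo'd hex colors
      'no-duplicate-selectors': true        // flag duplicated selectors
    }
  }
}).then(result => {
  // result.errored is true if any rule reported an error
  console.log(result.errored ? 'CSS problems found' : 'CSS looks clean');
});
```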
Continue reading: Five CSS Performance Tools to Speed up Your Website
by Maria Antonietta Perna via SitePoint http://ift.tt/2n0c3wG
0 notes
Text
Version 341
youtube
windows
zip
exe
os x
app
linux
tar.gz
source
tar.gz
I had a great week after being ill at the end of the previous week. There is a bit more than usual in this release because of the extra time. The Client API has some new commands, and there are a variety of fixes and stability improvements.
client api
The Client API now does some fun stuff. If you would like to play with one user's browser extension, which recognises hydrus-compatible URLs--including whether you have the page's file already--and lets you send them straight to hydrus for downloading, please check it out here:
https://gitgud.io/prkc/hydrus-companion
And the Client API help itself is here:
https://hydrusnetwork.github.io/hydrus/help/client_api.html
There is a little info at the top telling you how to turn the API on.
This release brings some more API commands. You can now edit a file's tags, edit a file's URLs, and the 'add_url' command now lets you attach tags for any files coming from the given URL and also lets you determine which page (by page name) the URL will go to (for instance if you have multiple watcher pages working on different check timings for different boards).
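For anyone curious what this looks like from a script or a browser extension, here is a rough sketch of an add_url call. The port, header name and the tag-parameter name are assumptions on my part – check the Client API help linked above for the exact, current request format:

```javascript
// Send a URL to hydrus, targeting a named page and attaching tags.
// The default port, header name and tag-parameter name below are assumptions –
// consult the Client API help for the exact field names in your version.
fetch('http://127.0.0.1:45869/add_urls/add_url', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'      // placeholder key
  },
  body: JSON.stringify({
    url: 'https://example.com/some/post/page',                  // placeholder URL
    destination_page_name: 'my watcher page',                   // named page, per the post above
    service_names_to_tags: { 'my tags': ['from the api'] }      // assumed parameter name
  })
})
  .then(resp => resp.json())
  .then(console.log);
```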
the rest
I spent some time this week cleaning things up. Repository update processing uses less memory and now regularly saves its progress on big jobs, reducing transaction bloat, improving error recovery, and giving control back to you much faster when it finishes. Memory use all around should be reduced.
I also replaced several areas of messy UI code, which has resulted in improved stability, particularly for Linux. Some CRITICAL logspam is gone, and I believe I have finally fixed a lingering ui crash on the duplicates processing page (at least for search distance 0--I will still work on >0, which seems to be a db-level problem for linux). Several boot crashes due to incorrect boot-error handling (such as when catching that some client_files directories are missing) are also fixed. I would appreciate your continued feedback here.
Duplicate processing now treats ratings copy/move/merge actions a little differently: before, it would only make a change if the 'destination' file had no rating set--now it will also do so if the destination's rating is lower than the 'source' file's. For instance, merging two files that have rating 2/5 and 3/5 will result in both files having rating 3/5.
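In other words, the new rule amounts to something like this (a sketch of the logic only, not hydrus's actual code):

```javascript
// Sketch of the new merge rule: the destination takes the source's rating
// whenever it has no rating at all or a lower one.
function mergeRating(sourceRating, destRating) {
  if (destRating === null || sourceRating > destRating) {
    return sourceRating;   // e.g. merging 2/5 and 3/5 leaves both files at 3/5
  }
  return destRating;
}
```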
After talking with a user about different ways of storing and presenting thumbnails, I put some work into a thumbnail drawing experiment that only needs one set of thumbs to draw on screen (the current thumbnail system stores two copies for different purposes). If you would like to try it temporarily, please turn it on under help->debug->gui actions->thumbnail experiment mode. Instead of using your 'resized' thumbs, it will only use the 'master'. It makes no changes to your file structure, and you can turn it off at any time. Try loading up some random files in your client and let me know if you get any errors or if any thumbs take an extremely long time to load. I've wanted to drop the two-thumb storage system for some time but never had good numbers to feel good about it, but my tests here are promising--the experiment mode only takes a few hundred µs more per initial thumb load for me, and if I were to switch over to it permanently, I could reduce it even further. Assuming there are no big problems here, I expect to chip away at this problem over the coming weeks and hopefully make a significant simplification in hydrus file storage.
full list
client api:
added /add_tags/add_tags, which does several kinds of tag content updates
added /add_tags/clean_tags, which shows how hydrus will handle potential tags
added /add_urls/associate_url, which allows you to associate urls with files
added 'destination_page_name' to /add_urls/add_url, which will choose which destination watcher/url importer to place the url (or create a new one with that name)
updated client api version to 2
updated client help and unit tests for the above
added a linked contents to the client api help
improved some server error handling, mostly moving 403s to more correct 400s
improved how missing parameter 400 errors are reported from the server vs deeper keyerrors that should be 500
.
the rest:
tag repository update processing now saves progress to disk every million rows or every minute, whichever comes first. this reduces journaling bloat, improves recovery when the process quits unexpectedly, and makes for significantly faster cancel when requested by the user
when processing duplicates and copying/merging/moving ratings, the 'source' file will now also overwrite the 'destination' file's rating if that destination rating is lower (previously, the rating would only go over if the dest had no rating set)
added a new 'thumbnail experiment mode' under help->debug->gui. this will load fullsize thumbs and resize them in memory, please see release post for more details
reduced menubar replacement flicker while, I believe, keeping and strengthening recent menubar indexing stability improvements
the tag autocomplete dropdown will now always embed (instead of floating) in non-Windows
when data seems non-decodable, the fallback encoding format is now that given by chardet, rather than utf-8
improved serialisability of some pending tag data
watchers can now hold and pass on fixed pending tag data
gallery log objects can now hold and pass on fixed pending tag data
file import objects can now hold and action fixed pending tag data
hard drive imports now store their paths-to-tags info in this new format, directly in the file import objects
improved some url-import page drop-target-selection logic
improved error reporting when dropping/api-adding urls
adjusted some url import workflow so big 'already in db' download lists should work a bit faster
attempting to start the program with some external database files but not the main 'client.db/server.db' file will now cause a boot-fail exception with an explanation before any stub db files can be made
tightened up some hydrus service login-capability-testing code that was previously stopping certain error states from recovering promptly, even on a force account refresh, while the service was maxed on bandwidth
fixed a source of linux CRITICAL logspam related to several common dialogs
improved ui stability on boot when file folders are missing (particularly for linux)
improved stability for the various async tasks on the duplicates processing page, particularly for linux. I am not sure I got everything here, but it is definitely better
did some more misc stability improvements, particularly in various boot fail scenarios
completely removed an ancient and janky focus catcher widget from the main gui frame
now various db caching is improved on the python side, removed a sqlite instruction to force temp information to always stay in memory--hot data returns to staying mostly in memory to start and then spools to disk if the transaction gets too large
fixed approx bitrate sorting for malformed video files with explicitly '0' duration
daemon_profile_mode now spams some more info about export folders
fixed an issue that meant client db maintenance was firing its jobs too aggressively, regardless of idle status
updated windows build to cv 4.0
misc refactoring and fixes
next week
We are in the home stretch of the Client API now. I just want to write some simple file searching and it'll be done. I do not expect to be done in one week, but I think I should have something working for 342.
It felt good this week to clear out some long-time bugs doing code cleaning, so I'd like to focus on that as well.
0 notes
Text
Version 263
youtube
windows
zip
exe
os x
app
tar.gz
linux
tar.gz
source
tar.gz
I had a great week. Some stuff is fixed, the network engine does something, and files should load faster!
faster loading
I have put time into making sessions and regular file results load faster and more cleanly.
Also, I have significantly sped up how quickly file results load for users with many tags. After running some profiles on my IRL database that syncs with the PTR, I discovered the biggest source of load-lag was deleted mappings, of which the PTR now has 1.6 million. This often led to a delay of 0.5s or more on each batch of 256 files. I have extended the fast-load tag cache to now cover deleted mappings, so you should notice results loading much faster on PTR clients, particularly when you have recently booted the client.
Please let me know how these work for you and where they now trip up!
memory reduction
I've improved how some caches maintain themselves and added clearing maintenance to some other sources of memory bloat. A busy client with many pages and thumbnails should use less memory overall and better clean itself up after it is done being busy.
network engine integration started
The network engine is a lot of work, but it is coming along very well. It works, remembers cookies and so on, and the bandwidth manager throttles correctly at all timescales. It is a great improvement to the existing mess.
And clients now run the first simple version of it. The simple 'raw url' downloader page is the first page to use it, so if you are interested in this, please feel free to try downloading some urls through it and let me know if you run into any trouble. There is a new 'network job' control that shows current download status and download speed and a stop button to cancel in-progress downloads that, in future, should appear everywhere a download occurs.
In the coming weeks, I will convert more of the downloaders and other network-consumers over to the new engine. I will also write dialogs to manage per-domain bandwidth rules and review existing bandwidth usage. When everything is converted over, I will be in a good position to start the new login and parsing engines.
full list
greatly improved how gui sessions are loaded--now the page tabs are loaded instantly, while the thumbnails are loaded in the background. session loading should be significantly less laggy and buggy
the issue of pages sometimes initially sizing at zero size (which could be caused by minimising the client while a session load was ongoing) should hence be better/fixed completely!
gui sessions will now load their files in the exact order in which they were saved--behaviour that I think was not previously always reliable
more general network code work and polishing
added and improved unit tests for network code
improved how short-time-delta data bandwidth is reported
improved how short-time-delta data bandwidth is tested
wrote a networkjobcontrol to display and control the new network job object
tumblr parser now produces 68.-less urls
tumblr parser now produces https urls
cleaned up tumblr parser a little
url caches will clip existing tumblr urls of the 68.-subdomain and convert to the new raw format, clearing out dupes along the way
url caches will convert existing tumblr and pixiv urls to https, clearing out dupes along the way
the pixiv parser now deals with missing creator/title tags without errors
extended the specific file domain tag cache to also store deleted mappings, resulting in much faster request building for clients with large numbers of deleted mappings
improved some downloader page queue text display timing
added support for more types of mp4 file
improved how some memory maintenance calls work
improved how hydrus datacaches track their recently-used-data fifo list
pages now regularly clear out spare thumbnail canvas bmps
pages now regularly clear out cached thumbnail canvas bmps when they are not the currently viewed page
import caches, when asked for url membership, will now test both the http and https versions of the url
maybe improved how 'open in file browser' works in windows
fixed the 'recount video frames' advanced thumbnail menu entry, which wasn't working with the new ffmpeg wrapper
moved some bloaty master hash data out of client.db and into client.master.db
pubsub profile will no longer profile 'message', as it just makes for spam
reduced some serverside pubsub spam
reduced some significant clientside pubsub spam that I think was smashing the gui event loop at inconvenient moments
improved some client shutdown object sync code
fixed an issue where some duplicate maintenance popups would not clear themselves up properly if interrupted mid-job
cleaned up some http-https conversion and comparison code
fixed some status-setting code that meant thumbnail pages were sometimes setting status after they were replaced and scheduled for deletion
misc improvements
next week
I did not have time for the database migration gui this week, so that's what I would like to focus on after I have probably the 'page of urls' downloader and thread watcher converted to the new network engine.
I also have program blocking/hanging on my mind. The new accurate video parsing is great, but it takes a lot of CPU and means that large webm downloads regularly hang the gui for half a second. Also, the tag autocomplete dropdown is super laggy with certain inputs. I am developing plans to deal with these issues--and other sources of lag or excessive transaction overhead--in the coming weeks.
0 notes