#HTTPRedirect
Configuration Manager Content Distribution Error 0x8007ffff

Configuration Manager Content Distribution Error 0x8007ffff. Opening the distmgr.log file showed "failed to update the IIS auth module" with error 0x8007ffff, along with entries such as:
Did not configure IIS module, GLE – 65535
ConfigureIISModules did not configure IIS Module
Failed to update the IIS auth module error 0x8007ffff
Cannot set the current Drizzle Role status for the DP
Checking the IIS configuration on the remote distribution point also showed no virtual directories under Sites\Default Web Site.
Solution
1. Launch PowerShell as admin and run the following command:
dism.exe /online /norestart /enable-feature /ignorecheck /featurename:"IIS-WebServerRole" /featurename:"IIS-WebServer" /featurename:"IIS-CommonHttpFeatures" /featurename:"IIS-StaticContent" /featurename:"IIS-DefaultDocument" /featurename:"IIS-DirectoryBrowsing" /featurename:"IIS-HttpErrors" /featurename:"IIS-HttpRedirect" /featurename:"IIS-WebServerManagementTools" /featurename:"IIS-IIS6ManagementCompatibility" /featurename:"IIS-Metabase" /featurename:"IIS-WindowsAuthentication" /featurename:"IIS-WMICompatibility" /featurename:"IIS-ISAPIExtensions" /featurename:"IIS-ManagementScriptingTools" /featurename:"MSRDC-Infrastructure" /featurename:"IIS-ManagementService"
Close the PowerShell window once the command has finished.

2. After running the above command, the IIS virtual directories get created.
3. Finally, all the virtual directories are created under Sites\Default Web Site on the remote MECM DP. After this, the content distribution was successful.
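If you want to double-check that the IIS features from step 1 really ended up enabled before retrying the distribution, you can script a quick verification. This is just a minimal sketch (not part of the original fix), assuming Python is available on the site server, that it runs from an elevated prompt, and that dism.exe is on the PATH:

import subprocess

# List Windows optional features via DISM and print the state of the IIS ones.
# Requires an elevated prompt; dism.exe is assumed to be on the PATH.
result = subprocess.run(
    ["dism.exe", "/online", "/get-features", "/format:table"],
    capture_output=True, text=True, check=True
)
for line in result.stdout.splitlines():
    if line.strip().lower().startswith("iis-"):
        print(line.strip())   # e.g. "IIS-WebServerRole | Enabled"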
New Post has been published on Website Design Naples Florida Webmaster
New Post has been published on https://vinbo.com/when-to-use-http-301-and-302-redirects-for-the-best-results/
When to Use HTTP 301 and 302 Redirects for the Best Results
[YouTube video embed]
In today’s video, we’ll learn about HTTP 301 + 302 redirects and when you should use them.
Blog post: https://www.elegantthemes.com/blog/wordpress/when-to-use-http-301-and-302-redirects-for-the-best-results
➡️ Learn more about Divi: https://www.elegantthemes.com/gallery/divi 🔵 Like us on Facebook: https://www.facebook.com/elegantthemes/
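If you ever want to check which of the two codes a given URL actually returns, you can issue a request that does not follow the redirect and inspect the status line and Location header. A minimal Python sketch (the URL is a placeholder, not from the video):

import urllib.request
import urllib.error

# Stop urllib from following redirects so we can see the raw status code.
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None means "do not follow this redirect"

opener = urllib.request.build_opener(NoRedirect)
try:
    resp = opener.open("http://example.com/old-page")  # placeholder URL
    print(resp.status, "no redirect issued")
except urllib.error.HTTPError as e:
    # 301 = moved permanently (ranking signals transfer to the new URL),
    # 302 = found / temporary (the original URL keeps its ranking).
    print(e.code, e.headers.get("Location"))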
#HTTPRedirects #WordPress #ElegantThemes
301 Redirects in E-Commerce
New Post has been published on https://weepay.co/blog/e-ticarette-301-yonlendirme/
301 Redirects in E-Commerce
A 301 redirect is one of the operations you are likely to need frequently in your e-commerce life. In some cases, performing it is extremely important so that your site's SEO does not suffer, and it will benefit you. That is why it is worth learning what a 301 redirect is and how it is done. Let's take a look at the topic.
What Is a 301 Redirect?
Basically, a 301 redirect is the name given to automatically forwarding users to another site when a site on a given domain is no longer in use. Thanks to this, you prevent customers who click on your no-longer-active page from getting a 404 error or landing on an empty page. People who try to visit that page are automatically redirected to the new page you have specified.
What Advantages Does a 301 Redirect Give You?
If you stop using a page on your site and make it inactive, it will remain an empty page and return a 404 error to users. This is something Google does not want, and it harms your SEO. (Click here to learn why SEO matters for e-commerce.)
If you move a page and leave it completely empty, your pages will no longer appear at the top of search results. A 301 redirect keeps your site from being affected by this. In addition, redirected customers will not leave the page right away. Even if they do not find what they were looking for, you will keep more visitors than you would with an empty page.
When Is a 301 Redirect Used?
You can use it to send users to the URL you want when you no longer use a particular URL on your site. For example, say you sell a product and will no longer work with that brand. In that case, instead of shutting down that brand's URL entirely, you can redirect people to other brands you sell. Another case is when your site's link has changed, which can happen because of a typo or a revision; you can apply this method so that the old link does not become a broken link and users reach the new address.
So How Is It Done?
Although it varies by operating system, you can complete a 301 redirect in just a few simple steps. If you are using Linux, you need to add the following directives to the .htaccess file:
Redirect 301 /eski-sayfa.html /yeni-sayfa.html
or
Redirect 301 / https://yeni-site-adi.uzantisi/
Another method is to add the following line among the HTML code inside the <HEAD> tag.
<meta http-equiv="refresh" content="2;url=http://yonlenecek-site.uzantisi/" />
If you are using the Windows operating system, you can use the same HTML code option as above, or you can set up the redirect through the config file. All you need to do is add this code, with the destination pointing at the new address:
<system.webServer>
<httpRedirect enabled="true" destination="http://yeni-site.uzantisi" httpResponseStatus="Permanent" />
</system.webServer>
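To make it clearer what all three variants above do on the wire, here is a small illustrative Python server (not from the original post) that answers requests for the old page with a "301 Moved Permanently" response and a Location header pointing at the new page; the paths reuse the placeholder names from the examples above:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Every request to /eski-sayfa.html is answered with a 301 and a Location
# header pointing at /yeni-sayfa.html; everything else gets a plain 200.
class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/eski-sayfa.html":
            self.send_response(301)
            self.send_header("Location", "/yeni-sayfa.html")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"new page content")

HTTPServer(("127.0.0.1", 8080), RedirectHandler).serve_forever()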
TechSEO360 Crawler Guide – Sitemaps and Technical SEO Audits
For 10 years now, the crawler I use for the technical SEO website audits I do at Search Engine People is what is nowadays called TechSEO360. It is a hidden gem: cost-effective, efficient (it crawls sites of any size), and forward-looking (for example, it had AJAX support before other such crawler tools did). I've written about this website crawler before, but I wanted to do a more comprehensive all-in-one post.
TechSEO360 Explained
TechSEO360 is a technical SEO crawler with highlights being:
Native software for Windows and Mac.
Can crawl very large websites out-of-the-box.
Flexible crawler configuration for those who need it.
Use built-in or custom reports for analyzing the collected website data (although I usually rely on exporting all data to Excel and using its powerful filters, pivoting, automatic formatting, etc.).
Create image, video and hreflang XML sitemaps in addition to visual sitemaps.
How This Guide is Structured
This guide will cover all the most important SEO functionality found in this software.
We will be using the demo website https://Crawler.TechSEO360.com in all our examples.
All screenshots will be from the Windows version – but the Mac version contains the same features and tools.
We will be using TechSEO360 in its free mode, which is the state it switches to when the initial fully functional 30-day free trial ends.
We will be using default settings for website crawl and analysis unless otherwise noted.
We will start by showing how to configure the site crawl and then move on to technical SEO, reports and sitemaps.
Configuring and Starting The Crawl
Most sites will crawl fine using the default settings. This means the only configuration required will typically be to enter the path of the website you wish to analyze, whether it resides on the internet, on a local server or on a local disk. As an easy alternative to manual configuration, it is also possible to apply various "quick presets" which configure the underlying settings. Examples could be:
You know you want to create a video sitemap and want to make sure you can generate the best possible.
You use a specific website CMS that generates many thin-content URLs which should be excluded.
For those who want to dive into the settings, you can assert near-complete control of the crawl process, including:
Crawler Engine
This is where you can mess around with the deeper internals of how HTTP requests are performed. One particular thing is how you can increase the crawling speed: Simply increase the count of simultaneous threads and simultaneous connections – just make sure your computer and website can handle the additional load.
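For context, here is a rough, generic illustration in Python (not TechSEO360's own code) of why more simultaneous connections speed up a crawl: each worker thread keeps one request in flight, so raising the worker count downloads more URLs at the same time. The URL list is a placeholder:

import concurrent.futures
import urllib.request

urls = ["http://crawler.techseo360.com/page%d.html" % i for i in range(1, 21)]  # placeholder URLs

def fetch(url):
    # One open connection per worker thread; more workers means more URLs
    # being downloaded at once (and more load on your machine and the site).
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, resp.status, len(resp.read())
    except Exception as exc:
        return url, None, str(exc)

# Raising max_workers is the rough equivalent of raising the crawler's
# "simultaneous threads / connections" settings.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for url, status, info in pool.map(fetch, urls):
        print(status, info, url)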
Webmaster Filters
Control to what degree the crawler should obey noindex, nofollow, robots.txt and similar.
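As a point of reference, the checks themselves are simple. Below is a small Python sketch (not TechSEO360 code) that tests a URL against robots.txt with the standard library and looks for a meta robots tag in sample markup; the URLs and markup are placeholders:

import re
import urllib.robotparser

# Check whether a crawler is allowed to fetch a URL according to robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://crawler.techseo360.com/robots.txt")
rp.read()
url = "http://crawler.techseo360.com/nofollow.html"
print(rp.can_fetch("*", url))  # True if robots.txt does not disallow it

# A meta robots "noindex"/"nofollow" check is a simple pattern test on the HTML.
html = '<meta name="robots" content="index, nofollow">'   # sample markup
m = re.search(r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)', html, re.I)
print(m.group(1) if m else "no robots meta tag")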
Analysis Filters
Configure rules for which URLs should have their content analyzed. There are multiple “exclude” and “limit-to” filtering options available including URL patterns, file extensions and MIME types.
Output Filters
Similar to "Scan website | Analysis filters", but instead used to control which URLs get "tagged" for removal when a website crawl finishes. URLs excluded by options found in "Scan website | Webmaster filters" and "Scan website | Output filters" can still be kept and shown after the website crawl stops if the option "Scan website | Crawler options | Apply webmaster and output filters after website scan stops" is unchecked. With this combination you:
Get to keep all the information collected by the crawler, so you can inspect everything.
Still avoid the URLs being included when creating HTML and XML sitemaps.
Still get proper “tagging” for when doing reports and exports.
Crawl Progress
During the website crawl, you can see various statistics that show how many URLs have had their content analyzed, how many have had their links and references resolved and how many URLs are still waiting in queues.
Website Overview After Crawl
After a site crawl has finished, the program opens a view with the crawled URLs and their data columns on the left. If you select a URL, you can view further details about it on the right.
Left Side
Here you will find URLs and associated data found during the website scan. By default only a few of the most important data columns are shown. Above this there is a panel consisting of five buttons and a text box. Their purposes are:
#1 Dropdown with predefined “quick reports”. These can be used to quickly configure:
Which data columns are visible.
Which “quick filter options” are enabled.
The active “quick filter text” to further limit what gets shown.
#2 Dropdown to switch between showing all URLs in the website as a flat “list” versus as a “tree”.
#3 Dropdown to configure which data columns are visible (for example, enabling the "Redirects to path" column adds it to the view).
#4 Dropdown to configure which "quick filter options" are selected.
#5 On/off button to activate/deactivate all the "quick filters" functionality.
#6 Box containing the "quick filter text" which is used to further customize what gets shown.
How to use “quick reports” and “quick filters” functionality will be explained later with examples.
Right Side
This is where you can see additional details of the selected URL at the left side. This includes “Linked by” list with additional details, “Links [internal]” list, “Used by” list, “Directory summary” and more.
To understand how to use this when investigating details compare the following two scenarios.
#1 On the left we have selected the URL http://crawler.techseo360.com/noindex-follow.html; we can also see the crawler has tagged it "[noindex][follow]" in the data column "URL flags". On the right, inside the tab "Links [internal]", we can confirm that all links have been followed and view additional details about them.
#2 On the left we have selected the URL http://crawler.techseo360.com/nofollow.html; we can also see the crawler has tagged it "[index][nofollow]" in the data column "URL flags". On the right, inside the tab "Links [internal]", we can confirm that no links have been followed.
Using Quick Reports
As I said, I don't often use these, preferring to Show All Data Columns and then export to Excel. But for those who like this kind of baked-in report from other tools, here are some of the most used quick reports available:
All Types of Redirects
This built-in "quick report" shows all kinds of redirects, including the information necessary to follow redirect chains. Essentially it has:
Changed the visibility of data columns to those most appropriate.
Set the filter text to: [httpredirect|canonicalredirect|metarefreshredirect] -[noindex] 200 301 302 307
Activated filters:
Only show URLs with all [filter-text] found in "URL state flags" column
Only show URLs with any filter-text-number found in "response code" column
With this, a URL has to fulfil the following three conditions to be shown:
Has to point to another URL by either HTTP redirect, canonical instruction or "0 second" meta refresh.
Cannot contain a "noindex" instruction.
Has to have either response code 200, 301, 302 or 307.
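For comparison, this is roughly the information such a report is built from. The sketch below (generic Python, not TechSEO360's implementation) follows a redirect chain one hop at a time and records each URL and status code; the start URL is just an example:

import urllib.error
import urllib.parse
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # do not follow redirects automatically

opener = urllib.request.build_opener(NoRedirect)

def redirect_chain(url, max_hops=10):
    # Record (url, status) pairs until we hit a non-redirecting response.
    hops = []
    for _ in range(max_hops):
        try:
            resp = opener.open(url)
            hops.append((url, resp.status))   # final, non-redirecting URL
            break
        except urllib.error.HTTPError as e:
            hops.append((url, e.code))
            location = e.headers.get("Location")
            if e.code in (301, 302, 307, 308) and location:
                url = urllib.parse.urljoin(url, location)  # next hop in the chain
            else:
                break
    return hops

print(redirect_chain("http://crawler.techseo360.com/"))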
404 Not Found
If you need to quickly identify broken links and URL references, this report is a good choice. With this, the data columns “Linked.List” (e.g. “a” tag), “Used.List” (e.g. “src” attribute) and “Redirected.List” are made visible.
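A minimal stand-alone version of the same idea, for readers who want to sanity-check a handful of URLs outside the tool, could look like this in Python (URLs are placeholders):

import urllib.error
import urllib.request

# Request each URL and report anything that comes back 404 or fails outright.
urls = [
    "http://crawler.techseo360.com/",
    "http://crawler.techseo360.com/this-page-does-not-exist.html",
]
for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(resp.status, url)
    except urllib.error.HTTPError as e:
        print("BROKEN:" if e.code == 404 else e.code, url)
    except urllib.error.URLError as e:
        print("FAILED:", url, e.reason)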
Noindex
Quickly see all pages with the “noindex” instruction.
Duplicate Titles #1
Quickly see all pages with duplicate titles including those with duplicate empty titles.
Duplicate Titles #2
If not overridden by other filters, filter text matches against content inside all visible data columns. Here we have narrowed down our duplicate titles report to those that contain the word “example”.
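Conceptually, duplicate-title detection is just grouping pages by their <title> text. Below is a small illustrative Python sketch (my own, not the tool's code; the URL list is a placeholder):

import re
import urllib.request
from collections import defaultdict

# Group pages by their <title> text to spot duplicates, including empty titles.
urls = ["http://crawler.techseo360.com/", "http://crawler.techseo360.com/nofollow.html"]
titles = defaultdict(list)
for url in urls:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    titles[m.group(1).strip() if m else ""].append(url)

for title, pages in titles.items():
    if len(pages) > 1:
        print(repr(title), "is shared by", pages)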
Title Characters Count
Limit the URLs shown by title characters count. You can control the threshold and if above or below. Similar is available for descriptions.
Title Pixels Count
Limit the URLs shown by title pixels count. You can control the threshold and if above or below. Similar is available for descriptions.
Images and Missing Alt / Anchor Text
Only show image URLs that were either used without any alternative text or linked without any anchor text.
Other Tools
On-page Analysis
By default, comprehensive text analysis is performed on all pages during the website crawl; the option for this resides in "Scan website | Data collection". You can also always analyze single pages without crawling the entire website. Notice that you can see which keywords and phrases are targeted across an entire website if you use the "sum scores for selected pages" button.
Keyword Lists
A flexible keyword list builder that allows you to combine keyword lists and perform comprehensive clean-up.
3rd Party Online Tools
If you need more tools, you can add them yourself and even decide which should be accessible by tabs instead of just the drop-down. The software will automatically pass on the selected URL or similar to the selected online tool. Each online tool is configured by a text file that defines which data is passed and how it is done.
Sitemaps
Sitemap File Types
With 13 distinct sitemap file formats, chances are your needs are covered. This includes XML sitemaps, video sitemaps and image sitemaps.
XML Sitemaps and Hreflang
Even if your website does not include any hreflang markup, TechSEO360 will often be able to generate XML sitemaps with appropriate alternate hreflang information if your URLs contain parts that include a reference to the language-culture or country.
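The kind of URL hint being talked about is a language-culture segment in the path. As a rough illustration (my own sketch, not TechSEO360's actual detection logic), a regular expression can pull such segments out of URLs:

import re

# Guess a language-culture code from URL path segments such as /en-us/ or /de/.
# The pattern and example URLs are illustrative only.
LANG_SEGMENT = re.compile(r"/([a-z]{2}(?:-[a-z]{2})?)/", re.I)

for url in ["https://example.com/en-us/pricing/", "https://example.com/de/preise/"]:
    m = LANG_SEGMENT.search(url)
    print(url, "->", m.group(1).lower() if m else "no language hint")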
XML Image and Video Sitemaps
You can usually speed-up your configuration by using one of the “Quick presets”:
Google video sitemap
Google video sitemap (website has videos hosted externally)
Google image sitemap
Google image sitemap (website has images hosted externally)
If you intend to create both image and video sitemaps, use one of the video choices since they also include all the configuration optimal for image sitemaps.
TechSEO360 uses different methods to calculate which pages, videos and images belong together in generated XML sitemaps – something that can be tricky if an image or video is used in multiple places.
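For reference, a Google image sitemap nests the images used on a page inside that page's <url> entry. Here is a minimal, hand-rolled Python sketch of that structure (placeholder URLs, not output generated by TechSEO360):

import xml.etree.ElementTree as ET

# Build a minimal Google image sitemap: one <url> entry per page, with the
# images used on that page nested inside it.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
IMG_NS = "http://www.google.com/schemas/sitemap-image/1.1"
ET.register_namespace("", NS)
ET.register_namespace("image", IMG_NS)

pages = {"https://example.com/page.html": ["https://example.com/img/photo1.jpg"]}

urlset = ET.Element("{%s}urlset" % NS)
for page, images in pages.items():
    url_el = ET.SubElement(urlset, "{%s}url" % NS)
    ET.SubElement(url_el, "{%s}loc" % NS).text = page
    for img in images:
        img_el = ET.SubElement(url_el, "{%s}image" % IMG_NS)
        ET.SubElement(img_el, "{%s}loc" % IMG_NS).text = img

ET.ElementTree(urlset).write("sitemap-images.xml", encoding="utf-8", xml_declaration=True)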
HTML Sitemaps
Select from the built-in HTML templates or design your own, including the actual HTML/CSS/JS code and various options used when building the sitemaps.
Other Functionality
Javascript and AJAX Support
You can configure TechSEO360 to search Javascript code for file and URL references by checking the option “Scan website | Crawler options | Try search inside Javascript”.
If you are dealing with an AJAX website you can switch to an AJAX enabled solution in “Scan website | Crawler engine | Default path type and handler”.
Custom Text and Code Search
It can often be useful to search for text and code across an entire website – e.g. to find pages using old Google Analytics code or similar.
You can configure multiple searches in "Scan website | Data Collection | Search custom strings, code and text patterns".
The results are shown in the data column “Page custom searches” showing a count for each search – optionally with the content extracted from the pattern matching.
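The same kind of check is easy to reproduce ad hoc. Here is a small Python sketch (placeholder URL, illustrative patterns only) that counts occurrences of an old ga.js reference and UA- property IDs on a page:

import re
import urllib.request

# Count occurrences of custom patterns on each page, e.g. pages that still
# reference the old ga.js Google Analytics snippet.
patterns = {
    "old ga.js analytics": re.compile(r"google-analytics\.com/ga\.js"),
    "UA- property id": re.compile(r"UA-\d{4,10}-\d+"),
}
for url in ["http://crawler.techseo360.com/"]:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    counts = {name: len(rx.findall(html)) for name, rx in patterns.items()}
    print(url, counts)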
Calculated Importance Score
TechSEO360 calculates importance of all pages based on internal linking and internal redirects.
You can see this by enabling visibility of the data column “Importance score scaled”.
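TechSEO360 does not document its exact formula here, but link-based importance scores generally work like the classic PageRank iteration. The following is a generic illustration over a made-up three-page link graph, not the tool's actual algorithm:

# Generic PageRank-style importance calculation over an internal link graph.
# The graph below is invented for the example.
links = {
    "/": ["/products.html", "/about.html"],
    "/products.html": ["/", "/about.html"],
    "/about.html": ["/"],
}
pages = list(links)
score = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(20):  # a handful of iterations is enough for a tiny graph
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            new[target] += damping * score[page] / len(outlinks)
    score = new

for page, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(round(s, 3), page)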
Similar Content Detection
Sometimes pages are similar but not exact duplicates. To find these, you can enable the option "Scan website | Data Collection | Tracking and storage of extended data | Perform keyword analysis for all pages" before the scan.
When viewing results enable visibility of the data column “Page content duplicates (visual view)” and you will get a graphical representation of the content.
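As background, "similar but not identical" detection usually boils down to comparing pages as word-frequency vectors. A crude illustrative sketch (not the tool's method; the texts stand in for extracted page content):

import math
import re
from collections import Counter

# Compare two pages by the cosine similarity of their word-frequency vectors:
# values close to 1.0 indicate near-duplicate content.
def similarity(text_a, text_b):
    a = Counter(re.findall(r"[a-z]+", text_a.lower()))
    b = Counter(re.findall(r"[a-z]+", text_b.lower()))
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(similarity("blue widget product page for widgets",
                 "blue widget product page for gadgets"))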
Command Line Interface (CLI)
If you are using the trial or paid version, you can use the command line. Here is an example:
"techseo.exe" -exit -scan -build ":my-project.ini" @override_rootpath=http://example.com@
The above passes a project file with all options defined, overrides the website domain and instructs TechSEO360 to run a complete crawl, build the sitemaps and exit.
Importing Data
The "File | Import…" functionality works intelligently and adapts to the kind of data you are importing.
Exporting Data
The “File | Export…” functionality can export data to CSV, Excel, HTML and more depending on what you are exporting. To use:
Select the control with the data you wish to export.
Apply options so the control only contains the data you wish to export. (This can e.g. include “data columns”, “quick filter options” and “quick filter text”)
Click the “Export” button and you now have the data you want in the format you want.
TechSEO360 Pricing
There are essentially three different states:
When you first download the software you get a fully functional 30-day free trial.
When the trial expires it continues to work in free mode, which allows you to crawl 500 pages per website.
The yearly subscription price is $99 for a single-user license, which can be used on both Windows and Mac.
You can download the trial for Windows and Mac at https://TechSEO360.com.
from Marketing Automation and Digital Marketing Blog http://amarketingautomation.com/techseo360-crawler-guide-sitemaps-and-technical-seo-audits/
Redirect website traffic to https with IIS hosting
Securing the traffic coming to your website is of utmost importance. Websites running without an SSL certificate, i.e. without https, cannot be trusted today. If your application is hosted on IIS, one trick to redirect (not URL-rewrite) all traffic to https is to create a new website in IIS and add httpRedirect to your web.config. HTTP Redirection is not available on the default installation of IIS…
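Once the redirect site is in place, it is easy to verify from outside that plain HTTP requests get the expected permanent redirect to https. A minimal Python check (the host name is a placeholder):

import http.client

# Request the plain-HTTP site and confirm it answers with a permanent redirect
# whose Location header points at the https:// address.
conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)        # expect: 301 Moved Permanently
print(resp.getheader("Location"))      # expect: https://www.example.com/
conn.close()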
Petition against the new noise protection ordinance
The Art 2 Rock is not really the platform where I normally voice political opinions. But when I get proposals like the new noise protection ordinance (Lärmschutzverordnung) to read, I am glad when someone speaks up about it. As in this case Felix Mechelke from Lucerne. Because this ordinance makes life even harder for all event organizers, I support this petition against it. Read through the text and use the link at the end if you conclude that yes, it is worth pushing back for once.
From 2019, concerts and events that exceed a volume of 93 decibels (dB) are to be recorded and registered 14 days in advance.
This requires measuring equipment that costs several thousand francs (experts speak of 5,000 CHF on average). On top of that come the costs for the specialized staff who operate and maintain the equipment. Small concert halls and event venues can hardly bear these costs. If the rules are not followed, heavy fines can be expected.
If smaller events, parties or concerts are still to take place in the future, the costs and the bureaucratic effort must be shifted to the federal government. If events are to be recorded, the federal government must pay for the measuring equipment as well as for the construction work for the required recovery zone (Ausgleichszone).
Alternatively, the federal government could support concert venues financially when they decide to install measuring equipment. That would create an incentive to get behind the new noise protection ordinance.
Reasoning
The new noise protection ordinance is an incredible piece of paternalism and completely inconsistent. Every visitor to an event decides for themselves whether they want to expose themselves to the noise. Free earplugs, which greatly reduce the risk from sound exposure, are already handed out at concerts.
The bureaucratic effort (see the NZZ article) is enormous and makes organizing events all but impossible (registration of the event at least 14 days in advance).
The Federal Office of Public Health does not seem to realize that certain instruments produce high volumes even without amplification. During a soundcheck with a drummer in a small concert hall (150 people), unamplified levels of up to 100 dB are already reached. Not to mention a Guggenmusik band playing unamplified in the old town, or a horde of party guests singing along to the DJ's last song.
Unlike at parties, concerts or weddings, however, citizens do NOT voluntarily expose themselves to the following nerve-racking noise:
Aircraft noise
Military aviation
Road traffic noise
Construction noise
Yet in these areas there is little to no engagement on the part of the federal government.
If, in the future, concerts are not to be limited to shows by big stars in the large concert halls (imagine a concert at living-room volume), something must be done about the new ordinance. It would mean financial ruin for bars, clubs and smaller stages, or the end of concerts, parties and weddings where people sing along. Only the Guggenmusik at carnival could carry on playing merrily.
Sources and more information:
www.nzz.ch/zuerich/die-verteufelung-des-schalls-ld.1411278
www.20min.ch/schweiz/news/story/Laermschutz-31035440?httpredirect
tageswoche.ch/kultur/nur-die-guggen-duerfen-weiter-laermen-neuer-schallschutz-bedroht-beizen-und-bars/
And here is the link to the petition
Petition against the new noise protection ordinance was originally published on The Art 2 Rock
How to clear Google Chrome's redirect cache for a single page

As you probably already know, given that you have stumbled upon this article, Google Chrome, like most other browsers, locally caches the redirect instructions it receives over HTTP, what in English are called HTTP 301 redirects: this means that, when it receives an HTTP response of type 301 - Permanent, the browser stores that "answer" for a certain amount of time and avoids asking the server for further HTTP responses for that same URL. This is perfectly correct behavior on the browser's part, fully in line with RFC 7231 Section 6.4.2, which reads as follows: "A 301 response is cacheable by default; i.e., unless otherwise indicated by the method definition or explicit cache controls (see Section 4.2.2 of [RFC7234])."
The main consequence of this HTTP response caching is that, whenever a page answers our browser's request with a 301 redirect, that page effectively becomes inaccessible for quite a while. This is not a problem when the 301 really is permanent, but it can cause trouble during web development, for example while configuring an IIS or NGINX web server, the moment you make a redirect mistake: a page is accidentally configured to return an HTTP 301 redirect to the browser, the error is then cached, and the page becomes effectively inaccessible for some time, causing no small amount of headaches for the poor web developer.
Slips of this kind are fairly common, and to some extent unavoidable when experimenting with caching, load balancing and redirect setups on test environments: for my part I can say it happens to me all the time, which is why I long ago got used to working with browsers whose cache is disabled and/or wiped automatically on close. In the rare cases where I have to run these experiments on a browser that is not mine, or that for some reason I cannot configure that way, I get by with Chrome's Incognito window, which, as you certainly know, clears the cache every time the browser is closed.
If this problem shows up on a browser used for everyday browsing, you can get rid of the cached redirect in the following ways:
#1: Clear all the browser's cached data
The simplest way is, of course, to clear the browser's entire cache as follows: Menu > Settings > Show advanced settings > Privacy, then click Clear browsing data... Make sure the "Cached images and files" checkbox is ticked (the rest is optional, depending on what you want to delete). Click the Clear browsing data button. This will fix the problem, but it has the side effect of also wiping the cache for every other site.
#2: Clear the cached data for a specific page
If you only want to remove the cached data for a single URL, you can proceed as follows.
Press SHIFT+CTRL+I to open the Developer Tools panel.
Open the Network tab and enable the Disable cache option: this way, the browser will never consult its local cache for any HTTP request.
Type the URL you want to purge from the browser cache into the address bar and press Enter.
Click and hold the Reload button, the one to the left of the address bar, until a panel of advanced options opens: select Empty Cache and Hard Reload.
As soon as the page has fully reloaded, untick the Disable cache checkbox to turn that feature off again.
IMPORTANT: the Developer Tools panel must stay open for the whole procedure, otherwise the Empty Cache and Hard Reload option will not be available.
The procedure described above applies to Google Chrome, but it can easily be adapted to other browsers (Mozilla Firefox, IE and Edge) that support the same features.
#3: Use Incognito Mode
As an alternative to the previous options, remember that you can always reach the "uncached" version of the page by using Google Chrome's Incognito Mode, that is, by opening an Incognito window: this mode is launched without taking the browser cache into account and wipes the entire browsing cache every time the browser is closed. Of course, using Incognito Mode will not fix the problem in your main browser; to deal with that, you will have to fall back on one of the two options described above.