#https //amazon.com/us/code
Explore tagged Tumblr posts
amzoncode · 1 year ago
Text
Unlocking the Power of Amazon Prime: A Guide to Activation with Amazon.com/Code
Tumblr media
In the digital age, convenience is key, and Amazon Prime stands as a testament to that ethos. Whether it's lightning-fast deliveries, a vast library of streaming content, or exclusive deals, Amazon Prime has become synonymous with seamless, all-encompassing service. To tap into this world of benefits, activation through Amazon.com/code is your gateway to unlocking the full potential of a Prime membership.
Understanding Amazon.com/Code:
Amazon.com/code serves as the bridge between your device and Amazon Prime, facilitating a smooth activation process. This unique alphanumeric code is provided when you sign up for Amazon Prime, allowing you to link your device and enjoy Prime's extensive offerings.
Activating Amazon Prime with Amazon.com/Code:
Registration: Begin by signing up for Amazon Prime on the official website or through the Prime app on your device. Once registered, you'll receive a prompt to enter the activation code.
Accessing Amazon.com/Code: Navigate to amazon.com/code using a web browser on your computer, smartphone, or tablet. Ensure you're logged in with the same Amazon account used for Prime registration.
Entering the Code: Input the unique code provided during registration into the designated field on the Amazon.com/code page. Double-check the code for accuracy to avoid any errors.
Confirmation: After entering the code, click on the "Submit" or "Continue" button. You'll then receive confirmation that your device is successfully linked to your Amazon Prime account.
Enjoy Prime Benefits: With activation complete, dive into the myriad benefits of Amazon Prime, from expedited shipping to a vast library of movies, TV shows, music, and more.
Affirmative Benefits of Amazon Prime:
Fast Shipping: Enjoy free, expedited shipping on millions of eligible items, with options for same-day or next-day delivery in select areas.
Prime Video: Access thousands of movies, TV shows, and Amazon Originals, available for streaming on-demand across multiple devices.
Prime Music: Stream ad-free access to millions of songs and curated playlists, catering to every musical taste and mood.
Exclusive Deals: Gain early access to Lightning Deals and exclusive discounts, maximizing savings on a wide range of products.
Prime Reading: Explore a rotating selection of ebooks, magazines, comics, and more, all included with your Prime membership.
By activating Amazon Prime through Amazon.com/code, you're not just gaining access to a subscription service; you're opening the door to a world of convenience, entertainment, and savings. Unlock the power of Amazon Prime today and experience online convenience firsthand.
0 notes
berenwrites · 3 months ago
Text
Kindle Book Download - The Faster Way
So I saw a post on Tumblr yesterday, which I am sorry I cannot find again. It was talking about how Kindle are removing the download ability on the 26th Feb and one response mentioned a Tampermonkey script to help with downloading all of our Kindle books. If someone can point me to the post, I will credit the person who mentioned it - thank you.
Well the script makes it about half as frustrating to download everything as the only way Kindle provide it and it took me a while to figure out how to use it, so I thought I would elaborate. I went from being able to do 8 pages of books in a day, to doing 37 - I have another 20 to go.
Tampermonkey is a browser plugin which allows scripts to be run that alter a page. I spent a good hour trying to work out how to run the script because I thought Tampermonkey was the name of the author or something😆 . It can be found here: https://www.tampermonkey.net/
There is an FAQ about how to install it here, along with how to install scripts: https://www.tampermonkey.net/faq.php
The script to alter the Amazon download page is here: https://github.com/chrishol/greasemonkey-scripts/blob/main/download-all-kindle-books.js
The script puts a button at the top of the page on the right that says "Trigger Download" (seen in green below)
Tumblr media
This button when pressed will do all the button presses for the downloads for you so you don't have to click everything yourself. All you have to do is confirm the save to your computer.
On my PC I can set it going, wait for the first save to come up, then click away and leave it for about 3 mins while I'm doing other things and then click save 25 times once it's done. Others have to click the save after 8-10 saves have been queued up or it sits there just waiting - not sure why, but it happens.
If the script does not work and the button does not appear for you after you have told Tampermonkey to run on the page, check the line in the top of the code that starts with
// @ match (no space after the @ in the actual script)
Tumblr media
The script is set up for Amazon.com, so if you are on Amazon.co.uk like me, or another of the Amazon sites, you will need to edit this, to have the right amazon in the URL.
E.g. mine looks like: https://www.amazon.co.uk/hz/mycd/digital-console/contentlist/booksAll/*
The rest of the URL will be exactly the same.
The match line is basically telling the script what page the script should run on.
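For reference, this is roughly what the edited header block could look like for the UK site. This is a sketch: the @name line and any other header lines in the real script may differ, so only change the @match URL and leave the rest as you found it.

```javascript
// ==UserScript==
// @name         Download All Kindle Books
// @match        https://www.amazon.co.uk/hz/mycd/digital-console/contentlist/booksAll/*
// ==/UserScript==
```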
Also of note - the code allows you to change which Kindle device gets picked in the list if you have more than one. If your Kindle is first in the list, you're all set. If not, scroll down to around line #77 until you find this part and follow the instructions in the comment:
clickElementWithin(dropdown, 'span[id^="download_and_transfer_list_"]'); // Choose the first Kindle in list
// If you want the second Kindle in the list, change the above line to this instead (for the third, you'd change the [1] to [2] and so on):
// dropdown.querySelectorAll('span[id^="download_and_transfer_list_"]')[1].click();
Tumblr media
And those are the only things that tripped me up - so happy downloading. Hope it all works as well for you as it is working for me.
22 notes · View notes
nurseshannansreviews · 8 months ago
Text
Tumblr media Tumblr media Tumblr media
👩‍🍳 I love cooking with heart-healthy oils, especially olive oil! This Flairosol dispenser has tons of great features and works with all types of cooking liquids, including olive oil, avocado oil, soy sauce, wine, canola oil, water, and more! I use it for frying, grilling, baking, seasoning, salad dressing, air frying, and drizzling. Flairosol's patented spray nozzle dispenses about 1 g of oil with each press and has a fine filter to avoid clogs! The fan-spray nozzle provides a precise amount of oil mist for even coverage; one full stroke sprays approximately 1 g of cooking oil to help control daily calorie intake. Another awesome feature is the continuous spray, with a Dutch-patented nozzle for fine atomization. It also has an innovative leak-proof design with an anti-drip trigger, and the wide glass bottle opening makes refilling easy.
👩‍🍳 To learn more about this innovative oil sprayer and enjoy a safe, healthy way to control the amount of cooking oils and dressings visit the link in my bio Amazon Storefront -
https://www.amazon.com/dp/B0BLHJSPHW?maas=maas_adg_537BABA5C2D889E0BF9C629BB5EE9609_afap_abs&ref_=aa_maas&tag=maas
Get 10% off with discount code:  R7SIPFEZ
2 notes · View notes
thpn · 5 months ago
Text
COVID-19, Long COVID, COVID Vaccine Shedding Exposure, SARS-CoV-2, Spike Protein Disease Protocol
In order of importance:
Professional Formulas Heal-All (Prunella vulgaris) 1:1 Aerial Parts Extract. Take 5 mL twice a day. Professional referral required. Use Doctor Code MJY567Z at https://professionalformulas.com/patients/ https://professionalformulas.com/product/heal-all-prunella-vulgaris/
5 mL pipettes: amazon.com/dp/B0DFG4WBPW
NicoDerm CQ Nicotine Patches, 21 mg. Cut into 6 equal 3.5 mg patches. Use one a day and secure with medical tape. https://www.nicorette.com/nicoderm-cq/clear-21mg-14ct/
COVID vaccine injury requires gene therapy, which is beyond the scope of this protocol.
1 note · View note
thornestar · 5 months ago
Text
youtube
What in the world do you get for a flaming pink skeleton who has everything? Pink fun, of course! Carmella Gambian Lucreziax IV (Carm for short) wants to open her gift early! When a skeleton asks nicely I tend to listen. SFW
Discount code under the cut!
Product link: https://amzn.to/3NWgSGf
Discount: 50% OFF
Code: AB9FTOCH
Reg. price: $35.99
Final price: $17.99
Pink fun is an egg shaped vibrator with a convenient tail. A little pink periscope for the power and control buttons! You can also use the remote control, or the app.
Use it with the App to sync vibrations to videos or music. Drawing mode lets you draw a picture to control your experience! Or (this is my 😍 ) you can add friends and let them control your vibrator no matter where they are in the world!
Tumblr media
Powerful lil thing! ☠️
0 notes
zebratoys · 7 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Sacred Geometry is created by using geometric shapes and mathematical equations coded with precise measurements and divine proportional ratios, offering a glimpse into the underlying patterns and principles that shape our existence while inspiring a profound sense of wonder and awe.  The Hexagon empowers a state of tranquility, and promotes body-mind-spirit synergy that broadens awareness and consciousness. The Hexagon elements are designed to nurture self-transcendence and support transformations that deepen spiritual journeys' nurturing harmony, balance, and evolution. The collection features Hexagon designs composed of marvelous symmetrical patterns alongside elements of Sacred Geometry symbols, magnetizing geometric shapes and divine cosmic proportions that glimmer with pure love and holistic therapeutic essences of a boundless and timeless state.
Order the Hexagon Coloring Book on Amazon: https://www.amazon.com/dp/B0DHKP7G4V
0 notes
faruk999043 · 9 months ago
Text
🔰★☆100 YEARS OLD: ELIXIR FOR A LONG AND HAPPY LIFE☆★🔰
Best Selling ⭐⭐⭐⭐⭐ ratings
    ✅Click here: https://www.amazon.com/dp/B0DBQ8FSXB 
About 100 YEARS OLD book, someone said:
"This book should be the true story of each person. It should represent a simple guideline to follow from start to finish of our long and happy life."
People spend a lot of time seeking the essence of a fulfilling life, often forgetting that the solution lies in the simplest and closest things.
Many of us find ourselves "halfway there," burdened with regrets and doubts.
100 YEARS OLD - Elixir For A Long And Happy Life offers a moment of clarity and simplicity, steering clear of complex texts and editions.
In this book, the elements are essential:
Real stories
Research findings
Practical advice
QR code links to real places
Narratives
Numerous recipes
Illustrated yoga exercises
Humorous stories
With humility, Stefano Tosti, in collaboration with Hermann Meyer, embarks on a profound and compelling journey to rediscover and reanalyze secrets already known and truths perhaps told by our grandparents.
Download the eBook edition on Amazon or order the light and easy-to-carry paperback version, available at reduced price for a limited time. For a unique and thoughtful gift for someone special, consider the hardcover premium edition, enriched with premium creamy paper and beautiful photographs.
Tumblr media
0 notes
nurseshannansreviews · 2 years ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
ad. ⭕ Red light therapy at home! I've been using this @comfytemp red light therapy belt for #backpain and it works wonders. I love how red light is a healthy and safe #drugfree therapy. I've been using it for 20 minutes every day for the past 3 weeks and have way less pain! This #comfytemp belt has lots of professional lamp beads with the new three-chip technology.
⭕ The reason it works so well is because the #infraredlight penetrates deep into tissue cells to help relieve pain and increase blood circulation, reduce inflammation, #jointpain, muscle tension and even promote cell regeneration. It also improves skin appearance and accelerates #healing from things like sports injuries. It's also versatile and adjustable with 4 modes, red light mode, #infraredlight mode, red + #infrared light mode and pulse mode. It can be used for shoulder stiffness, waist back pain, #musclepain, joint inflammation, #sportsinjury, and menstrual pain. Plus it's hands-free and has a long cord so I can use it while lying on the sofa or any other comfortable place. Sometimes I connect it to my laptop in our office.
⭕ I think this #comfytemp belt makes a unique and useful gift for anyone looking for a healthy and safe method of #physicaltherapy at home. To learn more and get 10% off on Amazon use code: RLTherapy
amazon: https://amzn.to/3NNAJZ9
Or visit
official site: https://bit.ly/3peV3ct
#organizedmom #reviewer #producttester #producttesting #organization #getorganized #organizedhome #californiablogger #organizedmoms #organizedlife #organizedliving #momblogger
1 note · View note
suatatan · 11 months ago
Text
Quotes from the book Data Science on AWS
Data Science on AWS
Antje Barth, Chris Fregly
As input data, we leverage samples from the Amazon Customer Reviews Dataset [https://s3.amazonaws.com/amazon-reviews-pds/readme.html]. This dataset is a collection of over 150 million product reviews on Amazon.com from 1995 to 2015. Those product reviews and star ratings are a popular customer feature of Amazon.com. Star rating 5 is the best and 1 is the worst. We will describe and explore this dataset in much more detail in the next chapters.
*****
Let’s click Create Experiment and start our first Autopilot job. You can observe the progress of the job in the UI as shown in Figure 1-11.
--
Amazon AutoML experiments
*****
...When the Feature Engineering stage starts, you will see SageMaker training jobs appearing in the AWS Console as shown in Figure 1-13.
*****
...the training jobs Autopilot built to find the best performing model. You can select any of those training jobs to view the job status, configuration, parameters, and log files.
*****
The Model Tuning creates a SageMaker Hyperparameter tuning job as shown in Figure 1-15. Amazon SageMaker automatic model tuning, also known as hyperparameter tuning (HPT), is another functionality of the SageMaker service.
*****
You can find an overview of all AWS instance types supported by Amazon SageMaker and their performance characteristics here: https://aws.amazon.com/sagemaker/pricing/instance-types/. Note that those instances start with ml. in their name.
Optionally, you can enable data capture of all prediction requests and responses for your deployed model. We can now click on Deploy model and watch our model endpoint being created. Once the endpoint shows up as In Service
--
Once Autopilot finds the best hyperparameters, you can deploy the model for later use
*****
Here is a simple Python code snippet to invoke the endpoint. We pass a sample review (“I loved it!”) and see which star rating our model chooses. Remember, star rating 1 is the worst and star rating 5 is the best.
*****
If you prefer to interact with AWS services in a programmatic way, you can use the AWS SDK for Python boto3 [https://boto3.amazonaws.com/v1/documentation/api/latest/index.html], to interact with AWS services from your Python development environment.
*****
In the next section, we describe how you can run real-time predictions from within a SQL query using Amazon Athena.
*****
Amazon Comprehend. As input data, we leverage a subset of Amazon’s public customer reviews dataset. We want Amazon Comprehend to classify the sentiment of a provided review. The Comprehend UI is the easiest way to get started. You can paste in any text and Comprehend will analyze the input in real-time using the built-in model. Let’s test this with a sample product review such as “I loved it! I will recommend this to everyone.” as shown in Figure 1-23.
*****
Comprehend Custom is another example of automated machine learning that enables the practitioner to fine-tune Comprehend's built-in model to a specific dataset
*****
We will introduce you to Amazon Athena and show you how to leverage Athena as an interactive query service to analyze data in S3 using standard SQL, without moving the data. In the first step, we will register the TSV data in our S3 bucket with Athena, and then run some ad-hoc queries on the dataset. We will also show how you can easily convert the TSV data into the more query-optimized, columnar file format Apache Parquet.
--
Always convert data in S3 to Parquet
*****
One of the biggest advantages of data lakes is that you don’t need to pre-define any schemas. You can store your raw data at scale and then decide later in which ways you need to process and analyze it. Data Lakes may contain structured relational data, files, and any form of semi-structured and unstructured data. You can also ingest data in real time.
*****
Each of those steps involves a range of tools and technologies, and while you can build a data lake manually from the ground up, there are cloud services available to help you streamline this process, i.e. AWS Lake Formation.
Lake Formation helps you to collect and catalog data from databases and object storage, move the data into your Amazon S3 data lake, clean and classify your data using machine learning algorithms, and secure access to your sensitive data.
*****
From a data analysis perspective, another key benefit of storing your data in Amazon S3 is that it shortens the “time to insight” dramatically, as you can run ad-hoc queries directly on the data in S3 without going through complex ETL (Extract-Transform-Load) processes and data pipelines.
*****
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so you don’t need to manage any infrastructure, and you only pay for the queries you run.
*****
With Athena, you can query data wherever it is stored (S3 in our case) without needing to move the data to a relational database.
*****
...which Athena and Redshift Spectrum can use to locate and query data.
*****
Athena queries run in parallel over a dynamic, serverless cluster which makes Athena extremely fast -- even on large datasets. Athena will automatically scale the cluster depending on the query and dataset -- freeing the user from worrying about these details.
*****
Athena is based on Presto, an open source, distributed SQL query engine designed for fast, ad-hoc data analytics on large datasets. Similar to Apache Spark, Presto uses high RAM clusters to perform its queries. However, Presto does not require a large amount of disk as it is designed for ad-hoc queries (vs. automated, repeatable queries) and therefore does not perform the checkpointing required for fault-tolerance.
*****
Apache Spark is slower than Athena for many ad-hoc queries.
*****
For longer-running Athena jobs, you can listen for query-completion events using CloudWatch Events. When the query completes, all listeners are notified with the event details including query success status, total execution time, and total bytes scanned.
*****
With a functionality called Athena Federated Query, you can also run SQL queries across data stored in relational databases (such as Amazon RDS and Amazon Aurora), non-relational databases (such as Amazon DynamoDB), object storage (Amazon S3), and custom data sources. This gives you a unified analytics view across data stored in your data warehouse, data lake and operational databases without the need to actually move the data.
*****
You can access Athena via the AWS Management Console, an API, or an ODBC or JDBC driver for programmatic access. Let’s have a look at how to use Amazon Athena via the AWS Management Console.
*****
When using LIMIT, you can better sample the rows by adding TABLESAMPLE BERNOULLI(10) after the FROM. Otherwise, you will always return the data in the same order that it was ingested into S3, which could be skewed towards a single product_category, for example. To reduce code clutter, we will just use LIMIT without TABLESAMPLE.
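As a quick sketch of what that sampling looks like (the table and column names here are assumptions based on the earlier quotes, not taken from the book verbatim):

```sql
-- Sample roughly 10% of rows instead of always reading them in ingestion order
SELECT product_category, star_rating, review_body
FROM dsoaws.amazon_reviews_tsv TABLESAMPLE BERNOULLI(10)
LIMIT 100;
```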
*****
In a next step, we will show you how you can easily convert that data now into the Apache Parquet columnar file format to improve the query performance. Parquet is optimized for columnar-based queries such as counts, sums, averages, and other aggregation-based summary statistics that focus on the column values vs. row information.
*****
...selected for DATABASE, then choose “New Query” and run the following “CREATE TABLE AS” (CTAS for short) SQL statement:
CREATE TABLE IF NOT EXISTS dsoaws.amazon_reviews_parquet
WITH (format = 'PARQUET',
      external_location = 's3://data-science-on-aws/amazon-reviews-pds/parquet',
      partitioned_by = ARRAY['product_category']) AS
SELECT marketplace, ...
*****
One of the fundamental differences between data lakes and data warehouses is that while you ingest and store huge amounts of raw, unprocessed data in your data lake, you normally only load some fraction of your recent data into your data warehouse. Depending on your business and analytics use case, this might be data from the past couple of months, a year, or maybe the past 2 years. Let’s assume we want to have the past 2 years of our Amazon Customer Reviews Dataset in a data warehouse to analyze year-over-year customer behavior and review trends. We will use Amazon Redshift as our data warehouse for this.
*****
Amazon Redshift is a fully managed data warehouse which allows you to run complex analytic queries against petabytes of structured data. Your queries are distributed and parallelized across multiple nodes. In contrast to relational databases which are optimized to store data in rows and mostly serve transactional applications, Redshift implements columnar data storage which is optimized for analytical applications where you are mostly interested in the data within the individual columns.
*****
Redshift Spectrum, which allows you to directly execute SQL queries from Redshift against exabytes of unstructured data in your Amazon S3 data lake without the need to physically move the data. Amazon Redshift Spectrum automatically scales the compute resources needed based on how much data is being received, so queries against Amazon S3 run fast, regardless of the size of your data.
*****
We will use Amazon Redshift Spectrum to access our data in S3, and then show you how you can combine data that is stored in Redshift with data that is still in S3.
This might sound similar to the approach we showed earlier with Amazon Athena, but note that in this case we show how your Business Intelligence team can enrich their queries with data that is not stored in the data warehouse itself.
*****
So with just one command, we now have access and can query our S3 data lake from Amazon Redshift without moving any data into our data warehouse. This is the power of Redshift Spectrum.
But now, let’s actually copy some data from S3 into Amazon Redshift. Let’s pull in customer reviews data from the year 2015.
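A sketch of what those two steps can look like in practice. The schema, table, and IAM role names below are illustrative assumptions, not commands quoted from the book:

```sql
-- Register the S3 data lake as an external schema, queryable via Redshift Spectrum
CREATE EXTERNAL SCHEMA IF NOT EXISTS athena
FROM DATA CATALOG
DATABASE 'dsoaws'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role';

-- Physically load the 2015 reviews from S3 into a local Redshift table
COPY redshift.amazon_reviews_2015
FROM 's3://data-science-on-aws/amazon-reviews-pds/parquet/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS PARQUET;
```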
*****
You might ask yourself now, when should I use Athena, and when should I use Redshift? Let’s discuss.
*****
Amazon Athena should be your preferred choice when running ad-hoc SQL queries on data that is stored in Amazon S3. It doesn’t require you to set up or manage any infrastructure resources, and you don’t need to move any data. It supports structured, unstructured, and semi-structured data. With Athena, you are defining a “schema on read” -- you basically just log in, create a table and you are good to go.
Amazon Redshift is targeted at modern data analytics on large, petabyte-scale sets of structured data. Here, you need to have a predefined “schema on write”. Unlike serverless Athena, Redshift requires you to create a cluster (compute and storage resources), ingest the data, and build tables before you can start to query, but caters to performance and scale. So for any highly-relational data with a transactional nature (data gets updated), workloads which involve complex joins, and sub-second latency requirements, Redshift is the right choice.
*****
But how do you know which objects to move? Imagine your S3 data lake has grown over time, and you might have billions of objects across several S3 buckets in S3 Standard storage class. Some of those objects are extremely important, while you haven’t accessed others maybe in months or even years. This is where S3 Intelligent-Tiering comes into play.
Amazon S3 Intelligent-Tiering automatically optimizes your storage cost for data with changing access patterns by moving objects between the frequent-access tier, optimized for frequently used data, and the lower-cost infrequent-access tier, optimized for less-accessed data.
*****
Amazon Athena offers ad-hoc, serverless SQL queries for data in S3 without the need to set up, scale, or manage any clusters.
Amazon Redshift provides the fastest query performance for enterprise reporting and business intelligence workloads, particularly those involving extremely complex SQL with multiple joins and subqueries across many data sources including relational databases and flat files.
*****
To interact with AWS resources from within a Python Jupyter notebook, we leverage the AWS Python SDK boto3, the Python DB client PyAthena to connect to Athena, and SQLAlchemy as a Python SQL toolkit to connect to Redshift.
*****
Amazon QuickSight is a fast, easy-to-use business intelligence service to build visualizations, perform ad-hoc analysis, and build dashboards from many data sources - and across many devices.
*****
We will also introduce you to PyAthena, the Python DB Client for Amazon Athena, that enables us to run Athena queries right from our notebook.
*****
There are different cursor implementations that you can use. While the standard cursor fetches the query result row by row, the PandasCursor will first save the CSV query results in the S3 staging directory, then read the CSV from S3 in parallel down to your Pandas DataFrame. This leads to better performance than fetching data with the standard cursor implementation.
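A minimal sketch of wiring up the PandasCursor with PyAthena follows. The staging directory and region are placeholder assumptions, the import path follows recent PyAthena releases, and the imports are deferred into the function so this remains a sketch rather than a live connection:

```python
def athena_query_to_dataframe(sql, s3_staging_dir, region_name="us-east-1"):
    """Run an Athena query and fetch the result as a Pandas DataFrame.

    Uses PyAthena's PandasCursor, which reads the CSV results from the
    S3 staging directory in parallel instead of fetching row by row.
    """
    from pyathena import connect                      # deferred: sketch only
    from pyathena.pandas.cursor import PandasCursor

    cursor = connect(
        s3_staging_dir=s3_staging_dir,  # e.g. "s3://my-bucket/athena/staging/"
        region_name=region_name,
        cursor_class=PandasCursor,
    ).cursor()
    return cursor.execute(sql).as_pandas()
```

Swapping `PandasCursor` for the default cursor changes only the fetch path; the query API stays the same.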
*****
We need to install SQLAlchemy, define our Redshift connection parameters, query the Redshift credentials from AWS Secrets Manager, and obtain our Redshift endpoint address. Finally, we create the Redshift query engine.
*****
Create Redshift Query Engine
from sqlalchemy import create_engine
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(redshift_username, redshift_pw, redshift_endpoint_address, redshift_port, redshift_database))
*****
Detect Data Quality Issues with Apache Spark
*****
Data quality issues can halt a data processing pipeline in its tracks. If these issues are not caught early, they can lead to misleading reports (e.g., double-counted revenue), biased AI/ML models (skewed towards or against a single gender or race), and other unintended data products.
To catch these data issues early, we use Deequ, an open source library from Amazon that uses Apache Spark to analyze data quality, detect anomalies, and even “notify the Data Scientist at 3am” about a data issue. Deequ continuously analyzes data throughout the complete, end-to-end lifetime of the model from feature engineering to model training to model serving in production.
*****
Learning from run to run, Deequ will suggest new rules to apply during the next pass through the dataset. Deequ learns the baseline statistics of our dataset at model training time, for example - then detects anomalies as new data arrives for model prediction. This problem is classically called “training-serving skew”. Essentially, a model is trained with one set of learned constraints, then the model sees new data that does not fit those existing constraints. This is a sign that the data has shifted - or skewed - from the original distribution.
*****
Since we have 130+ million reviews, we need to run Deequ on a cluster vs. inside our notebook. This is the trade-off of working with data at scale. Notebooks work fine for exploratory analytics on small data sets, but they are not suitable for processing large data sets or training large models. We will use a notebook to kick off a Deequ Spark job on a cluster using SageMaker Processing Jobs.
*****
You can optimize expensive SQL COUNT queries across large datasets by using approximate counts.
*****
HyperLogLog counting is a big deal in analytics. We always need to count users (daily active users), orders, returns, support calls, etc. Maintaining super-fast counts over an ever-growing dataset can be a critical advantage over competitors.
Both Redshift and Athena support HyperLogLog (HLL), a type of “cardinality-estimation” or COUNT DISTINCT algorithm designed to provide highly accurate counts (<2% error) in a small fraction of the time (seconds), while requiring only a tiny fraction of the storage (about 1.2KB) to count 130+ million distinct values.
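To make the cardinality-estimation idea concrete, here is a toy HyperLogLog-style sketch in pure Python. Redshift's `APPROXIMATE COUNT(DISTINCT ...)` and Athena's `approx_distinct()` implement heavily optimized versions of the same idea; the register count and hash function below are arbitrary choices for illustration:

```python
import hashlib
import math

# Toy HyperLogLog: m registers, each remembering the maximum "rank"
# (leading-zero run length + 1) seen among hashes routed to that register.
M = 1024                      # number of registers (a power of two)
P = 10                        # log2(M) bits used to pick a register

def _hash64(value):
    # Deterministic 64-bit hash (Python's built-in hash() is randomized).
    return int.from_bytes(hashlib.md5(str(value).encode()).digest()[:8], "big")

def hll_estimate(values):
    registers = [0] * M
    for v in values:
        h = _hash64(v)
        idx = h >> (64 - P)                  # first P bits pick the register
        rest = h & ((1 << (64 - P)) - 1)     # remaining 54 bits
        rank = (64 - P) - rest.bit_length() + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / M)         # bias-correction constant
    raw = alpha * M * M / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if raw <= 2.5 * M and zeros:             # small-range (linear counting) fix
        return M * math.log(M / zeros)
    return raw

true_count = 50_000
estimate = hll_estimate(f"user-{i}" for i in range(true_count))
error = abs(estimate - true_count) / true_count
print(f"estimate={estimate:,.0f} error={error:.2%}")  # typically a few percent
```

The 1,024 registers here occupy about 1KB regardless of how many distinct values flow through, which is exactly the storage advantage the text describes.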
*****
Existing data warehouses move data from storage nodes to compute nodes during query execution. This requires high network I/O between the nodes - and reduces query performance.
Figure 3-23 below shows a traditional data warehouse architecture with shared, centralized storage.
*****
Machine learning models can also discover certain “latent” features hidden in our data sets that are not immediately recognizable by a human. Netflix’s recommendation system is famous for discovering new movie genres beyond the usual drama, horror, and romantic comedy. For example, they discovered very specific genres such as “Gory Canadian Revenge Movies” and “Sentimental Movies about Horses for Ages 11-12.”
*****
Figure 6-2 shows more “secret” genres discovered by Netflix’s Viewing History Service - code-named “VHS,” like the popular videotape format from the ’80s and ’90s.
*****
Feature creation combines existing data points into new features that help improve the predictive power of your model. For example, combining review_headline and review_body into a single feature may lead to more-accurate predictions than using them separately.
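A sketch of this kind of feature creation with Pandas follows; the column names match the reviews dataset used throughout, but the DataFrame contents here are made-up examples:

```python
import pandas as pd

# Toy reviews data -- contents are illustrative, not from the real dataset.
df = pd.DataFrame({
    "review_headline": ["Great read", "Disappointing"],
    "review_body": ["Could not put it down.", "The plot never develops."],
})

# Feature creation: combine two existing text columns into one new feature
# that the model can consume as a single input.
df["review_text"] = df["review_headline"] + " " + df["review_body"]

print(df["review_text"].tolist())
```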
*****
Feature transformation converts data from one representation to another to facilitate machine learning. Transforming continuous values such as a timestamp into categorical “bins” such as hourly, daily, or monthly helps to reduce dimensionality. Two common statistical feature transformations are normalization and standardization. Normalization scales all values of a particular data point between 0 and 1, while standardization transforms the values to a mean of 0 and standard deviation of 1. These techniques help reduce the impact of large-valued data points such as number of reviews (represented in 1,000’s) over small-valued data points such as helpful_votes (represented in 10’s). Without these techniques, the model may unfairly favor the large-valued features over the small-valued ones.
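The two transformations can be sketched in a few lines of NumPy; the vote and review counts below are made up for illustration:

```python
import numpy as np

votes = np.array([3.0, 10.0, 25.0, 4.0, 8.0])           # small-valued (10's)
reviews = np.array([1200.0, 45000.0, 3300.0, 980.0, 7600.0])  # large (1,000's)

def normalize(x):
    """Min-max normalization: rescale values into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def standardize(x):
    """Standardization: shift/scale to mean 0 and standard deviation 1."""
    return (x - x.mean()) / x.std()

for name, x in [("votes", votes), ("reviews", reviews)]:
    n, s = normalize(x), standardize(x)
    print(name, n.min(), n.max(), round(s.mean(), 6), round(s.std(), 6))
```

After either transformation, both features live on comparable scales, so the raw magnitude of `reviews` no longer dominates `votes`.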
*****
One drawback to undersampling is that your training dataset size is sampled down to the size of the smallest category. This can reduce the predictive power of your trained models. In this example, we reduced the number of reviews by 65% from approximately 100,000 to 35,000.
*****
Oversampling will artificially create new data for the under-represented class. In our case, star_rating 2 and 3 are under-represented. One common oversampling technique is called Synthetic Minority Oversampling Technique (SMOTE). Oversampling techniques use statistical methods such as interpolation to generate new data from your current data. They tend to work better when you have a larger data set, so be careful when using oversampling on small datasets with a low number of minority class examples. Figure 6-10 shows SMOTE generating new examples for the minority class to improve the imbalance.
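The core interpolation step behind SMOTE can be sketched in a few lines: a synthetic minority example is placed at a random point on the line segment between a minority sample and one of its minority-class neighbors. Real SMOTE also performs the nearest-neighbor search; here the neighbor is simply given:

```python
import random

def smote_interpolate(sample, neighbor, rng):
    """Create one synthetic point between a minority sample and a neighbor."""
    gap = rng.random()  # random position along the segment, in [0, 1)
    return [s + gap * (n - s) for s, n in zip(sample, neighbor)]

rng = random.Random(42)       # fixed seed for reproducibility
sample = [1.0, 2.0]
neighbor = [3.0, 6.0]
synthetic = smote_interpolate(sample, neighbor, rng)
print(synthetic)              # lies between sample and neighbor per dimension
```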
*****
Each of the three phases should use a separate and independent dataset - otherwise “leakage” may occur. Leakage happens when data is leaked from one phase of modeling into another through the splits. Leakage can artificially inflate the accuracy of your model.
Time-series data is often prone to leakage across splits. Companies often want to validate a new model using “back-in-time” historical information before pushing the model to production. When working with time-series data, make sure your model does not accidentally peek into the future. Otherwise, these models may appear more accurate than they really are.
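For time-series data, a leakage-free split keeps strict time order - train on the oldest data, validate on more recent data, and test on the newest - rather than shuffling. A minimal sketch:

```python
def time_ordered_split(records, train_frac=0.7, validation_frac=0.15):
    """Split chronologically: train on the past, validate/test on the future.

    `records` are (timestamp, value) pairs; sorting by timestamp first
    guarantees that no future observation leaks into an earlier phase.
    """
    ordered = sorted(records, key=lambda r: r[0])
    n = len(ordered)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + validation_frac))
    return ordered[:train_end], ordered[train_end:val_end], ordered[val_end:]

# Toy daily observations: (day, value)
records = [(day, day * 10) for day in range(100)]
train, validation, test = time_ordered_split(records)
print(len(train), len(validation), len(test))
```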
*****
We will use TensorFlow and a state-of-the-art Natural Language Processing (NLP) and Natural Language Understanding (NLU) neural network architecture called BERT. Unlike previous generations of NLP models such as Word2Vec, BERT captures the bi-directional (left-to-right and right-to-left) context of each word in a sentence. This allows BERT to learn different meanings of the same word across different sentences. For example, the meaning of the word “bank” is different between these two sentences: “A thief stole money from the bank vault” and “Later, he was arrested while fishing on a river bank.”
For each review_body, we use BERT to create a feature vector within a previously-learned, high-dimensional vector space of 30,000 words or “tokens.” BERT learned these tokens by training on millions of documents including Wikipedia and Google Books.
Let’s use a variant of BERT called DistilBERT. DistilBERT is a lightweight version of BERT that is 60% faster, 40% smaller, and preserves 97% of BERT’s language understanding capabilities. We use a popular Python library called Transformers to perform the transformation.
*****
Feature stores can cache “hot features” into memory to reduce model-training times. A feature store can provide governance and access control to regulate and audit our features. Lastly, a feature store can provide consistency between model training and model predicting by ensuring the same features for both batch training and real-time predicting.
Customers have implemented feature stores using a combination of DynamoDB, ElasticSearch, and S3. DynamoDB and ElasticSearch track metadata such as file format (e.g., CSV, Parquet), BERT-specific data (e.g., maximum sequence length), and other summary statistics (e.g., min, max, standard deviation). S3 stores the underlying features such as our generated BERT embeddings. This feature store reference architecture is shown in Figure 6-22.
*****
Our training scripts almost always include pip-installing Python libraries from PyPI or downloading pre-trained models from third-party model repositories (or “model zoos”) on the internet. By creating dependencies on external resources, your training job is now at the mercy of these third-party services. If one of these services is temporarily down, your training job may not start.
To improve availability, it is recommended to remove as many external dependencies as possible by copying these resources into your Docker images - or into your own S3 bucket. This has the added benefit of reducing network utilization and starting our training jobs faster.
*****
Bring Your Own Container
The most customizable option is “bring your own container” (BYOC). This option lets you build and deploy your own Docker container to SageMaker. This Docker container can contain any library or framework. While we maintain complete control over the details of the training script and its dependencies, SageMaker manages the low-level infrastructure for logging, monitoring, environment variables, S3 locations, etc. This option is targeted at more specialized or systems-focused machine learning folks.
*****
GloVe goes one step further and uses recurrent neural networks (RNNs) to encode the global co-occurrence of words vs. Word2Vec’s local co-occurrence of words. An RNN is a special type of neural network that learns and remembers longer-form inputs such as text sequences and time-series data.
FastText continues the innovation and builds word embeddings using combinations of lower-level character embeddings generated by character-level RNNs. This character-level focus allows FastText to learn non-English language models with relatively small amounts of data compared to other models. Amazon SageMaker offers a built-in, pay-as-you-go algorithm called BlazingText, which is an implementation of FastText optimized for AWS. This algorithm was shown in the Built-In Algorithms section above.
*****
ELMo preserves the trained model and uses two separate Long Short-Term Memory (LSTM) networks: one to learn from left to right and one to learn from right to left. Neither LSTM uses both the previous and next words at the same time, however. Therefore, ELMo does not learn a true bidirectional contextual representation of the words and phrases in the corpus, but it performs very well nonetheless.
*****
Without this bi-directional attention, an algorithm would potentially create the same embedding for the word bank in the following two sentences: “A thief stole money from the bank vault” and “Later, he was arrested while fishing on a river bank.” Note that the word bank has a different meaning in each sentence. This is easy for humans to distinguish because of our life-long, natural “pre-training”, but this is not easy for a machine without similar pre-training.
*****
To be more concrete, BERT is trained by forcing it to predict masked words in a sentence. For example, if we feed in the contents of this book, we can ask BERT to predict the missing word in the following sentence: “This book is called Data ____ on AWS.” Obviously, the missing word is “Science.” This is easy for a human who has been pre-trained on millions of documents since birth, but not easy for a machine without similar pre-training.
*****
Neural networks are designed to be re-used and continuously trained as new data arrives into the system. Since BERT has already been pre-trained on millions of public documents from Wikipedia and the Google Books Corpus, the vocabulary and learned representations are transferable to a large number of NLP and NLU tasks across a wide variety of domains.
While training BERT from scratch requires a lot of data and compute, it allows BERT to learn a representation of the custom dataset using a highly-specialized vocabulary. Companies like LinkedIn have pre-trained BERT from scratch to learn language representations specific to their domain, including job titles, resumes, companies, and business news. The default pre-trained BERT models were not good enough for their domain-specific NLP/NLU tasks. Fortunately, LinkedIn has plenty of data and compute.
*****
The choice of instance type and instance count depends on your workload and budget. Fortunately AWS offers many different instance types including AI/ML-optimized instances with terabytes of RAM and gigabits of network bandwidth. In the cloud, we can easily scale our training job to tens, hundreds, or even thousands of instances with just one line of code.
Let’s select 3 instances of the powerful p3.2xlarge - each with 8 vCPUs, 61GB of CPU RAM, and one NVIDIA Volta V100 GPU with 16GB of GPU RAM. Empirically, we found this combination to perform well with our specific training script and dataset - and within our budget for this task.
instance_type='ml.p3.2xlarge'
instance_count=3
*****
TIP: You can specify instance_type='local' to run the script either inside your notebook or on your local laptop. In both cases, your script will execute inside of the same open source SageMaker Docker container that runs in the managed SageMaker service. This lets you test locally before incurring any cloud cost.
*****
Also, it’s important to choose parallelizable algorithms that benefit from multiple cluster instances. If your algorithm is not parallelizable, you should not add more instances as they will not be used. And adding too many instances may actually slow down your training job by creating too much communication overhead between the instances. Most neural network-based algorithms like BERT are parallelizable and benefit from a distributed cluster.