openmicrosoft · 6 years
(Cross Post) Static website hosting for Azure Storage now in public preview
Today we are excited to announce the public preview of static website hosting for Azure Storage! The feature set is available in all public cloud regions with support in government and sovereign clouds coming soon.
This set of new storage capabilities provides a cost-effective and scalable solution for hosting modern web applications. For more details on the feature, see the Microsoft Azure Blog and the complete static websites documentation.
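Conceptually, static website hosting serves blobs out of a special web container, with an index document for directory requests and an error document for misses. Here is a toy model of that routing (the behavior sketch and names are illustrative, not the service implementation):

```python
def resolve(path, blobs, index_doc="index.html", error_doc="404.html"):
    """Toy model of static-website request routing: a request path maps to a
    blob name; directory paths fall back to the index document, and missing
    paths fall back to the error document."""
    if path.endswith("/"):
        path += index_doc          # "/" -> "/index.html"
    key = path.lstrip("/")
    return blobs.get(key, blobs.get(error_doc))

site = {"index.html": "<h1>home</h1>", "404.html": "<h1>not found</h1>"}
print(resolve("/", site))          # -> <h1>home</h1>
print(resolve("/missing", site))   # -> <h1>not found</h1>
```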
openmicrosoft · 6 years
(Cross Post) Immutable storage for Azure Storage Blobs now in public preview
Financial Services organizations regulated by SEC, CFTC, FINRA, IIROC, FCA etc. are required to retain business-related communication in a Write-Once-Read-Many (WORM) or immutable state that makes it non-erasable and non-modifiable for a certain retention interval. The immutable storage requirement is not limited to financial organizations, but also applies to industries such as healthcare, insurance, media, public safety, and legal services.
Today, we are excited to announce the public preview of immutable storage for Azure Storage Blobs to address this requirement. The feature is available in all Azure public regions. Through configurable policies, users can keep Azure Blob storage data in an immutable state where Blobs can be created and read, but not modified or deleted.
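The WORM semantics described above, where blobs can be created and read but not modified or deleted until a retention interval elapses, can be modeled in a few lines. This is a conceptual sketch only (the class and its `now` clock parameter are illustrative, not the Azure SDK):

```python
class ImmutableContainer:
    """Toy model of a time-based retention (WORM) policy on a blob container."""

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds
        self.blobs = {}  # name -> (payload, creation_time)

    def put(self, name, payload, now):
        # Creating a new blob is allowed; overwriting one under retention is not.
        if name in self.blobs:
            _, created = self.blobs[name]
            if now - created < self.retention_seconds:
                raise PermissionError("blob is immutable during its retention interval")
        self.blobs[name] = (payload, now)

    def get(self, name):
        return self.blobs[name][0]  # reads are always allowed

    def delete(self, name, now):
        _, created = self.blobs[name]
        if now - created < self.retention_seconds:
            raise PermissionError("blob is immutable during its retention interval")
        del self.blobs[name]
```

With a 100-second retention, a delete at t=50 fails while a delete at t=150 succeeds, which is exactly the create-and-read-but-not-erase behavior the policy enforces.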
For more details on the feature, see the Microsoft Azure Blog.
openmicrosoft · 6 years
(Cross Post) Azure Blob Storage lifecycle management in public preview
Last year, we released Blob-Level Tiering which allows you to transition blobs between the Hot, Cool, and Archive tiers without moving data between accounts. Both Blob-Level Tiering and Archive Storage help you optimize storage performance and cost. You asked us to make it easier to manage and automate, so we did. Today we are excited to announce the public preview of Blob Storage lifecycle management so that you can automate blob tiering and retention with lifecycle management policies.
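The heart of a lifecycle policy is a set of age-based rules. The sketch below models the idea with illustrative thresholds (move to Cool after 30 days, Archive after 90, delete after 365); real policies are expressed as JSON rules on the storage account, and the numbers here are examples, not defaults:

```python
def apply_lifecycle(age_days):
    """Toy lifecycle rule evaluation for a block blob, based on days since
    last modification. Thresholds are illustrative, not Azure defaults."""
    if age_days >= 365:
        return "deleted"
    if age_days >= 90:
        return "Archive"
    if age_days >= 30:
        return "Cool"
    return "Hot"

for age in (5, 45, 120, 400):
    print(age, apply_lifecycle(age))
```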
For more detail, please see the original post at Microsoft Azure Blog.
openmicrosoft · 6 years
(Cross Post) Announcing general availability of soft delete for Azure Storage Blobs
Today we are excited to announce general availability of soft delete for Azure Storage Blobs! The feature is available in all regions, both public and private.
When turned on, soft delete enables you to save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite.
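The recovery behavior above, including protection against overwrites, can be sketched with a toy store (conceptual only; the class and method names are illustrative, not the Azure SDK):

```python
class SoftDeleteStore:
    """Toy model of blob soft delete: deletes AND overwrites keep a
    recoverable copy (retention-period expiry is omitted for brevity)."""

    def __init__(self):
        self.live = {}
        self.deleted = {}  # name -> stack of soft-deleted payloads

    def put(self, name, payload):
        if name in self.live:  # an overwrite also protects the old data
            self.deleted.setdefault(name, []).append(self.live[name])
        self.live[name] = payload

    def delete(self, name):
        self.deleted.setdefault(name, []).append(self.live.pop(name))

    def undelete(self, name):
        self.live[name] = self.deleted[name].pop()
```

Overwriting `v1` with `v2` and then deleting the blob leaves both versions recoverable; a single undelete restores the most recent one.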
For more details on the feature, see the Microsoft Azure Blog and the complete soft delete documentation.
openmicrosoft · 6 years
(Cross Post) Announcing public preview of soft delete for Azure Storage Blobs
Today we are excited to announce the public preview of soft delete for Azure Storage Blobs! The feature is available in all regions, both public and private.
When turned on, soft delete enables you to save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite.
For more details on the feature, see the Microsoft Azure blog and the complete soft delete documentation.
openmicrosoft · 7 years
(Cross-Post) Cloud storage now more affordable: Announcing general availability of Azure Archive Storage
Today we’re excited to announce the general availability of Archive Blob Storage starting at an industry leading price of $0.002 per gigabyte per month! Last year, we launched Cool Blob Storage to help customers reduce storage costs by tiering their infrequently accessed data to the Cool tier. Organizations can now reduce their storage costs even further by storing their rarely accessed data in the Archive tier. Furthermore, we’re also excited to announce the general availability of Blob-Level Tiering, which enables customers to optimize storage costs by easily managing the lifecycle of their data across these tiers at the object level.
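At $0.002 per gigabyte per month, the storage cost of parking rarely accessed data is easy to estimate. The sketch below covers storage charges only; transaction, retrieval, and early-deletion fees are deliberately ignored:

```python
ARCHIVE_PRICE_PER_GB_MONTH = 0.002  # USD, the headline price quoted above

def monthly_archive_cost(gigabytes):
    """Storage-only cost estimate for the Archive tier (ignores transactions,
    retrieval, and early-deletion charges)."""
    return gigabytes * ARCHIVE_PRICE_PER_GB_MONTH

# 100 TB (decimal: 100,000 GB) of compliance data:
print(round(monthly_archive_cost(100 * 1000), 2))  # -> 200.0 USD per month
```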
From startups to large organizations, our customers in every industry have experienced exponential growth of their data. A significant amount of this data is rarely accessed but must be stored for a long period of time to meet either business continuity or compliance requirements; think employee data, medical records, customer information, financial records, backups, etc. Additionally, recent and coming advances in artificial intelligence and data analytics are unlocking value from data that might have previously been discarded. Customers in many industries want to keep more of these data sets for a longer period but need a scalable and cost-effective solution to do so.
“We have been working with the Azure team to preview Archive Blob Storage for our cloud archiving service for several months now.  I love how easy it is to change the storage tier on an existing object via a single API. This allows us to build Information Lifecycle Management into our application logic directly and use Archive Blob Storage to significantly decrease our total Azure Storage costs.”
-Tom Inglis, Director of Enabling Solutions at BP
For more detail, please see the original post at Microsoft Azure Blog.
openmicrosoft · 7 years
Announcing support for TLS 1.1 and TLS 1.2 in Windows Embedded Standard 2009 and Windows Embedded POSReady 2009
As a follow-up to our announcement regarding TLS 1.2 support at Microsoft, we are announcing that support for TLS 1.1/TLS 1.2 on Windows Embedded POSReady 2009 and Windows Embedded Standard 2009 will be available for download as of October 17th, 2017. We are offering this support in response to strong customer demand for these newer protocols in their environments.
For more information on where to get this update please see the complete blog post at http://ift.tt/2fMmyTg
October 10, 2017 at 11:51PM http://ift.tt/2hAssXV
openmicrosoft · 7 years
(Cross-Post) Announcing the public preview of Azure Archive Blob Storage and Blob-Level Tiering
Last year, we launched Cool Blob Storage to help customers reduce storage costs by tiering their infrequently accessed data to the Cool tier. Today we're announcing the public preview of Archive Blob Storage designed to help organizations reduce their storage costs even further by storing rarely accessed data in our lowest-priced tier yet. Furthermore, we're excited to introduce the public preview of Blob-Level Tiering enabling you to optimize storage costs by easily managing the lifecycle of your data across these tiers at the object level.
For more detail, please see the original post at Microsoft Azure Blog
openmicrosoft · 7 years
Improve Windows Update Performance for New Windows Embedded Image Deployments
Occasionally, the Windows Update (WU) process can take significantly longer than expected for new image deployments on Windows Embedded Standard 7 SP1, POSReady7, and ThinPC. To help speed up this process, you need to install three specific KB packages, and you must stop the Windows Update service before each installation. The packages are:
KB3138612
KB3172605
KB3177467
1. Open an administrator command prompt: click the Start menu, right-click Command Prompt, and then click Run as Administrator.
2. Stop the Windows Update service by running the net stop wuauserv command.
3. After the service stops, immediately install a KB package. The installation automatically restarts the WU service.
4. Repeat steps 2 and 3 until all three packages listed above are installed.
5. Close the Command Prompt window, and run Windows Update from within Control Panel to check for any additional updates.
May 25, 2017 at 01:32AM http://ift.tt/2riWWoX
openmicrosoft · 7 years
Windows Embedded OS Down-Level Servicing Model FAQ
We have created this FAQ below to better support our partners seeking embedded-specific details around the down-level OS servicing changes for Windows 7 SP1, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. Please note that questions 4-8 apply broadly to non-embedded scenarios.
Q1. For embedded partners who don’t have assigned account managers, how can servicing changes be communicated before widespread announcements? A1. Since we are following the Windows update model, customers who do not have an account manager may review the http://ift.tt/29qWbkJ blog for updates.
Q2. How can offline embedded machines access or download Monthly Rollups? A2. The same way that they consume updates offline today. The only difference is that they only have to download one update instead of many.
Q3. How will offline embedded machines (often mission critical) be serviced if a Monthly Rollup causes an issue? A3. We expect MUC offline scenarios to have advanced testing in-house before being released to devices. As such, we don’t anticipate many situations where a rollback is necessary if compatibility checks have already been done on test systems. If a Monthly Rollup happens to cause an offline issue, the customer will need to report the issue to CSS. If a fix is issued, then it would be included in the next relevant update.
Q4. How can machines with size limitations download Monthly Rollups? A4. We understand that some machines have size limitations. While either the Security Only update OR the Monthly Rollup is enough to be covered for critical security fixes for a given month, we recommend installing the Monthly Rollup because each update will only download the new delta fixes (for customers using Windows Update, or WSUS with “express installation files” support enabled). In addition, with new Monthly Rollups superseding those from previous months, disk cleanup will remove the older installed and superseded Monthly Rollups after a certain amount of time (see the questions below for further details). In comparison, the Security Only updates (which are not superseded by the subsequent Security Only update) will continue to reside on disk and not be replaced if any binaries are in multiple updates, which consumes greater space over time. Please note that removal of superseded updates happens automatically on Windows versions equal to or newer than Windows 8. For Windows 7, the user can use Task Scheduler to create a recurring task that runs the disk cleanup tool.
Starting February 2017, the Security Only update does not include updates for Internet Explorer. With this separation, the Security Only update package size is significantly reduced.
Q5. Can customers skip or uninstall Monthly Rollups? A5. Customers can choose to not install a given Monthly Rollup, and can uninstall the Monthly Rollup as well. If multiple Monthly Rollups are installed and present on disk, then uninstalling the latest Monthly Rollup would “revert” to the state of the older Monthly Rollup. If no older Monthly Rollups are installed on disk, then uninstalling the latest Monthly Rollup would “revert” to a state with none of the Monthly Rollups present. See below for further details on how older Monthly Rollups can be removed from disk.
Q6. What is the Monthly Rollup disk cleanup process? A6. Since each new Monthly Rollup supersedes the previous one, disk cleanup will automatically take care of removing older Monthly Rollups from disk over time. On Windows 8 and later versions, a cleanup task will run regularly during the next maintenance window that identifies all installed updates that have been superseded by another installed update. Once 30 days have passed since a particular update has been marked superseded by this cleanup task, that update will be removed from disk on the next task run. The same behavior will apply for the Monthly Rollups, where an older Monthly Rollup will remain on disk for approximately 30 days (may vary by a few days based on maintenance windows) after a newer / superseding Monthly Rollup is installed. Note that once an older Monthly Rollup is removed from disk, it will no longer be a state to “revert” to if uninstalling a newer Monthly Rollup.
Below is an example of this cleanup and uninstallation timing:
November 2016 Monthly Rollup is installed on 11/8/2016
December 2016 Monthly Rollup is installed on 12/13/2016
On the next scheduled run of the cleanup task (assume this happens to run on 12/14/2016), the November 2016 Monthly Rollup is marked as superseded.
January 2017 Monthly Rollup is installed on 1/10/2017
On the next scheduled run of the cleanup task (assume this happens to run on 1/11/2017), the December 2016 Monthly Rollup is marked as superseded.
On 1/13/2017, the November 2016 Monthly Rollup will have been marked superseded for 30 days
On the next scheduled run of the cleanup task, November 2016 Monthly Rollup will be removed from disk.
The January 2017 and December 2016 Monthly Rollups are still present and installed on the PC, meaning that you can still “revert” to the December 2016 Monthly Rollup state if uninstalling the January 2017 Monthly Rollup.  The November 2016 Monthly Rollup is no longer available as an installed state to “revert”
Note that for Windows 7, this cleanup task does not run automatically; Task Scheduler can be used to create a recurring task to run cleanup, otherwise it will not occur. In addition, a user-initiated Windows Update Cleanup using the disk cleanup tool can also be used to remove superseded updates, but note that this immediately removes any superseded updates rather than following the 30-day process used by the cleanup task (in Windows 8 and later versions).
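The timeline in the example above can be checked with a small simulation of the supersedence rules from Q6. The sketch assumes the cleanup task happens to run the day after each newer rollup installs, as in the example:

```python
from datetime import date, timedelta

def earliest_removal_dates(install_dates, grace_days=30):
    """For each Monthly Rollup, compute the earliest date it can be removed
    from disk: it is marked superseded on the cleanup run after the next
    rollup installs (assumed here to be the following day), then becomes
    removable once `grace_days` have passed."""
    removals = {}
    for this_rollup, successor in zip(install_dates, install_dates[1:]):
        superseded_on = successor + timedelta(days=1)  # assumed cleanup run
        removals[this_rollup] = superseded_on + timedelta(days=grace_days)
    return removals

installs = [date(2016, 11, 8), date(2016, 12, 13), date(2017, 1, 10)]
# The November 2016 rollup becomes removable on 2017-01-13, matching the
# example above:
print(earliest_removal_dates(installs)[date(2016, 11, 8)])  # -> 2017-01-13
```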
Q7. What is the guidance around re-applying Monthly Rollups after enabling Optional Components? A7. When Windows installs an update, all serviced content is staged for installation, including updates for Optional Components (such as Features and Roles in Server Manager). When an Optional Component is later enabled, the component servicing model will apply the highest available version of the component, which is the latest serviced version. Therefore, Optional Components will be up-to-date based on previously installed updates once they are enabled. Note that this assumes the staged content has not been corrupted, missing, etc., in which case Windows Update would be required to download the repair content. In summary, enabling Optional Components will apply them in the latest serviced state without needing to re-apply the update. Features that are installed via other means (like Features on Demand, apps downloaded outside of Optional Components, etc.) would likely need Windows Update to re-apply the update.
Q8. How can customers assess their optimal update strategy? A8. Please review “Update strategy choices” on this blog post.
http://ift.tt/2dZw72Z
May 23, 2017 at 10:53PM http://ift.tt/2qT0CfM
openmicrosoft · 7 years
Announcing AzCopy on Linux Preview
Today we are pleased to announce the preview of AzCopy on Linux, with a redesigned command-line interface that adopts POSIX parameter conventions. AzCopy is a command-line utility designed for copying large amounts of data to and from Azure Blob and File storage using simple commands with optimal performance. AzCopy is now built with .NET Core, which supports both Windows and Linux platforms. AzCopy also takes a dependency on the Data Movement Library, which is built with .NET Core, bringing many of the Data Movement Library's capabilities to AzCopy!
Install and run AzCopy on Linux
Install .NET Core on Linux
Download and extract the tar archive for AzCopy (version 6.0.0-netcorepreview)
wget -O azcopy.tar.gz http://ift.tt/2q9Ncfn
tar -xf azcopy.tar.gz
Install and run azcopy
sudo ./install.sh
azcopy
If you do not have superuser privileges, you can instead run AzCopy by changing to the azcopy directory and then running ./azcopy.
What is supported?
Feature parity with AzCopy on Windows (5.2) for Blob and File scenarios
Parallel upload and downloads
Built-in retry mechanism
Resume, or restart from a failed transfer session
And many other features highlighted in the AzCopy guide
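The parallel-transfer and built-in-retry behavior listed above can be sketched generically. This models the idea only, not AzCopy's actual implementation; the chunk IDs and the injected one-time failure are fabricated for the demonstration:

```python
import concurrent.futures

def transfer_chunk(chunk_id, attempt_log, fail_first=()):
    """Pretend to upload one chunk; chunks listed in `fail_first` fail once
    to simulate a transient network error."""
    attempt_log[chunk_id] = attempt_log.get(chunk_id, 0) + 1
    if chunk_id in fail_first and attempt_log[chunk_id] == 1:
        raise IOError("transient network error")
    return chunk_id

def upload(chunks, max_retries=3, workers=4):
    """Upload chunks in parallel, retrying each failed chunk."""
    attempt_log = {}

    def with_retry(chunk):
        for _ in range(max_retries):
            try:
                return transfer_chunk(chunk, attempt_log, fail_first={2})
            except IOError:
                continue  # simple immediate retry; real tools back off
        raise IOError(f"chunk {chunk} failed after {max_retries} attempts")

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        done = set(pool.map(with_retry, chunks))
    return done, attempt_log

done, log = upload([0, 1, 2, 3])
print(done)    # all four chunks complete despite chunk 2 failing once
print(log[2])  # -> 2 (one failure, one successful retry)
```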
What is not supported?
Azure Storage Table service is not supported in AzCopy on Linux
Samples
It is as simple as the legacy AzCopy, with command-line options that follow POSIX conventions; uploading a 100 GB directory, for example, takes a single command.
To learn more about all the command-line options, run the azcopy --help command.
Here are a few other samples:
Upload VHD files to Azure Storage
azcopy --source /mnt --include "*.vhd" --destination "http://ift.tt/2q9SffH"
Download a container using Storage Account Key
azcopy --recursive --source http://ift.tt/YlPjeQ --source-key "lYZbbIHTePy2Co…..==" --destination /mnt
Synchronous copy across Storage accounts
azcopy --source http://ift.tt/2qdx4Ya --source-key "lXHqgIHTePy2Co….==" --destination http://ift.tt/2pt4Ewd --dest-key "uT8nw5…. ==" --sync-copy
AzCopy on Windows
AzCopy on Windows, developed with .NET Framework, will continue to be released and documented here. AzCopy on Windows offers the DOS-style command-line parameters that Windows users are familiar with.
Feedback
AzCopy on Linux is currently in preview, and we will make improvements as we hear from our users. So, if you have any comments or issues, please leave a comment below.
openmicrosoft · 7 years
New Azure Storage JavaScript client library for browsers – Preview
Today we are announcing our newest library: Azure Storage Client Library for JavaScript. The demand for the Azure Storage Client Library for Node.js, as well as your feedback, has encouraged us to work on a browser-compatible JavaScript library to enable web development scenarios with Azure Storage. With that, we are now releasing the preview of Azure Storage JavaScript Client Library for Browsers.
Enables web development scenarios
The JavaScript Client Library for Azure Storage enables many web development scenarios using storage services like Blob, Table, Queue, and File, and is compatible with modern browsers, whether you are building a web-based gaming experience that stores state information in the Table service, a mobile app that uploads photos to a Blob account, or an entire website backed by dynamic data stored in Azure Storage.
As part of this release, we have also reduced the footprint by packaging each of the service APIs in a separate JavaScript file. For instance, a developer who needs access to Blob storage only needs to include the following scripts:

<script type="text/javascript" src="azure-storage.common.js"></script>
<script type="text/javascript" src="azure-storage.blob.js"></script>
Full service coverage
The new JavaScript Client Library for Browsers supports all the storage features available in the latest REST API version 2016-05-31 since it is built with Browserify using the Azure Storage Client Library for Node.js. All the service features you would find in our Node.js library are supported. You can also use the existing API surface, and the Node.js Reference API documents to build your app!
Built with Browserify
Browsers today don’t support the require method, which is essential in every Node.js application. Hence, including a JavaScript file written for Node.js won’t work in browsers. One popular solution to this problem is Browserify. The Browserify tool bundles your required dependencies into a single JS file for you to use in web applications. It is as simple as installing Browserify and running browserify app.js -o browser.js, and you are set. However, we have already done this for you. Simply download the JavaScript Client Library.
Recommended development practices
We highly recommend use of SAS tokens to authenticate with Azure Storage since the JavaScript Client Library will expose the authentication token to the user in the browser. A SAS token with limited scope and time is highly recommended. In an ideal web application it is expected that the backend application will authenticate users when they log on, and will then provide a SAS token to the client for authorizing access to the Storage account. This removes the need to authenticate using an account key. Check out the Azure Function sample in our Github repository that generates a SAS token upon an HTTP POST request.
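The shape of a SAS token can be illustrated with a simplified signer. To be clear, this is NOT the exact Azure string-to-sign format (which is longer and version-dependent); it only shows the pattern of an HMAC-SHA256-signed, limited-scope, time-limited grant that a backend would hand to the browser:

```python
import base64
import hashlib
import hmac

def make_token(account_key_b64, resource, permissions, expiry_iso):
    """Sign a limited-scope, time-limited grant with HMAC-SHA256.
    Illustrative only; the real Azure string-to-sign differs."""
    string_to_sign = "\n".join([permissions, expiry_iso, resource])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return f"sp={permissions}&se={expiry_iso}&sr={resource}&sig={sig}"

demo_key = base64.b64encode(b"not-a-real-account-key").decode()
token = make_token(demo_key, "mycontainer/myblob", "r", "2018-01-01T00:00:00Z")
print(token.startswith("sp=r&se=2018-01-01T00:00:00Z"))  # -> True
```

Because the signature covers the permissions and expiry, the server that holds the account key can verify the grant without ever exposing that key to the browser, which is the point of the recommendation above.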
Use of the stream APIs is highly recommended because the browser sandbox blocks access to the local filesystem, which makes local-file APIs such as getBlobToLocalFile and createBlockBlobFromLocalFile unusable in browsers. See the samples linked below, which use the createBlockBlobFromStream API instead.
Sample usage
Once you have a web app that can generate a limited-scope SAS token, the rest is easy! Download the JavaScript files from the repository on Github and include them in your code.
Here is a simple sample that can upload a blob from a given text:
1. Insert the following script tags in your HTML code. Make sure the JavaScript files are located in the same folder.

<script src="azure-storage.common.js"></script>
<script src="azure-storage.blob.js"></script>

2. Let’s now add a few items to the page to initiate the transfer. Add the following tags inside the BODY tag. Notice that the button calls the uploadBlobFromText method when clicked. We will define this method in the next step.

<input type="text" id="text" name="text" value="Hello World!" />
<button id="upload-button" onclick="uploadBlobFromText()">Upload</button>

3. So far, we have included the client library and added the HTML code to show the user a text input and a button to initiate the transfer. When the user clicks the upload button, uploadBlobFromText will be called. Let’s define it now:

<script>
function uploadBlobFromText() {
  // your account and SAS information
  var sasKey = "....";
  var blobUri = "http://ift.tt/2mCj6PZ";
  var blobService = AzureStorage.createBlobServiceWithSas(blobUri, sasKey)
      .withFilter(new AzureStorage.ExponentialRetryPolicyFilter());
  var text = document.getElementById('text');
  blobService.createBlockBlobFromText('mycontainer', 'myblob', text.value, function (error, result, response) {
    if (error) {
      alert('Upload failed; open the browser console for more detailed info.');
      console.log(error);
    } else {
      alert('Upload succeeded!');
    }
  });
}
</script>

Of course, it is not that common to upload blobs from text. See the following samples for uploading from a stream as well as a sample for progress tracking.
•    JavaScript Sample for Blob
•    JavaScript Sample for Queue
•    JavaScript Sample for Table
•    JavaScript Sample for File
Share
Finally, join our Slack channel to share with us your scenarios, issues, or anything, really. We’ll be there to help!
openmicrosoft · 7 years
(Cross-Post) New Azure Storage Release – Larger Block Blobs, Incremental Copy, and more!
Originally posted in the Microsoft Azure Blog.
We are pleased to announce new capabilities in the latest Azure Storage service release and updates to our Storage Client Libraries. This latest release lets users take advantage of an increased block size of 100 MB, which allows block blobs of up to 4.77 TB, as well as features like incremental copy for page blobs and pop-receipt on add message.
REST API version 2016-05-31
Version 2016-05-31 includes these changes:
The maximum blob size has been increased to 4.77 TB with the increase of block size to 100 MB. Check out our previous announcement for more details.
The Put Message API now returns information about the message that was just added, including the pop receipt. This enables you to call Update Message and Delete Message on the newly enqueued message.
The public access level of a container is now returned from the List Containers and Get Container Properties APIs. Previously this information could only be obtained by calling Get Container ACL.
The List Directories and Files API now accepts a new parameter that limits the listing to a specified prefix.
All Table Storage APIs now accept and enforce the timeout query parameter.
The stored Content-MD5 property is now returned when requesting a range of a blob or file. Previously this was only returned for full blob and file downloads.
A new Incremental Copy Blob API is now available. This allows efficient copying and backup of page blob snapshots.
Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
During authentication, the canonicalized header list now includes headers with empty values. Previously these were omitted from the list.
Several error messages have been clarified or made more specific. See the full list of changes in the REST API Reference.
Check out the REST API Reference documentation to learn more.
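The change to include empty-valued headers during authentication is easy to picture with a sketch of the canonicalized-headers step of Shared Key signing (a simplified illustration of that one step, not a full signer):

```python
def canonicalized_headers(headers):
    """Build the x-ms-* portion of a Shared Key signature string: lowercase
    the names, sort them lexicographically, and emit one "name:value\n" line
    each. Per version 2016-05-31, headers with EMPTY values are kept rather
    than omitted."""
    xms = {name.lower(): value.strip()
           for name, value in headers.items()
           if name.lower().startswith("x-ms-")}
    return "".join(f"{name}:{value}\n" for name, value in sorted(xms.items()))

request_headers = {
    "Content-Type": "text/plain",        # not x-ms-*: excluded
    "x-ms-version": "2016-05-31",
    "X-Ms-Meta-Tag": "",                 # empty value: now included
}
print(canonicalized_headers(request_headers))
```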
New client library features
.NET Client Library (version 8.0.1)
All the service features listed above
Support for portable class library (through the NetStandard 1.0 Façade)
Key rotation for client-side encryption for blobs, tables, and queues
For a complete list of changes, check out the change log in our Github repository.
Storage Emulator
All the service features listed above
The storage emulator v4.6 is available as part of the latest Microsoft Azure SDK. You can also install the storage emulator using the standalone installer.
We’ll also be releasing new client libraries for Java, C++, Python, and Node.js to support the latest REST version in the next few weeks, along with a new AzCopy release. Stay tuned!
openmicrosoft · 7 years
(Cross-Post) General Availability: Larger Block Blobs in Azure Storage
Originally posted in the Microsoft Azure Blog.
Azure Blob Storage is a massively scalable object storage solution capable of storing and serving tens to hundreds of petabytes of data per customer across a diverse set of data types including media, documents, log files, scientific data and much more. Many of our customers use Blobs to store very large data sets, and have requested support for larger files. The introduction of larger Block Blobs increases the maximum file size from 195 GB to 4.77 TB. The increased blob size better supports a diverse range of scenarios, from media companies storing and processing 4K and 8K videos to cancer researchers sequencing DNA.
Azure Block Blobs have always been mutable, allowing a customer to insert, upload or delete blocks of data without needing to upload the entire blob. With the new larger block blob size, mutability offers even more significant performance and cost savings, especially for workloads where portions of a large object are frequently modified. For a deeper dive into the Block Blobs service including object mutability, please view this video from our last Build Conference. The REST API documentation for Put Block and Put Block List also covers object mutability.
We have increased the maximum allowable block size from 4 MB to 100 MB, while maintaining support for up to 50,000 blocks committed to a single Blob. Range GETs continue to be supported on larger Block Blobs allowing high speed parallel downloads of the entire Blob, or just portions of the Blob. You can immediately begin taking advantage of this improvement in any existing Blob Storage or General Purpose Storage Account across all Azure regions.
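The new maximum follows directly from the block arithmetic, using binary (MiB/GiB/TiB) units:

```python
MIB = 1024 ** 2

old_max = 50_000 * 4 * MIB    # 50,000 blocks of 4 MB each
new_max = 50_000 * 100 * MIB  # 50,000 blocks of 100 MB each

print(round(old_max / 1024 ** 3, 1))  # -> 195.3  (GB, the old blob limit)
print(round(new_max / 1024 ** 4, 2))  # -> 4.77   (TB, the new blob limit)
```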
Larger Block Blobs are supported by the most recent release of the .NET Client Library (version 8.0.0), with support for Java, Node.js and AzCopy rolling out over the next few weeks. You can also directly use the REST API as always. Larger Block Blobs are supported by REST API version 2016-05-31 and later. There is nothing new to learn about the APIs, so you can start uploading larger Block Blobs right away.
This size increase only applies to Block Blobs, and the maximum size of Append Blobs (195 GB) and Page Blobs (1 TB) remains unchanged. There are no billing changes. To get started using Azure Storage Blobs, please see our getting started documentation, or reference one of our code samples.
openmicrosoft · 8 years
General availability: Azure cool blob storage in additional regions
Azure Blob storage accounts with hot and cool storage tiers are generally available in six new regions: US East, US West, Germany Central, Germany Northeast, Australia Southeast, and Brazil South. You can find the updated list of available regions on the Azure services by region page.
Blob storage accounts are specialized storage accounts for storing your unstructured data as blobs (objects) in Azure Storage. With Blob storage accounts, you can choose between hot and cool storage tiers to store your less frequently accessed (cool) data at a lower storage cost, and store more frequently accessed (hot) data at a lower access cost.
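The hot/cool tradeoff (lower storage cost versus higher access cost) is easy to reason about with a back-of-the-envelope model. The per-GB prices below are hypothetical placeholders for illustration only; see the Azure Storage pricing page for real numbers:

```python
# Hypothetical per-GB monthly prices, for illustration only:
HOT_STORAGE, COOL_STORAGE = 0.0208, 0.0152  # USD per GB stored per month
HOT_READ, COOL_READ = 0.0, 0.01             # USD per GB read

def monthly_cost(gb_stored, gb_read, storage_price, read_price):
    """Simplified monthly bill: storage charges plus per-GB read charges."""
    return gb_stored * storage_price + gb_read * read_price

# 1 TB of backups read 100 GB/month: the cool tier wins...
print(monthly_cost(1000, 100, COOL_STORAGE, COOL_READ) <
      monthly_cost(1000, 100, HOT_STORAGE, HOT_READ))   # -> True
# ...but if all 1 TB were read back every month, hot would be cheaper.
```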
Customers in the new regions can take advantage of the cost benefits of the cool storage tier for storing backup data, media content, scientific data, active archival data—and in general, any data that is less frequently accessed. For details on how to start using this feature, please see our getting-started documentation.
For details on regional pricing, see the Azure Storage pricing page.
openmicrosoft · 8 years
(Cross Post) Announcing Azure Storage Client Library GA for Xamarin
We are pleased to announce the general availability release of the Azure Storage client library for Xamarin. Xamarin is a leading mobile app development platform that allows developers to use a shared C# codebase to create iOS, Android, and Windows Store apps with native user interfaces. We believe the Azure Storage library for Xamarin will be instrumental in helping provide delightful developer experiences and enabling an end-to-end mobile-first, cloud-first experience. We would like to thank everyone who has leveraged previews of Azure Storage for Xamarin and provided valuable feedback.
The sources for the Xamarin release are the same as the Azure Storage .NET client library and can be found on Github. The installable package can be downloaded from NuGet (version 7.2 and beyond) or from the Azure SDK (version 2.9.5 and beyond) and installed via the Web Platform Installer. This generally available release supports all features up to and including the 2015-12-11 REST version.
Getting started is very easy. Simply follow the steps below:
Install the Xamarin SDK and tools and any platform-specific emulators as necessary: for instance, you can install the Android KitKat emulator.
Create a new Xamarin project and install the Azure Storage nuget package version 7.2 or higher in your project and add Storage specific code.
Compile, build and run the solution. You can run against a phone emulator or an actual device. Likewise you can connect to the Azure Storage service or the Azure Storage emulator.
Please see our Getting Started Docs and the reference documentation to learn how you can get started with the Xamarin client library and build applications that leverage Azure Storage features.
We currently support shared asset projects (e.g., Native Shared, Xamarin.Forms Shared), Xamarin.iOS, and Xamarin.Android projects. This Storage library leverages the .NET Standard runtime library, which can run on Windows, Linux, and macOS. Learn about the .NET Standard library and .NET Core. Learn about Xamarin support for .NET Standard.
As always, we continue to do our work in the public GitHub development branch for visibility and transparency. We are working on building code samples in our Azure Storage samples repository to help you better leverage the Azure Storage service and the Xamarin library capabilities. A Xamarin image uploader sample is already available for you to review or download. If you have any requests on specific scenarios you’d like to see as samples, please let us know or feel free to contribute as a valued member of the developer community. Community feedback is very important to us.
Enjoy the Xamarin Azure Storage experience!
Thank you
Dinesh Murthy, Michael Roberson, Michael Curd, Elham Rezvani, Peter Marino and the Azure Storage Team.
openmicrosoft · 8 years
Announcing the General Availability of Storage Service Encryption for Data at Rest
Storage Service Encryption for Azure Blob Storage helps you address organizational security and compliance requirements by encrypting your Blob storage (Block Blobs, Page Blobs and Append Blobs).
Today, we are excited to announce the General Availability of Storage Service Encryption for Azure Blob Storage. You can enable this feature on any Azure Resource Manager storage account using the Azure Portal, Azure Powershell, Azure CLI or the Microsoft Azure Storage Resource Provider API.
Microsoft Azure Storage handles all the encryption, decryption and key management in a totally transparent fashion. All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure Storage – LRS, GRS, ZRS, RA-GRS and Premium-LRS for all Azure Resource Manager Storage accounts and Blob Storage accounts. There is no additional charge for enabling this feature.
Note that SSE encrypts when blobs are written or updated. This means that when you enable SSE for an existing storage account, only new writes are encrypted; it does not go back and encrypt the data already present.
Find out more about Storage Service Encryption with Service Managed Keys.