# Unable to delete selected backup Repository
Fix missing path and delete a Veeam Backup Repository
In this article, we discuss how to fix a missing path and delete a Veeam Backup Repository. If your backup repository is in an invalid state, see the possible causes and troubleshooting advice. Also see how to install Splunk and the Veeam App on Windows Server to monitor VBR, and the “Deep Dive into Protecting AWS EC2 and RDS Instances and VPC”. On a Veeam Backup and Replication…
#Change VBR Backup Repository #Delete VBR Backup Jobs #Delete VBR Repository #Delete Veeam Backup and Replication Repository #Detach Disk on VBR #Fix Veeam Backup Repository Path Issue #Unable to delete selected backup Repository #VBR #Veeam Backup and Replication #Windows #Windows Server #Windows Server 2012 #Windows Server 2016 #Windows Server 2022
Version 368
downloads: youtube | windows: zip, exe | os x: app | linux: tar.gz | source: tar.gz
I had a great week. The PTR has moved successfully, and I got multiple local tag services working.
PTR has moved
The Public Tag Repository has changed management. I am no longer running it or involved in working as a janitor. There is more information here:
https://hydrus.tumblr.com/post/187561442294/the-ptr-will-undergo-a-change-of-management-in-two
As a result, the PTR no longer has bandwidth limits! The user now running it is also putting together a janitorial team to catch up on petitions as well. About ten million delete mapping petitions and six thousand add sibling petitions had piled up! If you would like to talk to the new management, they are available on the discord. The current plan is to keep running the PTR with the same loose rules as I did--the main concern was overcoming my bandwidth limits.
If you currently sync with the PTR, you will be given a yes/no dialog when you update asking if you would like to keep using the PTR at the new location. If you select yes, the client will automatically update your service (the only credentials difference is that instead of it being at hydrus.no-ip.org, it is now at ptr.hydrus.network), and you will keep syncing and be able to continue uploading without skipping a beat. If you select no, your PTR will pause. If you still sync with my read-only test file repository, this will be paused automatically.
While I am very thankful for the 650 million submissions I have had to the PTR over the past seven years, it is a load off my mind to no longer be responsible for it. I much prefer being a developer to being an administrator, and I hope to use the extra time and upcoming feedback to work on improving the admin side of hydrus repositories, which, due to me being the primary user, have always had debug-tier UI and half-broken features.
The various areas in the help have been updated to reflect that the PTR is no longer mine, and the quick setup under the help menu is now just help->add the public tag repository.
running your own continuance of the PTR
If you are an advanced user and would like to run your own version of the PTR from where I left off, be that a public or private thing, I have uploaded the same sanitized and 'frozen' version of the server db we used in the transfer here:
https://mega.nz/#F!w7REiS7a!bTKhQvZP48Fpo-zj5MAlhQ
If you do run your own and would like it to be public, let me know and I'll happily add its info to my help files and the auto-setup links.
Hydrus repositories are anonymous. No IPs or other identifying info is logged. There was not much to sanitize, but out of an abundance of caution I deleted the various petition 'reasons' people have submitted over the years, as some had some personal jokes to me, and I collapsed the content submission timestamp record across the db to be no more detailed than what a syncing client already knows. Since this timestamp collapse reduces server-specific knowledge about uploads and does not affect server operation, it is now standard practice for repositories going forward. If you run a repository, updating this week will take a few minutes (or, ha ha, if you have 650 million mappings, about five hours) to update its existing data.
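To make the collapse concrete, here is a rough sketch of the idea in Python against a hypothetical mappings table; the real server schema is different and the update step does this for you automatically, so it is purely illustrative:

```python
import sqlite3

# Purely illustrative: collapse per-row timestamps so that every row in the
# same update period shares one value. Table/column names and the period
# length are hypothetical; the real hydrus server schema and logic differ.
UPDATE_PERIOD = 100000  # seconds per update period (made-up value)

conn = sqlite3.connect('server.mappings.db')

# SQLite integer division truncates, so (timestamp / P) * P floors each
# timestamp to the start of its update bucket.
conn.execute(
    'UPDATE mappings SET timestamp = (timestamp / ?) * ?;',
    (UPDATE_PERIOD, UPDATE_PERIOD),
)

conn.commit()
conn.close()
```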
I also uploaded Hydrus Tag Archives of the PTR's tag/sibling/parent content, which I simply made with the new tag migration system and am making available for convenience. If you know python or SQLite and would like to play with this data, check them out.
Be warned that these archives unpack to large files that work best on an SSD. The server db is a 5.5GB .7z that will ultimately grow (after following an internal readme guide) to 42GB or so. As I write this document, the namespaced/unnamespaced mappings Hydrus Tag Archives are not yet uploaded (even zipped, they are 5GB each), so please check back later if you do not yet see them.
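If you want to poke at these archives from Python before committing to anything heavier, they are ordinary SQLite files, so the standard library is enough. A minimal, schema-agnostic sketch (the filename is hypothetical; point it at whichever .db you unpacked) that lists each table and its row count:

```python
import sqlite3

# The path is hypothetical -- use whichever archive .db you unpacked.
ARCHIVE_PATH = 'ptr_tag_archive.db'

conn = sqlite3.connect(ARCHIVE_PATH)

# List every table in the archive and its row count, without assuming any
# particular Hydrus Tag Archive schema version.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table';")]

for table in tables:
    (count,) = conn.execute(f'SELECT COUNT(*) FROM "{table}";').fetchone()
    print(f'{table}: {count} rows')

conn.close()
```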
multiple local tag services
I planned to only start work on multiple local tag services (being able to have more than one 'local tags', just like you can have more than one tag repository) this week, but I accidentally finished it! It turned out not to be as huge a job as I thought, so I just piled some time into it and got it done.
So you can now add new 'local tag' services under services->manage services. You can add (and delete) as many as you want, but you have to have at least one. It is now possible, for instance, to create a new separate local tag service just for your subjective 'favourite'-style tags, or one that only pulls tags from a certain booru, or from filename imports. You can also use the new tag migration system to move tags between your local tag services. I know some users have wanted a separate local tag service as a prep area for what they will submit to the PTR--I think all the tools are now in place for this.
As a side note, just to emphasise that it is not the only local service you can have, if your local tags service still has the default name 'local tags', it will be renamed to the new default 'my tags' on update. Feel free to rename it to whatever you like, again under manage services.
Once I have some better tag show/hide tech working, I'll likely add additional default tag services that do not show their content in the main UI but do pull all tags from downloaders and filenames from hard drive imports, so you will be able to retroactively 'mine' this information store for your real tag services if you miss it the first time around.
I am really pleased with this feature. If you have been interested in it yourself, let me know how you get on. Adding multiple local file services is another long-planned feature, unfortunately significantly more complicated (I think 8-12 times at least), but I would like to hear how tags go for now.
full list
multiple local tag services:
you can now add additional local tag services under services->manage services!
new local tag services will appear in manage tags and tag import options and so on, just like when you add a tag repository
you can also delete local tag services, but you must have at least one
the default local tag service created for a new client is now renamed from 'local tags' to 'my tags'. any existing user with their local tag service called 'local tags' will be renamed on update to 'my tags'
.
ptr migration:
the ptr has been successfully migrated to user management! hydrus dev is no longer involved in running or administering it. the old bandwidth limits are removed! it has the same port and access key, but instead of hydrus.no-ip.org, it is now at ptr.hydrus.network
on update, if you sync with the ptr, you will get a yes/no asking if you want to continue using it at the new location. on yes, it'll update your server's address automatically. on no, it'll leave it as-is and pause it. if you still have a connection to my old read-only file repo, that will be paused
changed the auto repo setup command to be _help->add the public tag repository_. it points to the new location
as repo processing and related maintenance is now nicer, and secondarily since bandwidth limits are less a problem for the ptr specifically, the default clientside hydrus bandwidth limit of 64MB/day is lifted to 512MB/day. any users who are still on the old default will be updated
updated the help regarding the public tag repository, both in general description and the specific setup details
a copy of the same sanitized and frozen PTR db used to start the new PTR, and convenient tag archives of its content, are now available at https://mega.nz/#F!w7REiS7a!bTKhQvZP48Fpo-zj5MAlhQ
.
the rest:
fixed a small bug related to the new 'caught up' repository mechanic for clients that only just added (or desynced) a repository
rewrote the tag migration startup job to handle specific 'x files' jobs better--they should now start relatively instantly, no matter the size of the tag service
on 'all known files' tag migrations, a startup optimisation will now be applied if the tag service is huge
fixed the tag filter's advanced panel's 'add' buttons, which were not hooked up correctly
the internal backup job now leaves a non-auto-removing 'backup complete!' message when finished
on update, server hydrus repositories will collapse all their existing content timestamps to a single value per update. also, all future content uploads will collapse similarly, meaning all update content has the same timestamp. this adds a further layer of anonymity and is a mid-step towards future serverside db compaction (I think I can ultimately reduce server.mappings.db filesize by ~33%). if you have a tag repo with 10M+ mappings, this will take some time
hydrus servers now generate new cert/key files on boot if they are missing. whenever they generate a new cert/key, they now print a notification to the log
misc help fixes and updates, and removed some ancient help that referred to old systems
corrected journalling->journaling typo for the new experimental launch parameter
next week
The PTR work took much longer than I expected, and I was unable to get to modified date or file maintenance improvements, so I will have a re-do for those next week. For the ongoing tag work, I will start work on updating the tag 'censorship' system, which is still running on very old code, to more of a 'don't show these tags in places x, y, z', and see about a related database cache to speed up various sibling/display tag choices.
This week was a little bonkers and I fell behind on messages. I am sorry for the delay and will put some time aside to catch up when I can. I think my immediate busy period is done for a bit (although we'll be back to it for the wx->Qt conversion in mid-October), so I'll be grateful to take it a bit easier for a while.
DBA Interview Questions and Answers, Part 22
I have configured RMAN with a recovery window of 3 days, but at my backup destination only one day's archive logs are visible while 3 days of database backups are available there. Why?

I investigated by checking the backup details with the LIST command and found that 3 days of database as well as archivelog backups were listed, and that the backups were reported as recoverable. So it was clear that, for some other reason, the archive logs were not being kept at the backup location.

Connect RMAN to the target database with the catalog, then run:
LIST BACKUP SUMMARY;
LIST ARCHIVELOG ALL;
LIST BACKUP RECOVERABLE;

When I checked db_recovery_file_dest_size it was 5 GB, and our flash recovery area was almost full; because of that, archive logs were being automatically deleted from the backup location. After I increased db_recovery_file_dest_size, everything worked fine.

If one or all of the control files get corrupted and you are unable to start the database, how can you perform recovery?

If one of your control files is missing or corrupted, you have two options. Either delete the corrupted control file manually from its location, copy one of the remaining control files, and rename it to match the deleted one (check alert.log for the exact names and locations), or delete the corrupted control file and remove its location from the pfile/spfile, then start the database.

If all of your control files are corrupted, you need to restore them using RMAN. Since no control file is mounted, RMAN does not know about the backups or any pre-configured RMAN settings, so you must pass the DBID to RMAN (SET DBID=691421794) in order to use the backup.
RMAN> RESTORE CONTROLFILE FROM 'H:\oracle\Backup\C-1239150297-20130418';

You are working as a DBA and take a hot backup every night. One day around 3:00 PM a table is dropped, and that table is very important. How will you recover it?

If the database is running on Oracle 10g and the recycle bin is enabled, you can easily recover the dropped table from USER_RECYCLEBIN or DBA_RECYCLEBIN using the flashback feature of Oracle 10g:
SQL> select object_name, original_name from user_recyclebin;
BIN$T0xRBK9YSomiRRmhwn/xPA==$0    PAY_PAYMENT_MASTER
SQL> flashback table table2 to before drop;
Flashback complete.

If the recycle bin is not enabled on your database, you need to restore your backup on a TEST database and perform time-based recovery, applying all archives up to just before the drop command was executed (for instance, up to 2:55 PM here). It is not recommended to perform such a recovery directly on the production database, because for a huge database it would take a long time.
Note: if you drop a table as the SYS user, the object will not go to the recycle bin for the SYSTEM tablespace, even if the recyclebin parameter is set to 'true'. And if your database is running on Oracle 9i, you need incomplete recovery for the same scenario.

Why is more archivelog sometimes generated?

There are many possible reasons, such as more database changes being performed, whether through import/export work, batch jobs, a special task, or taking a hot backup (for more details on why hot backup generates more archive, check my separate post). You can investigate using the LogMiner utility.

How can I know whether my required table is available in an export dump file or not?

You can create an index file for the export dump file using import with the INDEXFILE option. A text file is generated with all table and index object names and the number of rows, and you can confirm your required table object from this file.

What is Cache Fusion technology?

Cache Fusion provides a service that allows Oracle to keep track of which nodes are writing to which blocks and ensures that two nodes do not update duplicate copies of the same block. Cache Fusion provides more resources and increases user concurrency internally: multiple caches are able to join and act as one global cache, solving issues such as data consistency internally without any impact on application code or design.

Why do we need to open the database with RESETLOGS after finishing an incomplete recovery?

An incomplete recovery rewinds the database to a past point in time, leaving it in a prior state. The redo log sequence numbers that had already been generated no longer match that prior state, so the database must be opened with a new sequence of redo log and archive log numbers.

Why is an export backup called a logical backup?

An export dump file does not back up or contain any physical structures of the database such as datafiles, redo log files, the pfile, or the password file. Instead, it contains the logical structure of the database, such as definitions of tablespaces, segments, and schemas. For these reasons, an export dump is called a logical backup.

What are the differences between 9i and 10g OEM?

Oracle 9i OEM has limited capabilities and resources compared to Oracle 10g Grid Control. There are many enhancements in 10g OEM over 9i: tools such as AWR and ADDM have been incorporated, and the SQL Tuning Advisor is also available.

Can we use the same target database as the catalog DB?

The recovery catalog should not reside in the target database, because the recovery catalog must be protected in the event of the loss of the target database.

What is the difference between the CROSSCHECK and VALIDATE commands?

The VALIDATE command examines a backup set and reports whether it can be restored successfully, whereas the CROSSCHECK command verifies the status of backups and copies recorded in the RMAN repository against the media, such as disk or tape.

How do you identify or fix block corruption in an RMAN database?

You can use the V$DATABASE_BLOCK_CORRUPTION view to identify which block is corrupted, then use the BLOCKRECOVER command to recover it:
SQL> select file#, block# from v$database_block_corruption;
FILE#    BLOCK#
10       1435
RMAN> blockrecover datafile 10 block 1435;

What is an auxiliary channel in RMAN? When is it required?

An auxiliary channel is a link to the auxiliary instance. If you do not have an automatic channel configured, then before issuing the DUPLICATE command you must manually allocate at least one auxiliary channel within the same RUN command.

Explain the use of setting GLOBAL_NAMES equal to TRUE.

The GLOBAL_NAMES setting determines how you may connect to the database. It is either TRUE or FALSE; when set to TRUE, it enforces that database links have the same name as the remote database to which they are linking.

How can you say your data in the database is valid or secure?

If the data in the database is validated, we can say the database is secure. There are different ways to validate the data: 1. Accept only valid data. 2. Reject bad data. 3. Sanitize bad data.

Write a query to display all the odd-numbered rows from a table.

select * from (select employee_number, rownum rn from pay_employee_personal_info) where mod(rn, 2) <> 0;
Or you can do the same thing with the PL/SQL block below:
set serveroutput on;
begin
  for v_c1 in (select num from tab_no) loop
    if mod(v_c1.num, 2) = 1 then
      dbms_output.put_line(v_c1.num);
    end if;
  end loop;
end;
/

What is the difference between TRIM and TRUNCATE?

TRUNCATE is a DDL command that deletes the contents of a table completely without affecting the table structure, whereas TRIM is a function that changes column output in a SELECT statement, removing blank space from the left and right of a string.

When do you use the "PASSWORD FILE" option in the RMAN DUPLICATE command?

If you create a duplicate database rather than a standby database, RMAN does not copy the password file by default; you can specify the PASSWORD FILE option to indicate that RMAN should overwrite the existing password file on the auxiliary instance. If you create a standby database, RMAN copies the password file to the standby host by default, overwriting the existing password file.

What is Oracle GoldenGate?

Oracle GoldenGate is Oracle's strategic solution for real-time data integration. It captures, filters, routes, verifies, transforms, and delivers transactional data in real time, across Oracle and heterogeneous environments, with very low impact and preserved transaction integrity. Its transactional data management provides read consistency and maintains referential integrity between source and target systems.

What is the meaning of LGWR SYNC and LGWR ASYNC in the log archive destination parameter for a standby configuration?

With LGWR SYNC, once network I/O is initiated, LGWR has to wait for the network I/O to complete before continuing to process writes. With LGWR ASYNC, LGWR does not wait for the network I/O to finish and continues processing writes.

What is the TRUNCATE command enhancement in Oracle 12c?

In previous releases there was no direct option to truncate a master table while child tables with records existed. In 12c, TRUNCATE TABLE with the CASCADE option truncates the records in the master table as well as in all referencing child tables that have an enabled ON DELETE constraint.
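As a hedged illustration of that last answer, issuing a 12c cascading truncate from Python with the python-oracledb driver might look roughly like this (the connection details and table name are made up, and the child tables' foreign keys must be defined with ON DELETE CASCADE for the statement to succeed):

```python
import oracledb  # python-oracledb driver (the successor to cx_Oracle)

# Hypothetical connection details and table name -- adjust for your environment.
# Requires Oracle 12c or later; child tables must reference the master through
# foreign keys defined with ON DELETE CASCADE, otherwise the truncate fails.
conn = oracledb.connect(user='scott', password='tiger', dsn='dbhost/orclpdb1')

with conn.cursor() as cur:
    cur.execute('TRUNCATE TABLE orders_master CASCADE')

conn.close()
```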
WD My Cloud Home Review
Most of us have hundreds or thousands of photos, documents, songs, and/ or movies spread out across our phones, PCs, and assorted online services. Consolidating everything and backing it all up is a pain, and things often get forgotten about or lost. WD wants to automate all of that and give home users a central, always-on repository for all their data and devices – somewhat like a cloud storage service, except that it’s a physical product that lives in your house.
The new My Cloud Home plugs into your router and is accessible not only to all the devices on your home network, but also over the Internet. Instead of a monthly subscription charge, you buy it once and then it’s yours to use as you like. It’s meant to be extremely easy to set up and use, and so WD has developed software and apps that hide a lot of the nuts and bolts of the technology from users.
A lot of people who are considering this device might think that it's just a new version of WD's similarly named My Cloud series, which has been around for quite a few years now, but that isn't true at all. As we discovered during our review, this is not a standard network-attached storage (NAS) device, and might not behave as you expect. Read on to see whether that's a good or bad thing, and whether the new My Cloud Home is right for you.
WD My Cloud Home design and specifications
WD began redesigning its entire product line in late 2016, and the popular My Passport and My Book models were given dramatic facelifts. However, with the My Cloud series of consumer network-attached drives, there hasn’t been a simple cosmetic change. WD has instead launched the new My Cloud Home series, which is a different kind of product altogether.
The My Cloud Home is available in both single-drive and dual-drive options, and both look the same apart from their size. We’re reviewing the entry-level variant which has one 2TB 3.5-inch hard drive inside and is no larger than any external desktop hard drive. It has a blocky body with sharp corners and no curves anywhere. The split design is similar to that of the recently launched My Passport and My Book models, while the colour scheme echoes the My Passport SSD. The upper half is plain white plastic while the lower half is textured and metallic. Instead of the diagonal stripes we’ve seen many times already, there’s a pattern of interlocking triangles. It’s a striking look, which is surprising considering that network appliances are usually hidden away.
A status indicator is cleverly designed into the seam between the two halves on the front. The top and bottom have large grilles for air circulation, and unless you find an enclosed space for this device, dust is sure to settle inside. You’ll find one Gigabit Ethernet port on the rear, along with a recessed reset button, a power inlet, and a USB 3.0 host port.
On the inside, there’s a 1.4GHz ARM-based Realtek RTD1296 processor with four Cortex-A53 cores, plus a Mali-T820 GPU which isn’t used at all. This is a processor designed for storage servers as well as media transcoding and streaming boxes. There’s also 1GB of RAM, and predictably, WD uses one of its own Red series drives, which are optimised for network-attached storage. Capacity options range from 2TB to 8TB for the single-drive My Cloud Home, and 4TB to 16TB (in mirrored RAID) for the larger My Cloud Home Duo. Only the latter can be opened for its drives to be swapped out.
In the box, you get the drive, a power adapter, a CAT 5E Ethernet cable, a warranty leaflet and a slip of paper with the drive’s unique security key and pictorial setup instructions.
WD My Cloud Home setup, usage and features
The Ethernet port is just about the only thing that the WD My Cloud Home has in common with its predecessor, the WD My Cloud. This is not a traditional NAS device, in that it doesn’t work like a server that sits on your network. It cannot be accessed or configured using universal network standards and protocols. You can’t type its IP address into a Web browser to get to its configuration portal because there simply isn’t one – all you’ll see is an error message.
You have no choice but to use WD’s website to set this drive up. The address is printed on what passes for a quick start guide in the box, and once there, you have to create a WD account. Once that’s done, the drive is associated with your account – it should happen automatically, but that didn’t work for us. We used the backup method of typing in the unique security key, after which we were fine. You then have to download the WD Discovery program (Windows 7-10 or macOS 10.9 and above) and/ or WD My Cloud Home app for Android or iOS in order to actually get anything onto or off the drive. Note that the WD My Cloud apps, which worked with the previous models, don’t work with the My Cloud Home.
WD Discovery is a bit spammy, as we reported in our last encounter with it during our WD My Passport SSD review. However, we couldn’t just avoid it this time, because it’s the only way for your PC to detect the My Cloud Home. On our Windows 10 test machine, the drive was detected and mounted as Z:, though not through any standard network path. We were able to drag and drop files to the drive using Windows Explorer – but only as long as the Discovery app was running and we were online. Unplugging our router’s Internet connection kicked the Z: drive offline even though it was running right next to us. Being unable to access a local network device without an Internet connection is completely and utterly antithetical to the concept of local network-attached storage.
You can drag and drop files in Windows, and the My Cloud website and apps work pretty much exactly how cloud services such as Dropbox, Box, Google Drive and OneDrive work. The primary advantage is that all your saved content is available to you no matter where in the world you sign in from – all you need is a Web browser or your smartphone. WD hasn’t publicised the type of encryption it uses, though the website does use HTTPS. The mobile apps let you upload your device’s camera roll automatically, but not any other type of file.
All administrative functions are also restricted to the apps, and even then, there isn’t much you can do. You can share public links with anyone, but you can’t set read-only permission. If you want anyone else to use the drive, they’ll have to set up their own WD accounts and none of your content is visible to them, even if you want it to be. You also can’t set quotas or see how your space is being used. According to WD’s forums, a common Family folder will be introduced with a future update.
WD has worked out quite a few third-party service integrations. You can set up automatic imports from several popular social media and cloud services. There’s also an Alexa skill and IFTTT integration. However, there’s absolutely zero information about how to use any of these – we had to rely on trial and error and dig through WD’s convoluted online knowledge base for even the most basic instructions. It would be nice to know up front, for example, that the Alexa skill only lets you play your stored music files through an Echo or similar device.
The integration that’s most likely to be useful to people is the Plex media server, which adds a whole new dimension to the My Cloud Home. You have to sign up for an account and go through a fairly long setup process, and then make sure all your media files are in the appropriate Plex subfolders that are created for you. While the My Cloud Home itself doesn’t offer any kind of media server functionality, Plex lets you discover content within its folders using standard DLNA streaming apps and devices. Unfortunately, you’ll have to pay for a Plex Pass subscription if you want to use most of the remote streaming options.
In Windows, you can select folders to sync to the My Cloud Home by right-clicking on them. There’s no way to track which folders are syncing or manage this centrally. The Discovery app advertises WD’s Backup software, but that tool doesn’t recognise the My Cloud Home as a destination.
If you’re expecting any NAS or network appliance functionality, you can forget about it. You can’t even set a static IP address for this device. For some reason, devices plugged into the My Cloud Home’s USB port are not mounted as network resources. You’re given the option to import content from them but not the other way around – and bizarrely, this can only be done through the mobile apps. There’s no FTP or SSH access, no downtime schedule to save power, no per-user permissions, and perhaps worst of all, no easy way to either back up the contents of this drive or use it as a target for third-party backup software.
At this point we should note that there are two exceptions to the app-only access restrictions. If you want to use Time Machine for backups on a Mac, a standard network share will be created because that’s how Time Machine works and it’s too popular to not support. Secondly, one folder called Public is permanently exposed as a standard network share. We used a free network scanning tool to discover the My Cloud Home’s IP address, and were able to mount the Public folder manually both in Windows and in macOS. This folder is visible to everyone on your network – there is no way to assign permissions – which means that anyone can see and even delete its contents without a password. This lets it work with the Windows Backup and Restore tool.
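If you would rather script that discovery step than install a scanning tool, a small sketch like the one below (the subnet prefix is an assumption; substitute your own) checks each address on the local /24 for an open SMB port and prints the UNC path to try for the Public share:

```python
import socket

# Scan a home /24 subnet for hosts answering on the SMB port (445) and print
# the UNC path of the exposed Public share to try. The prefix is an assumption.
SUBNET = '192.168.1.'
SMB_PORT = 445

for host in range(1, 255):
    ip = SUBNET + str(host)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.2)
        if sock.connect_ex((ip, SMB_PORT)) == 0:
            print(f'SMB host found: {ip} -> try \\\\{ip}\\Public')
```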
The Public folder is also for some reason hidden from all the apps, and even the virtual Z: drive in Windows Explorer. There’s no storage quota and no security, and the apps can’t show you how much space its contents are occupying, all of which make it a poor backup destination. At least it’s a standard network share which could come in handy – and yes, it worked just fine without an Internet connection, exactly as it’s supposed to. We just wish we could create more folders and manage permissions, bypassing the apps altogether. Clearly, since WD allowed these two exceptions, it could have gone all the way.
WD My Cloud Home performance
Once plugged in, the WD My Cloud Home can pretty much be left alone. It makes a constant low hissing noise when it’s running, and there are occasional chirps and murmurs when the drive is being accessed. We would have liked some kind of power saving schedule option, but WD doesn’t even publish the unit’s power consumption except to say that there is no standby mode.
We used our 2TB review unit with a Xiaomi Mi Router 3C. On the same network, we had our Windows 10 PC connected wirelessly or using fixed Ethernet, as well as a MacBook Air and iPhone SE (Review) using only Wi-Fi. We performed a few basic file copy operations over Wi-Fi, which is how most of WD’s target audience will use this device.
We found that file transfers were painfully slow using Windows Explorer and the apps – much slower than copying files between computers on the same network. We measured performance using a folder of 150 JPEG images totalling 21MB, which we copied to the My Cloud Home in various ways. First, we dragged and dropped the folder into the Z: drive location using Windows Explorer and measured a ridiculous 2 minutes, 45 seconds. At that rate, it would take forever to back up large collections of movies, music, or photos. Surprisingly, dropping the same folder into the WD My Cloud Home using the web app in Google Chrome took only 1 minute, 25 seconds.
Here’s where our frustration with WD’s decisions really comes into focus: we dragged and dropped the exact same folder of images into the exposed Public folder, and it took precisely 19.81 seconds. That’s how standard network transfers should work, but WD’s overheads completely erase this potential. Finally, we attempted to do the same on our iPhone using the iOS Files app, and found that the My Cloud Home app couldn’t handle importing that many files at once.
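For anyone who wants to reproduce this kind of measurement without timing drag-and-drop by hand, a quick sketch along these lines works (both paths are hypothetical, and Z: assumes the drive mounted by WD Discovery):

```python
import shutil
import time

# Copy a test folder to the mapped drive and report the elapsed time.
# Both paths are hypothetical placeholders.
SOURCE = r'C:\test\150_jpegs'
DESTINATION = r'Z:\speed_test\150_jpegs'

start = time.perf_counter()
shutil.copytree(SOURCE, DESTINATION)
elapsed = time.perf_counter() - start

print(f'Copied {SOURCE} in {elapsed:.2f} seconds')
```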
Verdict
If we look at the WD My Cloud Home as a NAS device, then it feels like a frustrating product that has been dumbed down too far and utterly fails to live up to its potential. However, it isn’t meant to be seen or used as a NAS device. It’s a simplified storage solution for non-technical users that leverages home routers – not even home networks. It serves as a central location where media and backups from multiple PCs and other devices can be stored and retrieved. We can’t fault WD’s motivations for choosing this path. Even we can see that configuring SAMBA shares plus figuring out device and OS admin permissions isn’t everyone’s cup of tea, and we readily agree that the My Cloud Home is better suited to casual users – especially those who are used to how Dropbox and similar online services work.
We just wish that WD had been clearer about this product’s purpose and positioning. It shouldn’t be grouped with the existing WD My Cloud product line which is generally well-liked. There should have been much better communication, especially because it’s so easy to assume that this is a refresh of the My Cloud. WD’s own support forum is full of angry and frustrated buyers venting about how badly this product fell below their expectations.
The 2TB single-drive My Cloud Home is priced at Rs. 11,999 which is enough to buy two 2TB USB drives or pay for a 1TB Dropbox account for slightly less than two years. The 8TB variant costs Rs. 25,150, and the dual-drive models range from Rs. 25,800 for 4TB to Rs. 52,250 for 16TB. If that seems tempting, remember that online cloud services have redundant backend systems and so there’s very little chance of downtime, and data is generally safe in case of hard drive crashes, power surges and failures, fires, and other disasters. My Cloud Home buyers are entirely responsible for the physical upkeep of their devices.
Thankfully, the older My Cloud models continue to exist, and there’s no sign of WD phasing them out just yet. If you’re familiar with basic networking between PCs or are willing to learn, an actual NAS would be a far better choice for you. The My Cloud Home is only good for those who value simplicity above all else and who want their data saved locally rather than online.
WD My Cloud Home (Single Drive) Price:
2TB: Rs. 11,999
3TB: Rs. 13,230
4TB: Rs. 14,725
6TB: Rs. 20,050
8TB: Rs. 25,150
Pros
Easy to set up and use
Can be accessed from anywhere in the world
No recurring subscription charges
Plex media server and other integrations
Looks good
Cons
No standard NAS functionality
No power saving or standby mode
Won’t work with common backup tools
No admin control over folder syncing, permissions, or quotas
Ratings (Out of 5)
Performance: 2.5
Value for Money: 3.5
Overall: 3