#flashcache
Text
#DidYouKnow These Facts About Oracle Database Management Services?
Swipe left to explore!
💻 Explore insights on the latest in #technology on our Blog Page 👉 https://simplelogic-it.com/blogs/
🚀 Ready for your next career move? Check out our #careers page for exciting opportunities 👉 https://simplelogic-it.com/careers/
#didyouknowfacts#knowledgedrop#interestingfacts#factoftheday#learnsomethingneweveryday#mindblown#oracle#database#oracledatabase#databasemanagement#speed#query#flashcache#oracleflashcache#rman#backup#application#applicationservice#didyouknowthat#triviatime#makingitsimple#learnsomethingnew#simplelogicit#simplelogic#makeitsimple
0 notes
Text
Flash Cache Amplifying Data Speed
Flash caching delivers dramatic performance gains by keeping copies of active #data on very fast #flashcache. It also saves money, since customers can replace expensive #SAS drives with cheaper #SATA.
Read more: https://stonefly.com/hyper-converged/appliance

0 notes
Text
How does the ONTAP cluster work? (part 1)
This article covers HW & SDS appliances, disks, ADP, RAID, Aggregates, Plex, FlashPool, FabricPool, FlashCache, FlexArray, NSE, NVE, FlexVol and FlexGroup.
In my previous series of articles I explained how ONTAP system memory works and talked about: NVRAM/NVMEM, NVLOGs, Memory Buffer, HA & HA interconnects, Consistency Points, WAFL iNodes, MetroCluster data availability, Mailbox disks, Takeover, Active/Active and Active/Passive configurations, Write-Through, Write Allocation, Volume Affinities (Wafinity), FlexGroup, RAID & WAFL interaction, Tetris…
View On WordPress
#Data Protection#FC-NVME#FlexGroup#HA#How ONTAP Cluster Works#MetroCluster#NAS#NVMe#NVMeF#ONTAP#ONTAP 9#ONTAP Architecture#ONTAP Select#SAN
0 notes
Link
Acutelearn is a leading training company that provides corporate, online and classroom training on various technologies like AWS, Azure, Blue Prism, CCNA, Cisco UCS, Citrix NetScaler, Citrix XenDesktop, DevOps Chef, EMC Avamar, EMC Data Domain, EMC Networker, EMC VNX, Exchange Server 2016, Hyper-V, Lync Server, Microsoft Windows clustering, NetApp, Office 365, OpenSpan, Red Hat OpenStack, RPA, SCCM, VMware NSX 6.0, VMware vRealize, VMware vSphere and Windows PowerShell scripting. For more information, reach us on +917702999361 / 371 or at www.acutelearn.com
Citrix Netscaler course content:
Introducing and deploying Citrix NetScaler: Introduction to the NetScaler system; Planning a NetScaler deployment; Deployment scenarios; NetScaler platform and product editions; Product features; Hardware platforms and components; NetScaler architecture overview; Initial NetScaler access
Networking: NetScaler-owned IP addresses; NetScaler modes; Network address translation; Virtual local area networks; Link Aggregation; Internet Control Message Protocol; Path MTU discovery; Dynamic routing support and route health injection
Configuring high availability: Introduction to high availability; High availability node configuration; Propagation and synchronization; High availability management
Securing the NetScaler system: NetScaler system communication; Access control lists
Configuring load balancing: Load-balancing process; Entity management; Load-balancing traffic types; Service monitoring; Load-balancing topology, methods, and additional options; Advanced load-balancing methods; Link load balancing; Custom load; Load monitor process; Service and virtual server management; Load Balancing Visualizer
Configuring SSL offload: SSL and digital certificates; SSL concepts; SSL offload overview; Offload performance; SSL administration and deployment decisions; Deployment scenarios; Configuring SSL offload; Creating an SSL virtual server; Advanced SSL settings
Configuring Global Server Load Balancing: GSLB concepts; Metric exchange protocol; GSLB DNS methods; GSLB persistence; Configuring DNS virtual servers; GSLB configuration; Implementing traditional GSLB, proximity-based GSLB, and GSLB failover for disaster recovery; GSLB entity relationship; GSLB site communication example
Using AppExpert Classic to optimize traffic: Policy overview and basics; Hypertext Transfer Protocol; Expression structures; Content filtering; Introduction to compression
Using AppExpert for responder, rewrite, and URL transform: Understanding the packet processing flow; Actions; Understanding bind points; Using pattern sets; Typecasting; Rewrite, responder, and URL transformation overview; Identifying packet processing flow; Basic configurations: policies and actions; Configuring rewrite actions; Rewrite policies; Responder actions and policies; Configuring URL transformation
Using AppExpert for content switching: Introduction to content switching; Configuring content-switching virtual servers; Rule-based policy example
Using AppExpert Advanced to optimize traffic: Compression with advanced policy expressions; Integrated caching; Cache policies and cache expressions; Graceful cache configuration changes; Cache content groups and aging; Content group settings; FlashCache; Global cache attributes; Caching management; AppExpert templates; Policy-based routing
Management: Simple Network Management Protocol; SNMPv3; Dashboard; Reporting and monitoring tools; Auditing and logging; Configuring an auditing server; Global auditing parameters; Configuring auditing policies; NetScaler log management; Replacing a high availability node; Upgrading as a standalone NetScaler system; Upgrading a high availability pair; Password recovery; Network traffic capture using NSTCPDUMP; TCPDUMP options and filter expressions; Network traffic capture using NSTRACE.SH; NSTRACE options and filter expressions
Address: Acutelearn Technologies, Flat No 80 & 81, 4th floor, Above Federal Bank Building, Besides Cafe coffee day Lane, Madhapur, Hyderabad-500081
0 notes
Text
How-to: Improving IO with FlashCache #fix #it #programming
Improving IO with FlashCache
I have a server with two HDDs (2x 1 TB) running in RAID 1 (software RAID). I want to improve IO performance by using flashcache. The server runs KVM virtual machines, using LVM.
Regarding this, I have the following questions:
Will this even work? Flashcache works on block devices, but these are all virtual machines with their own setup.
How much would I expect…
View On WordPress
0 notes
Text
Setting up Flashcache the hard way and some talk about initramfs
If you follow the latest versions of... everything and have tried to install flashcache, you've probably noticed that none of the current guides are correct about how to install it. Or they're mostly correct, but with some bits missing. So here's an attempt at a refreshed guide. I'm using kernel version 3.7.10 and mkinitcpio version 0.13.0 (this actually matters - the interface for adding hooks and modules has changed).
Some of the guide is likely to be Arch-specific. I don't know how much, so please watch out if you're using another system. I'm going to explain why things are done the way they are, so you can replicate them under other circumstances.
Why flashcache?
First, what do I want to achieve? I'm setting up a system which has a large spinning disk (300GB) and a rather small SSD (16GB). Why such a weird combination? Lenovo allowed me to add a free 16GB SSD drive to the laptop configuration - couldn't say no ;) The small disk is not useful for a filesystem on its own, but if all disk reads and writes were cached on it before being written back to the platters, it should give my system a huge performance gain without a huge money loss. Flashcache can achieve exactly that. It was written by people working for Facebook to speed up their databases, but it works just as well for many other usage scenarios.
Why not other modules like bcache or something else dm-based? Because flashcache does not require kernel modifications. It's just a module and a set of utilities. You get a new kernel and they "just work" again - no source patching required. I'm excited about the efforts for making bcache part of the kernel and for the new dm cache target coming in 3.9, but for now flashcache is what's available in the easiest way.
I'm going to set up two SSD partitions because I want to cache two real partitions. There has to be a persistent 1:1 mapping between the cache and real storage for flashcache to work. One of the partitions is home (/home), the other is the root (/).
Preparation
Take backups, make sure you have a bootable installer of your system, and make sure you really want to try this. Any mistake can cost you the entire contents of your hard drive or break your grub configuration, in which case you'll need an alternative method of accessing your system. Also, some of your "data has been written" guarantees are going to disappear. You've been warned.
Building the modules and tools
First we need the source. Make sure git is installed and clone the flashcache repository: https://github.com/facebook/flashcache
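A minimal sketch of fetching the source, using the repository URL above:

git clone https://github.com/facebook/flashcache.git
cd flashcache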
Then build it, specifying the path where the kernel source is located - in case you're in the middle of a version upgrade, this is the version you're compiling for, not the one you're using now:
make KERNEL_TREE=/usr/src/linux-3.7.10-1-ARCH KERNEL_SOURCE_VERSION=3.7.10-1-ARCH
sudo make KERNEL_TREE=/usr/src/linux-3.7.10-1-ARCH KERNEL_SOURCE_VERSION=3.7.10-1-ARCH install
There should be no surprises at all until now. The above should install a couple of things - the module and 4 utilities:
/usr/lib/modules/<version>/extra/flashcache/flashcache.ko
/sbin/flashcache_load
/sbin/flashcache_create
/sbin/flashcache_destroy
/sbin/flashcache_setioctl
The module is the most interesting bit at the moment, but to load the cache properly at boot time, we'll need to put those binaries on the ramdisk.
Configuring ramdisk
An Arch system creates the ramdisk using mkinitcpio (which is a successor to initramfs (which is a successor to initrd)) - you can read some more about it on the Ubuntu wiki, for example. The way this works is via hooks configured in /etc/mkinitcpio.conf. When a new kernel gets installed, all hooks from that file are run in the defined order to build up the contents of what ends up in /boot/initramfs-linux.img (unless you changed the default).
The runtime scripts live in /usr/lib/initcpio/hooks, while the ramdisk-building elements live in /usr/lib/initcpio/install. Now the interesting part starts: first, let's place all the needed bits into the ramdisk by creating the install hook /usr/lib/initcpio/install/flashcache:
# vim: set ft=sh:
build () {
    add_module "dm-mod"
    add_module "flashcache"
    add_dir "/dev/mapper"
    add_binary "/usr/sbin/dmsetup"
    add_binary "/sbin/flashcache_create"
    add_binary "/sbin/flashcache_load"
    add_binary "/sbin/flashcache_destroy"
    add_file "/lib/udev/rules.d/10-dm.rules"
    add_file "/lib/udev/rules.d/13-dm-disk.rules"
    add_file "/lib/udev/rules.d/95-dm-notify.rules"
    add_file "/lib/udev/rules.d/11-dm-lvm.rules"
    add_runscript
}
help () {
    cat <<HELPEOF
This hook loads the necessary modules for a flash drive as a cache device for your root device.
HELPEOF
}
This will add the required modules (dm-mod and flashcache), make sure the mapper directory is ready, install the tools and add some useful udev disk discovery rules. The same rules are included in the lvm2 hook (I assume you're using it anyway), so there is an overlap, but this will not cause any conflicts.
The last line of the build function makes sure that the script with runtime hooks will be included too. That's the file which needs to ensure everything is loaded at boot time. It should contain function run_hook which runs after the modules are loaded, but before the filesystems are mounted, which is a perfect time for additional device setup. It looks like this and goes into /usr/lib/initcpio/hooks/flashcache:
#!/usr/bin/ash
run_hook () {
    if [ ! -e "/dev/mapper/control" ]; then
        /bin/mknod "/dev/mapper/control" c $(cat /sys/class/misc/device-mapper/dev | sed 's|:| |')
    fi
    [ "${quiet}" = "y" ] && LVMQUIET=">/dev/null"
    msg "Activating cache volumes..."
    oIFS="${IFS}"
    IFS=","
    for disk in ${flashcache_volumes} ; do
        eval /usr/sbin/flashcache_load "${disk}" $LVMQUIET
    done
    IFS="${oIFS}"
}
# vim:set ft=sh:
Why the crazy splitting and where does flashcache_volumes come from? It's done so that the values are not hardcoded and adding a volume doesn't require rebuilding initramfs. Each variable set as kernel boot parameter is visible in the hook script, so adding a flashcache_volumes=/dev/sdb1,/dev/sdb2 will activate both of those volumes. I just add that to the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub.
In my case sdb1 and sdb2 are the partitions on the SSD drive - but you may need to change those to match your environment.
Additionally, if you're attempting to have your root filesystem handled by flashcache, you'll need two more parameters. One is of course root=/dev/mapper/cached_system and the second is lvmwait=/dev/mapper/cached_system, to make sure the device is available before the system starts booting.
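For illustration, assuming the parameters are all appended via GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub as described above (device and volume names are from this setup - adjust to yours), the line might end up looking roughly like this, followed by regenerating the config with grub-mkconfig -o /boot/grub/grub.cfg:

GRUB_CMDLINE_LINUX_DEFAULT="quiet flashcache_volumes=/dev/sdb1,/dev/sdb2 root=/dev/mapper/cached_system lvmwait=/dev/mapper/cached_system"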
At this point regenerating the initramfs (sudo mkinitcpio -p linux) should work and print out something about included flashcache. For example:
==> Building image from preset: 'default'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
==> Starting build: 3.7.10-1-ARCH
  -> Running build hook: [base]
  -> Running build hook: [udev]
  -> Running build hook: [autodetect]
  -> Running build hook: [modconf]
  -> Running build hook: [block]
  -> Running build hook: [lvm2]
  -> Running build hook: [flashcache]
  -> Running build hook: [filesystems]
  -> Running build hook: [keyboard]
  -> Running build hook: [fsck]
==> Generating module dependencies
==> Creating gzip initcpio image: /boot/initramfs-linux.img
==> Image generation successful
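The flashcache entry shows up in that list because the hook also has to be listed in the HOOKS array in /etc/mkinitcpio.conf, before filesystems. A sketch matching the order in the output above (your other hooks may differ):

HOOKS="base udev autodetect modconf block lvm2 flashcache filesystems keyboard fsck"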
Finale - fs preparation and reboot
To actually create the initial caching filesystem you'll have to prepare the SSD drive. Assuming it's already split into partitions - each one for buffering data from a corresponding real partition - you have to run the flashcache_create tool. The details of how to run it and the available modes are described in the flashcache-sa-guide.txt file in the repository, but the simplest example (in my case, creating the root partition cache) is:
flashcache_create -p back cached_system /dev/sdb1 /dev/sda2
which creates a devmapper device called cached_system with fast cache on /dev/sdb1 and backing storage on /dev/sda2.
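The home partition gets the same treatment - a hedged example with assumed device names (second SSD partition in front of the real home partition; substitute your own):

flashcache_create -p back cached_home /dev/sdb2 /dev/sda3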
Now adjust your /etc/fstab to point at the caching devices where necessary, regenerate your grub configuration so it includes the new parameters, and reboot. If things went well, you'll be running from the cache instead of directly from the spinning disk.
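Purely as an illustration (mount points, filesystem type and the cached_home name are assumptions), the relevant /etc/fstab entries might end up looking like:

/dev/mapper/cached_system  /      ext4  defaults  0 1
/dev/mapper/cached_home    /home  ext4  defaults  0 2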
Was it worth the work?
Learning about initramfs and configuring it by hand - of course. It was lots of fun, and I got a ramdisk that failed to boot the system only three times in the process...
Configuring flashcache - OH YES! It's a night and day difference. You can check the stats of your cache device by running dmsetup status devicename. In my case, after a couple of days of browsing, watching movies and hacking on Python and Haskell code, I get 92% cache hits on reads and 58% on writes on the root filesystem. On home it's 97% and 91% respectively. Each partition is 50GB of HDD with an 8GB SSD cache. Since the cache persists across reboots, startup times have also dropped from ~5 minutes to around a minute in total.
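For example, for the root cache created earlier:

dmsetup status cached_system

The output should include the read/write hit counters and hit percentages that the numbers above come from.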
I've worked on SSD-only machines before and honestly can't tell the difference between them and one with flashcache during standard usage. The only time you're likely to notice a delay is when loading a new, uncached program and the disk has to spin up for reading.
Good luck with your setup.
0 notes
Link
Flashcache is a simple Block Level Cache for Linux implemented at Facebook.
It is built as a loadable kernel module, as a Device Mapper client. It supports both write back and write through caching modes. Flashcache is pushed onto the storage stack under the filesystem, with the intent of caching disk blocks on SSDs. Linux kernels from 2.6.18 to 2.6.32 are supported.
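For reference, a hedged sketch of how the caching mode is picked when creating a cache with the bundled tool (device names are placeholders; the flashcache-sa-guide.txt file in the repository documents the full set of options):

flashcache_create -p back cachedev /dev/ssd /dev/disk   # write back
flashcache_create -p thru cachedev /dev/ssd /dev/disk   # write through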
The code is available for download on GitHub.
41 notes
Link
According to the Facebook MySQL group:
Flashcache is a simple write back persistent block cache designed to accelerate reads and writes from slower rotational media by caching data in SSD's. We built Flashcache to help us scale InnoDB/MySQL, but it was designed as a generic caching module that can be used with any application built on top of any block device. For InnoDB, when the working set does not fit in the InnoDB buffer pool, read latency is significantly improved due to caching more of the working set in faster media, such as SSD's. We also improve write performance by first caching writes in SSD's and lazily flushing the data back to disk.
3 notes
Text
#DidYouKnow These Oracle Database Management Hacks?
Swipe left to explore!
💻 Explore insights on the latest in #technology on our Blog Page 👉 https://simplelogic-it.com/blogs/
🚀 Ready for your next career move? Check out our #careers page for exciting opportunities 👉 https://simplelogic-it.com/careers/
#didyouknowfacts#knowledgedrop#interestingfacts#factoftheday#learnsomethingneweveryday#mindblown#oracle#database#oracledatabase#databasemanagement#speed#query#flashcache#oracleflashcache#rman#backup#application#applicationservice#didyouknowthat#triviatime#makingitsimple#learnsomethingnew#simplelogicit#simplelogic#makeitsimple
0 notes
Text
Enhancing Write and Read Speeds with Flash Cache
#FlashCache enables #IT environments to process enterprise-grade workloads at incredible speeds. By setting up Flash Cache, businesses can effectively optimize #datastorage & speed up their storage experience.
Explore: https://stonefly.com/hyper-converged/appliance

#data#data security#Data Recovery#database#data storage#data center#hyperconverged#hyperconverged infrastructure#Hyper-converged#hyper converged#flash cache
0 notes
Text
Increasing Speeds with Flash Cache Technology
Flash Cache #technology helps business environments looking to set up high-speed #datastorage for frequently accessed data. #FlashCache enables IT environments to process enterprise-grade #workloads at incredible speeds.
Learn more here: https://stonefly.com/hyper-converged/appliance

#data#data security#data protection#bigdata#data storage#data center#hyper converged#hyper-converged#hyperconverged storage#hybrid cloud
0 notes
Text
Flash Cache Technology
#FlashCache enables #IT environments to process #enterprise-grade #workloads at incredible speeds.
Processing large chunks of #data becomes easier with high-speed flash cache/#SSD that enhances data read & write speeds.
Learn more here: https://stonefly.com/hyper-converged/appliance

#data#data security#data protection#bigdata#data storage#data center#hyper converged#hyper-converged#hyperconverged storage#hybrid cloud#virtualization#software#flash cache#cache#ssd
0 notes
Text
What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas? Part 5
Part 5. AFF, FAS, ASA, ONTAP
AFF & FAS
AFF400, FAS8300, FAS8700. All of these systems are basically the same platform, but with different amounts of memory and different numbers of CPU cores. The FAS systems also have a FlashCache module onboard for read caching. All three require ONTAP 9.7 RC1 and use the latest Intel Cascade Lake processors, which gives us hope we might see Optane as cache, as NetApp showed as part of its vision at Insight 2017:
Rea…
View On WordPress
#All-Flash#Antivirus#Best Practice#Cloud#Containers#DR#FlexGroup#HCI#Max Data#MCC-IP#NAS#NetApp#ONTAP#ONTAP 9#ONTAP Select#Performance#SAN#SnapMirror#Snapshot#StorageGRID#Veeam#VMware#Windows
0 notes