#Reload Partition table
Explore tagged Tumblr posts
n1nchawrites · 6 months ago
Text
Shells
The room is sparse. Sergeant Ryland prefers it this way, for too much clutter can break his concentration during his meditations or brief periods of rest. The only ornamentation to be found sits on the utilitarian slab of a workbench that lies adjacent to his cot. In the low candlelight of his quarters, six pips of brass shine beautifully. They stand evenly partitioned, proud and stout as veterans in parade formation. Each one is etched with fine engravings inlaid with pure silver, depicting Chapter heroes and the Emperor in various engagements against the foes of mankind.
A servitor clicks and hisses as it lumbers towards the table, placing a seventh, freshly polished round on the table, careful to not plant it at an incorrect distance from any other shell; this bolt has just been reloaded, carefully sealed and refurbished by the machine-slave over a period of several hours. The servitor had once been a master artificer, slaving over weapons and tools of war for every day of his life before being condemned to his fate as a thrall. It shows its talents, however, in the deft skill and speed at which it can repair such a fine and unique shell, and Ryland values its presence - it was committed to him some time ago by the Techmarine of his company as a dedicated means of repairing the rounds; they took up too much valuable time for the Chapter artificers to recycle, especially if they were to be kept in perfect condition, and thus a compromise had been met.
Five of the twelve original shells have been lost. The first shell had rolled into a pit of magma during a conflict whose climax was fought in a volcano; Ryland petitioned to have the Chapter recover it, yet the hope for its survival was slim, and the resources spent to recover it simply were not worth the effort, especially for somebody as low-ranking as a Sergeant. The second had been jetted into space during an incursion as they coasted through the Immaterium, their Gellar Fields failing and exposing them to the horrors of the Warp - similarly, the third had fallen into the gaping maw of a Daemon during his defence of the vessel, returning to the Warp with the beast as they entered realspace once more. The fourth was lost during a boarding action, during which they had overloaded the enemy ship's reactors and Ryland did not have the time to return to the core of the frigate before it went critical and erupted in a cascading array of plasma bursts, triggering the Warp core to create a massive vortex and devour whatever was left entirely. The final shell was lost in the last engagement, stolen by a damned Grot as it infiltrated enemy ranks, only to be obliterated by one of the heavy weapon specialists whilst it made its escape to tell the Orks more about the Astartes emplacements.
The value of these shells to Ryland cannot be overstated. To him, they are not mere rounds, but relics, and a capsule in which his proudest memory is stored. They were a gift from the Chapter-Master himself.
Ryland remembers that day fondly, a crystal-clear sequence of events in his mind. It was his first engagement - already a defining moment in an Astartes' legacy - and he was fighting in the same platoon as the Chapter-Master. They were pinned by the enemy, and Ryland had drained his magazine entirely, leaving him dry and without any back-ups, for the spare two were used up on the initial advance. Wordlessly, the Chapter-Master extended an ornate bolter magazine to him: lead-coloured, inlaid with gold. He took it in awe, hammering it into his weapon and fighting with a zeal the likes of which even he did not believe was possible to possess. From that day forth, he made it a principle to collect the spent shells and only use the magazine as a last resort, should the very worst come to pass.
Ryland sometimes wonders if the Chapter-Master ever noticed that he kept the magazine, if he is as proud as Ryland to have access to such beautiful resources and dispense such swift death with them. He wonders, if the shells could speak, what stories they would tell; would they speak of great foes vanquished with their blistering diamond tips and explosive cores, of the honour of having been in service for such a long span of time, of the sweeping fields of battle and tight voidship corridors they had sped down and across, or something else entirely?
Ryland feels himself smile. He picks up one of the rounds, looking at his reflection in it - his features become warped, exaggerated, and cracked as the etchings on the brass cylinder carve and arc their way across his visage. He smiles wider; it is as if the rounds are joking with him. They are, after all, his closest and only assets, and as strange as it is to assign personality traits to something inanimate, he cannot help but do so with these treasures.
He sets the shell back in its place and steps away, a smirk still playing on his face as he takes the gilded magazine from the ammunition pouch on his leather belt and feeds the shells in, loading them in the exact sequence he has followed countless times before, then holsters the crescent-shaped magazine back where it belongs. He is to go to war soon, and he would not be caught dead without his relic. Without his shells.
10 notes · View notes
learning-code-ficusoft · 3 months ago
Text
Understanding Data Movement in Azure Data Factory: Key Concepts and Best Practices
Introduction
Azure Data Factory (ADF) is a fully managed, cloud-based data integration service that enables organizations to move and transform data efficiently. Understanding how data movement works in ADF is crucial for building optimized, secure, and cost-effective data pipelines.
In this blog, we will explore:
✔ Core concepts of data movement in ADF
✔ Data flow types (ETL vs. ELT, batch vs. real-time)
✔ Best practices for performance, security, and cost efficiency
✔ Common pitfalls and how to avoid them
1. Key Concepts of Data Movement in Azure Data Factory
1.1 Data Movement Overview
ADF moves data between various sources and destinations, such as on-premises databases, cloud storage, SaaS applications, and big data platforms. The service relies on integration runtimes (IRs) to facilitate this movement.
1.2 Integration Runtimes (IRs) in Data Movement
ADF supports three types of integration runtimes:
Azure Integration Runtime (for cloud-based data movement)
Self-hosted Integration Runtime (for on-premises and hybrid data movement)
SSIS Integration Runtime (for lifting and shifting SSIS packages to Azure)
Choosing the right IR is critical for performance, security, and connectivity.
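To make the choice concrete, here is a minimal sketch (not an official recipe) using the azure-identity and azure-mgmt-datafactory Python packages to register a linked service that reaches an on-premises SQL Server through a self-hosted IR. The subscription, resource group, factory, runtime name and connection string are all placeholders, and the exact model names should be checked against the SDK version you install.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeReference,
    LinkedServiceResource,
    SqlServerLinkedService,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "rg-data"              # placeholder
factory_name = "adf-demo"               # placeholder

adf = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Route traffic through the self-hosted IR instead of the default Azure IR,
# which is what makes on-premises/hybrid data movement possible.
onprem_sql = SqlServerLinkedService(
    connection_string="Server=onprem-sql;Database=sales;Integrated Security=True;",
    connect_via=IntegrationRuntimeReference(reference_name="SelfHostedIR"),
)

adf.linked_services.create_or_update(
    resource_group, factory_name, "OnPremSqlServer",
    LinkedServiceResource(properties=onprem_sql),
)
```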
1.3 Data Transfer Mechanisms
ADF primarily uses Copy Activity for data movement, leveraging different connectors and optimizations:
Binary Copy (for direct file transfers)
Delimited Text & JSON (for structured data)
Table-based Movement (for databases like SQL Server, Snowflake, etc.)
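As a sketch of how Copy Activity is wired together, the following assumes the azure-mgmt-datafactory SDK and two already-defined blob datasets (InputBlobDataset and OutputBlobDataset are invented names); it defines a binary blob-to-blob copy pipeline and triggers one run.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A Copy Activity wires a source dataset to a sink dataset (binary copy here).
copy = CopyActivity(
    name="CopyRawFiles",
    inputs=[DatasetReference(reference_name="InputBlobDataset")],
    outputs=[DatasetReference(reference_name="OutputBlobDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

pipeline = PipelineResource(activities=[copy])
adf.pipelines.create_or_update("rg-data", "adf-demo", "CopyPipeline", pipeline)

# Trigger an execution of the pipeline and print the run id for monitoring.
run = adf.pipelines.create_run("rg-data", "adf-demo", "CopyPipeline", parameters={})
print(run.run_id)
```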
2. Data Flow Types in ADF
2.1 ETL vs. ELT Approach
ETL (Extract, Transform, Load): Data is extracted, transformed in a staging area, then loaded into the target system.
ELT (Extract, Load, Transform): Data is extracted, loaded into the target system first, then transformed in-place.
ADF supports both ETL and ELT, but ELT is more scalable for large datasets when combined with services like Azure Synapse Analytics.
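To illustrate what the "transform in-place" step of ELT looks like, here is a rough sketch using pyodbc against a hypothetical Synapse dedicated SQL pool; the connection string, storage URL and table names are invented, and the COPY INTO options should be checked against the Synapse documentation.

```python
import pyodbc

# Placeholder connection string; in practice it would come from Key Vault.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=dw;UID=loader;PWD=<password>"
)
cur = conn.cursor()

# 1. Load: bulk-ingest the raw CSV files into a staging table first.
cur.execute("""
    COPY INTO stg.sales_raw
    FROM 'https://mystorage.blob.core.windows.net/raw/sales/*.csv'
    WITH (FILE_TYPE = 'CSV', FIRSTROW = 2)
""")

# 2. Transform: reshape the data in place with the warehouse's own compute
#    (the "T" happens after the "L", which is what defines ELT).
cur.execute("""
    INSERT INTO dbo.sales_clean (order_id, order_date, amount)
    SELECT order_id, CAST(order_date AS DATE), CAST(amount AS DECIMAL(18, 2))
    FROM stg.sales_raw
    WHERE amount IS NOT NULL
""")
conn.commit()
```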
2.2 Batch vs. Real-Time Data Movement
Batch Processing: Scheduled or triggered executions of data movement (e.g., nightly ETL jobs).
Real-Time Streaming: Continuous data movement (e.g., IoT, event-driven architectures).
ADF primarily supports batch processing, but for real-time processing, it integrates with Azure Stream Analytics or Azure Event Hubs.
3. Best Practices for Data Movement in ADF
3.1 Performance Optimization
✅ Optimize Data Partitioning — Use parallelism and partitioning in Copy Activity to speed up large transfers.
✅ Choose the Right Integration Runtime — Use self-hosted IR for on-prem data and Azure IR for cloud-native sources.
✅ Enable Compression — Compress data during transfer to reduce latency and costs.
✅ Use Staging for Large Data — Store intermediate results in Azure Blob or ADLS Gen2 for faster processing.
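The partitioning advice can be made concrete with a small helper. This is plain illustrative Python, not ADF configuration: it splits a numeric key range into even slices whose WHERE clauses could drive parallel copies or a partitioned source query.

```python
def partition_ranges(min_id: int, max_id: int, partitions: int) -> list[str]:
    """Split [min_id, max_id] into roughly even slices; return one WHERE clause per slice."""
    total = max_id - min_id + 1
    size = -(-total // partitions)  # ceiling division so every row lands in exactly one slice
    clauses = []
    for start in range(min_id, max_id + 1, size):
        end = min(start + size - 1, max_id)
        clauses.append(f"id BETWEEN {start} AND {end}")
    return clauses

# Example: four parallel slices over ids 1..1_000_000
for clause in partition_ranges(1, 1_000_000, 4):
    print(clause)
```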
3.2 Security Best Practices
🔒 Use Managed Identities & Service Principals — Avoid using credentials in linked services.
🔒 Encrypt Data in Transit & at Rest — Use TLS for transfers and Azure Key Vault for secrets.
🔒 Restrict Network Access — Use Private Endpoints and VNet Integration to prevent data exposure.
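As one example of keeping credentials out of linked services, supporting code can resolve secrets at runtime through a managed identity. A minimal sketch with the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a managed identity when running inside Azure,
# so no username or password ever appears in code or configuration.
credential = DefaultAzureCredential()
secrets = SecretClient(vault_url="https://my-keyvault.vault.azure.net", credential=credential)

sql_connection_string = secrets.get_secret("sql-connection-string").value
```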
3.3 Cost Optimization
💰 Monitor & Optimize Data Transfers — Use Azure Monitor to track pipeline costs and adjust accordingly.
💰 Leverage Data Flow Debugging — Reduce unnecessary runs by debugging pipelines before full execution.
💰 Use Incremental Data Loads — Avoid full data reloads by moving only changed records.
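A common way to implement incremental loads is the high-watermark pattern: remember the newest modification timestamp already copied and only move rows that are more recent. A rough, database-agnostic sketch with pyodbc (the control table, column names and connection string are invented):

```python
import pyodbc

conn = pyodbc.connect("<target-connection-string>")  # placeholder
cur = conn.cursor()

# 1. Read the watermark recorded by the previous run for this table.
cur.execute("SELECT last_modified FROM etl.watermark WHERE table_name = 'orders'")
last_watermark = cur.fetchone()[0]

# 2. Determine the new watermark; the actual transfer (for example an ADF Copy
#    Activity) would use a source query bounded by these two values.
cur.execute("SELECT MAX(modified_at) FROM dbo.orders WHERE modified_at > ?", last_watermark)
new_watermark = cur.fetchone()[0]

# ... copy rows WHERE modified_at > last_watermark AND modified_at <= new_watermark ...

# 3. Persist the new watermark so the next run starts where this one stopped.
if new_watermark is not None:
    cur.execute(
        "UPDATE etl.watermark SET last_modified = ? WHERE table_name = 'orders'",
        new_watermark,
    )
    conn.commit()
```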
4. Common Pitfalls & How to Avoid Them
❌ Overusing Copy Activity without Parallelism — Always enable parallel copy for large datasets.
❌ Ignoring Data Skew in Partitioning — Ensure even data distribution when using partitioned copy.
❌ Not Handling Failures with Retry Logic — Use error handling mechanisms in ADF for automatic retries.
❌ Lack of Logging & Monitoring — Enable Activity Runs, Alerts, and Diagnostics Logs to track performance.
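ADF activities expose retry and retryIntervalInSeconds settings in their policy, but the same idea is easy to sketch for custom steps; the wrapper below is purely illustrative plain Python with exponential backoff (the trigger_pipeline_run function in the usage comment is hypothetical).

```python
import time

def with_retries(action, attempts: int = 3, base_delay: float = 2.0):
    """Run action(); on failure wait 2s, 4s, 8s, ... and retry, re-raising after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as exc:
            if attempt == attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1))
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Example usage for a flaky step, such as a REST call that starts a pipeline run:
# with_retries(lambda: trigger_pipeline_run())
```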
Conclusion
Data movement in Azure Data Factory is a key component of modern data engineering, enabling seamless integration between cloud, on-premises, and hybrid environments. By understanding the core concepts, data flow types, and best practices, you can design efficient, secure, and cost-effective pipelines.
Want to dive deeper into advanced ADF techniques? Stay tuned for upcoming blogs on metadata-driven pipelines, ADF REST APIs, and integrating ADF with Azure Synapse Analytics!
WEBSITE: https://www.ficusoft.in/azure-data-factory-training-in-chennai/
0 notes
2daygeek · 6 years ago
Photo
(via How To Reload Partition Table In Linux Without System Reboot?)
0 notes
nixcraft · 2 years ago
Text
-> Re-read The Partition Table Without Rebooting Linux System
5 notes · View notes
theunderwoodtypist · 4 years ago
Text
The Huntress, Chapter 1
Eyes watched her as she walked through the crowded promenade. Nervous gazes flitted in her direction, quickly redirecting away from her when she glanced back. Her ears perked at any and all whispered, hushed tones. Station denizens stepped out of her path, staring with wide eyes as she passed. Station security chose to look the other way, as they had learned from past run-ins with her kind.
She found the club she had been searching for, the one with a curtain of beads partitioning it off from the bustling walkway outside; the large tinted windows offered very little view of the inside except some shadowy movement. She stood just outside the club, eyes flicking back and forth, studying the crowd inside through the multicolored beads. The scent of tobacco, marijuana, and other herbal inhalants burned her nostrils. She brushed the strands of beads aside and pulled the cloth covering her face down with one hand, and pulled her goggles up with the other, resting them on her forehead, just shy of her pointed ears. The blue and violet lights pulsed along with the thumping of the music. Dancers wearing translucent garments of various colors danced on pedestals in a ring around the circular bar that took up the center of the establishment. Her whiskers could feel the faint static charge of the invisible forcefields around the pedestals, in place either to keep the patrons from getting handsy with the dancers, or to keep the dancers from escaping, she wasn’t sure which. In the back by the bar, twin staircases curved upward to a small balcony. A single doorway, also curtained by beads, led to the back rooms.
She surveyed the crowd carefully, sizing each patron up. They were a rough crowd, civilian cargo runners mostly, stopping at the station to unload and get some much-needed relaxation. She recognized a few smugglers she had picked up before. Most of the patrons were honest and hardworking, others simple men and women trying to make their way in the galaxy, though through illegal means. Which suited her just fine. A hunter was nothing if she didn’t have prey to pursue.
The one she was after was a goblin. Nasty little things. Orange skin, pointed teeth, large ears. They were disgusting little ghouls, slimy, and they smelled of bile. He was worth quite a bit, and her sources told her he knew things. Things she needed to know. His twin ran the bar here. The owner was some unknown individual, some benefactor that hid in the shadows. They probably kept the bar going to traffic weapons or drugs. Why else would someone not want their name on the documents? She approached the bar after she was satisfied with her initial survey of the crowd. The squat orange goblin looked her up and down for a moment.
“Well…” he said in a slimy, rasping voice, showing her his yellowed, pointed fangs. “Not too often we get mau in here… What can I get for you, little kitty?” She ignored the derogatory remark and pointed at a bottle of blue liquid in the glass case behind the bar. The goblin turned and pulled the bottle out, poured a small splash of it into a glass, and slid it across the bar to her. She set a rounded flat disk on the bar and tapped the surface of it. Sand-like particles orientated themselves into the three-dimensional image of another goblin.
“Do you know where he is?”  she asked coldly.  The goblin scowled at the image.
“What’s he done this time?”  He hissed.
“He owes my employer enough credits to buy a small star cruiser,” she hissed back, showing her own sharp teeth.
“Listen, little kitty, why don’t you run on back to your employer, before you get your pretty little self hurt,” he growled. She smirked. She loved it when they played hard to get.
“Just tell me where he is, and I won’t bring you in as well, for the two and a half thousand credit bounty you have on you.”  she downed her drink in one gulp and glanced over her shoulder across the club, just to check on her surroundings.  Her eyes locked onto an eros boy.  He was young, not much into adulthood.  His black hair was unruly, and his grey skin made him look almost like a shadow in the pulsing lights around him.  He studied her with his mismatched blue and green eyes.  He was armed.  A handgun of some sort, holstered on his thigh, as well as a few knives.  He wasn’t wearing the security band around his upper arm that meant he was permitted to carry a weapon.  He had probably snuck around the security check points.  He had an air of nobility, but the posture of a man who was unfazed by violence and death.  He offered her a soft,  gentle smile.  If things went wrong… She would have to drop him quickly. She turned back to the goblin.
“Have you decided?”  She asked, gesturing for the goblin to refill her glass.  He obliged and she downed the drink.
“My brother is in the back rooms. I’ll go get him for you…” he said with an unsettling fanged grin.
“Good boy…” She smirked, watching him closely as he went up the stairs and into the back. She unbuckled the strap holding her sidearm in place on her thigh and glanced around, looking for the eros again, but he had vanished. Good… eros tended to be excellent marksmen, and small targets to hit. She didn’t want to have to deal with more bloodshed than necessary.
The goblin bartender rushed out of the back rooms with a disrupter rifle in hand. She swore loudly and pulled her handgun before dropping to the floor as red-hot bursts of plasmic energy streaked through the air. The club erupted in screams and panic as patrons scrambled to get away from the weapon fire. She peeked over the bar and fired twice toward the goblin. Both shots missed, but she didn’t have a good shot. She figured he would want to take cover if he was being shot at. She had been right; the goblin ducked back into the back room, giving her enough time to scramble to her feet and find a better vantage point. There were nine fairly well covered spots in the club; each, unfortunately, left her back open, and she wasn’t sure how many of the patrons were part of this goblin’s inner circle and armed. She pulled her goggles down again and fired a few more shots as she moved, ducking behind one of the dancers’ pedestals. The boy atop it cowered, unable to get off because of the forcefield, but safe because of it as well. Disrupter blasts scorched the wall behind her and the ceiling. Patrons were still clambering to get out of the club, crawling over each other, shoving each other to the ground. They had effectively blocked the only way in or out. The goblin would be hers.
She fired twice and ducked down again as disrupter blasts impacted the forcefield, their energy redistributed and funneled into the power buffers. If enough hit the field, it could overload and cause the emitters to explode. A few more blasts hit the field. She swore under her breath and fired a few more times. This was going nowhere quickly, and at this rate she would lose the majority of the bounty paying for the damages. She grumbled and pulled a few small disks from a few of the numerous pouches on her belt and on her thigh. She flicked her wrist back, sending them flying toward the goblin. One flashed brightly; the other erupted in a cloud of smoke. She dove out from her cover and fired twice into the cloud of smoke. She stayed still, watching, waiting… She knew she had hit one of the goblins; she could see him struggling to get up through her goggles’ infrared scans. She cautiously approached the stairs. She kicked the disrupter rifle over the edge of the platform at the top of the stairs and pulled the goblin to his feet. He laughed and then winced in pain. She had shot him in the shoulder and the calf, no permanent damage.
“Why are you laughing?” she hissed. He looked into the back room. She followed his gaze and froze. There was no other goblin… He had lied… Her eyes locked onto a pulsing red light on the floor. A second disrupter had been set to overload, on the verge of going critical.
“It will destroy half the station!” he cackled. She swore loudly in her native tongue and dove off the balcony. She grabbed one of the heavy mahogany tables and flipped it over, ducking behind it just as the disrupter went critical. The blast blew out all the heavy tempered glass around the entrance of the club, forcing club-goers, tables, bar stools, and other decor and objects through them, herself included. She hit the ground hard, coming to a tumbling halt; shards of glass and bits of the composite bulkheads were strewn around her, as if someone had thrown them about like confetti. She pushed herself up, her head spinning, ears ringing. Clearly the explosion did not destroy half the station… She struggled to her feet, barely keeping her balance, staring at the smoking front of the club. People dragged their friends’ lifeless bodies out from under debris; people wailed and screamed; others lay lifeless, killed by the blast. She looked around for her handgun. It had been knocked from her hands by the blast. The glint of gold caught her attention a few feet away. She pulled the gun from the debris and reloaded it, limping back into the club to confirm the kill. Before she could make it back inside, station security had her surrounded. She swore and jammed her gun back into its holster and pulled her goggles back up as she raised her hands over her head.
“My name is Tivali, I’m with the Hunter’s Guild, I have permission from the station master to use deadly force if necessary here.”  she said as the security guards pulled her arms behind her back, fastening restraints to her wrists.  She rolled her eyes but complied with every order they gave her, and answered every question. This was merely an inconvenience…  A rather annoying, time consuming, headache of an inconvenience that would cost her time, money, and her prey… 
37 notes · View notes
ressyfaerie · 4 years ago
Note
Hi there! I hope I'm not too late for sending in asks 👉👈
Prompt : kai and Bryan having feelings for each other and being pissed off about it.
And you can use the dub names :)
Okay so I actually thought about this one for awhile and added some RESSYFAERIE FLARE so I hope you like it! Lets get these 2 silver boys goin- (disclaimer i know fuckin nothing about guns cause im canadian lmao), also I don’t know much about Bryan’s character (I mean there’s not much to go off of in the anime haha so I hope I get it right!). * are private thoughts! LOTS OF F BOMBS, because apparently I can’t write these Russian fucks without swearing every 5 seconds-
Kai finally admitted it.
He forgot how to shoot.
Obviously- he forgot most things from the abbey, and he preferred to keep it that way, but this one thing-
“I get why you forgot.” Bryan shrugged his shoulders in their Russian training room. “But why do you want to remember?” 
Kai hesitated before answering, “I want to take over the company one day. But I can’t take over a weapons manufacturing company if-”
“You forgot how to shoot a gun.” Bryan completed his sentence for him so he wouldn’t have to struggle. 
“Yeah.” Kai let the word fall like thick tar. 
“We all learned in the abbey, I know that.” Kai was desperate to get these thoughts out.
“Benefits of being a child soldier.” Bryan kicked his legs up and balanced them on a table. 
Kai looked to the side.
“You’re worried about something else, what is it?” Bryan’s dominating attitude never affected Kai, but it was worth a shot. 
Kai simply looked back at him, his classic emotionless stare. 
“Spill the damn beans Hiwatari.” 
Kai sighed, “What if- I remember something when I’m learning.”
“That’s why you can’t have just anybody teach you?” Bryan nodded finally understanding. 
“Yeah. I imagine Boris’ teaching techniques weren’t exactly-”
“Normal?” 
“Ethical.” Kai chose his words wisely, like always. 
“If you stop with that proper speech around me I’ll teach you.” 
Kai’s eyes lit up slightly. “Really? I wasn’t expecting you to agree so easily.” 
“Gun range tomorrow. Bright and early. I’ll give you bonus points if you bring one of those new Hiwatari pistols.” 
“You know I can’t do that.”
“Ah,” Bryan shrugged, “Worth a try.” 
-
Kai waited in the range; Bryan was, as usual, late. 
“Hey Kai.” Bryan approached with his learned silent walk. Kai jumped slightly when he appeared behind him. “Sorry, didn’t mean to scare you.” 
Kai rolled his eyes. “You totally did, don’t lie.” 
Bryan gave him that deep growl laugh that Kai liked.
*Wait liked? Did I just think that- For real?*
Inside the range it was dead quiet. 
“I like coming here early, no one is here. Not many people like to play with guns first thing in the morning.” Bryan grabbed a table and moved it closer to one of the stalls as if he owned the place. 
Kai crossed his arms, watching him and absorbing him- his technique. 
*Focus on learning you idiot-*
Kai shook his head to clear his thoughts. 
Kai landed back on planet earth and wondered when Bryan had placed the pistols on the table in front of him.
“Do you remember how to load?” Bryan began to tear one apart piece by piece.
Kai shook his head. 
“Try.” Bryan used one hand to gesture to the table. “Oh- wait.” 
Bryan walked towards Kai and got a bit close for comfort.
“What the fuck are you doing-” Kai flinched back and closed one eye when Bryan plopped something around his neck. 
Kai felt his face flush a bit.
*Fuck, fuck fuck fuck-*
Kai reached for the heavy thing around his neck, “Oh- earmuffs.” 
“What do you think I would put around your neck? Weirdo.” Bryan shot him a confused glare. 
“I- I don’t know!?” Kai became worried when he didn’t think through his response, he sounded like a teenage girl, he hated it. 
Kai took a deep breath and approached the table.
“It’s not loaded.” Bryan reassured him. 
Kai went to grab the pistol but his hand hovered overtop of it, he was scared, it frustrated him that he didn’t know why. 
Bryan slowly grasped his hand and lowered it onto the gun. He held his hand there for at least a minute, Kai wasn’t counting, but he was shaking slightly. 
“Don’t be scared I’m here.” 
“Fuck off-” Kai tried desperately to not be a schoolgirl but he came off a bit rough. 
“Alright fuck you too then figure it out yourself.” Bryan ripped his hand back at the speed of light. 
Kai’s emotions were all over the place; he hadn’t even noticed himself picking up the pistol and treating it like an old friend, loading it with practiced ease. 
“Woah.” Bryan’s eyes grew wide; he felt a feeling flutter in his chest, and again he pushed it away - menacing, stupid feelings. 
Just like that Kai was holding a loaded pistol. He stopped in his tracks and let his eyes fall on it. He was silent. 
“You alright?” Bryan worried about him, he worried about him all the time.
“I’m fine. How do I shoot it?” Kai turned to the partitions mentally locking on to the targets far away.
“Try it for yourself first- oh.”
Kai was already in form, so Bryan took it upon himself to strut over and put his ear protection on for him. 
Kai wasn’t pulling the trigger. He left the safety on, unmoving. 
“Here you have to fix your stance.” Bryan poked Kai’s bicep when he didn’t respond.
*Guess he can’t hear me.*
He grabbed Kai’s leg and tried to organize it differently, then kicked his other foot. “There, that’s good. Now your arms-” He delayed, thinking of how to do this without getting his hands all over Kai- As much as he wanted to-
*What the fuck!? I don’t want to touch him- I what- Ew!*
“I have to grab your arms Kai.”
“What!?” Kai was confused. 
“Okay here-” Bryan stood behind him and wrapped his arms over Kai’s shoulders. He reached for his forearms and tried to move them, “Oh my god Kai fucking untense jesus-” 
Kai let himself relax a bit. 
“There. Wait-” 
Bryan leaned his head over Kai’s shoulder and clasped his hands around Kai’s over the pistol.
“This isn’t quite right-” He moved a few of Kai’s fingers. “There!” He jumped back fast trying to hide his red face from Kai. 
*What? I’m so done with these feelings it’s just Kai it is JUST KAI!*
Kai’s breaths were ragged. 
Bryan became worried. He got closer to Kai again and grasped his shoulder, “In your own time.” 
Kai’s jaw moved, he bit his lip and held the gun tighter. 
*Fuck that’s hot-*
And he shot the gun 1, 2, 3, 4 times. 
Bryan felt the electric shock run through his body on every shot. 
Kai turned to look at him, he took off the ear protection. 
“Can you reload it?” Bryan asked.
“Yeah.” Kai began to reload it and then went back to his stance. He grew uncomfortable. “Can you show me again?” Kai asked uneasily. 
“Of course.” Bryan took his old position behind Kai, this time leaning into his back more, because, well- Fuck you that’s why. 
 “Put my earmuffs back on asshole.” Kai grinned. 
“Of course, fuckface.” Bryan accidentally grazed Kai’s chin when he grasped the muffs. 
He put them on Kai’s ears and gave them a pat, he still stayed behind Kai, he wasn’t sure why, and it pissed him off that he couldn’t move. 
Kai shot the gun again 4 times. The sound from behind was deafening and Bryan leaned into his shoulder a bit more; against his will, the words he had been thinking for a while slipped out.
“I think I like you and it’s driving me fucking crazy-” 
Kai still held the gun, unmoving. 
*He couldn’t have heard me, there’s no way- the earmuffs*
“What did you say?” Kai angled his head to stare at him. 
“N-Nothing- reload and shoot asshole-” 
“It sounded like you’re gross.” Kai couldn’t keep his face expressionless, he was cracking a smile, he hated it. 
“I’m not gross?” 
“Even though you like me?” 
The gun range had never been quieter. 
Bryan’s face turned red as a beet, first from embarrassment, then anger. 
“I don’t fucking like you, you rich brat fuck you! Fuck!” Bryan pushed himself away from Kai. Kai put the gun back and took the earmuffs off, setting them on the table beside the gun. 
“These earmuffs suck.” Kai had never been more stoic, it pissed Bryan off.
“How the FUCK are you so NORMAL right now? Are you messed in the head-” 
“Yes.” Kai’s expression grew from stoic to furious. 
“You’re an idiot Kai Hiwatari- and you can learn how to shoot yourself- Die for all I care-” Bryan crossed his arms, his face was still the same shade of red. 
“You’re the idiot, moron!” Kai rolled his eyes.
“Why? You know what fuck you-” 
“Because I think I like you too… Bitch.” Kai’s last word didn’t hit as hard, but Bryan felt like he had been hit with a shotgun. 
“You’re an idiot Kai really? For real. You’re fucked.” 
“You’re fucked- I can’t even shoot a gun and yet here you are-” 
“Cause of your fucking soft skin and dumb breath and stupid lips and your Adam’s apple, it’s just always there!” Bryan raged and took a few steps towards Kai; he was ready to fight this guy. 
“Well I hate your stupid hair and face- and your videogame obsession and- mf!” 
Kai’s words were cut short when Bryan grabbed his stupid scarf and the back of his moronic head and pulled him closer kissing him in the process. 
Angry making out is weird, but they loved it. 
Once they pulled away someone had to break the silence. Kai decided it had to be him, somehow the more sociable of the duo. 
“I think I know how to shoot a gun now… Thanks.”
14 notes · View notes
Text
30 Day “Rare Pair” Writing Prompts -- #3 : “Pro Hero Duo”
Ten Years After Graduation . . . .
~= Outside the Tythonia Financial Group Building, Musutafu, Japan . . . . =~ . . “I don’t care what you want! Either you pull your people back, or I’m going to shoot ONE hostage for every five minutes you stay in our way!”
Chief Inspector Sora snarled as the phone line was cut off. He looked to the Sergeant manning the comm-panel aboard the unit’s electronics van, and his expression of displeasure deepened when the man shook his head. “Sorry, Keibu! I can’t restore the line. They must have physically cut it!”
Inspector Sora nodded curtly, before turning to speak to another junior Police Officer. “Tell every man out there I want NO action taken, unless I give the order to do so!” As the man acknowledged the order and moved to relay it, the tall senior officer turned back to the technician. “Do whatever you can to reestablish contact. We cannot allow them to harm a single hostage!” Stepping down and out of the van, Inspector Sora marched to where a shielded barricade had been established in front of the bank entrance. Crouching down, he asked the senior Police Officer kneeling there, “Any changes?”
“Nothing, Keibu,” the man replied. “The thieves are staying well away from the windows and main entrance. We last spied one of them shooting out any surveillance cameras inside the main lobby.” With a grimace, he explained, “They must know we’d be able to tap them to get a view inside!”
Inspector Sora grunted. “Stay focused. I’m going to speak with the Captain. We’re running out of options, and time!” Rising, he headed back to the comm-van, only to be intercepted by the Police Officer he’d sent out earlier.
“Keibu! Sir, I’ve relayed your orders to all platoons,” he said. “Also, I’ve re-checked with the comm-tech. Still no good on reestablishing the land-line to the bank. We also cannot raise the thieves using any wireless tech!”
“They must have switched off their phones!” Sora looked back at the bank. “They clearly aren’t using wireless communications like radio-wave transmitters or such . . . Damn it! We’re going to lose a hostage if we don’t act.”
“Excuse me, Inspector?”
Both Inspector Sora and his Officer turned in alarm -- hands reaching for their weapons -- only to freeze in shock.
“Sorry to surprise you, but we only just heard of this incident as we were passing through. Do you require any help?”
Inspector Sora put a calming hand on the shoulder of his Officer, before nodding. “Yes, indeed . . . I mean, any aid you are willing to give would be most appreciated.”
“Then, give us a run-down. What do you know about things inside the bank?”
At that, Inspector Sora nodded to the comm-van. “Come with me, we may not have much time, but I can tell you all I know . . . .” . . . ~= Inside the Tythonia Financial Group Building, Minutes Later . . . . =~ . . . “Hoi, we’re almost to the deadline, Boss! What are we gonna do?”
Standing well clear of the open lobby space, Yakunan dropped the heavy-gauge shotgun onto his shoulders, grinding his teeth as he turned back to stare at his three henchmen. The seven bank employees were tied up and piled underneath the shielded array of the main counter. Damn it! We need to get out of here! He looked at where they had several large canvas duffel bags stacked up -- all filled with stolen money and other goods they had liberated from the vault. Damn the cops! If they’d listen and just pull back--!
“Boss,” one of the thugs said, moving close to whisper harshly. “Just checked . . . the riot unit’s pulling back!”
Yakunan blinked. “What? Are you sure?!” he asked, his voice husky with tension.
“Side window, in that cubicle office,” the thug said, nodding to a room off to their right.
Moving cautiously, Yakunan slipped around the doorway to the thin window on the wall. He shifted the blinds a bit, peering out. Shocked at the sight of the R.P.U. pulling back and breaking down their barricade, he grunted and turned to wave at the thug. “It’s legit! They’re pulling back.”
“So, what do we do now, Boss?”
Coming out of the office, Yakunan said, “Get on the radio, call up the rest of the boys! We’re going out the back entrance!” He grunted and motioned to the others. “Grab the money and loot, and move it, now!”
“What about them?” another thug said, pointing his weapon at the hostages; the women were whimpering with fear, while the men were trying to look stoic and brave.
Unlimbering his shotgun, Yakunan twisted his head to loosen it up before saying, “Pick two. They’ll come with us for insurance. Shoot the rest when we’re free to move out.”
At that, the closest thug made a move to grab one of the women out of the clustered hostages. As he got a grip on a young woman, he suddenly jerked upright with a choked gasp and flew through the air with a yelp. Crashing into a glass partition, he tumbled to a stop, unconscious and limp.
The remaining thugs and Yakunan whirled around, looking to see what had attacked them . . . but there was nothing there! “What the hell--?”
There was another shriek of surprise, and another thug crashed through a closed office door behind them.
Another thug shouted, “What is going on--!?”
“Cover! It’s a Hero!” Yakunan snapped. He spun towards the open lobby, ratcheting his shotgun before leveling it over a low table.
The rest of his men followed suit, weapons loaded and cocked as they found whatever place to hide behind. One ducked down near the hostages, swiveling his pistol around to aim it, only to have it go flying out of his grip before he himself found his body launched the length of the counter space.
“Which Hero is it, ya think?”
“Does it really matter?!”
“They’re either shielded or stealthed,” Yakunan growled. “Quick! Shoot everywhere!”
With that, the remaining thieves began shooting out into the open lobby space. Gunfire and a hail of bullets and shells shattered the relative silence, and several pieces of furniture in the process.
Watching, Yakunan was in the process of reloading his weapon, when a sharp cry of pain cut through the gunshots, and a figure materialized out of thin air to slam into the floor. “Stop shooting!” he barked, as he stood up and aimed his shotgun at the fallen figure. “We’ve got them now!” He started to stalk towards them, his feet making crunching sounds on shattered glass and shredded plastic.
From where they had fallen, the Hero groaned, but quickly got her feet underneath herself before looking up to see the approaching masked robber.
“You blasted . . . you thought you could take us out, alone?” Yakunan said darkly. “Now, you’re going to die with the hostages we won’t be sparing!”
One of the other thugs had come up to flank his leader, and was in the process of reloading his weapon when he got a good look at the Hero on the floor. “Oh . . . cripes! Boss?! We’ve got to get out of here!”
“We will,” Yakunan said. “After we get our loot and repay this stinkin’ Hero for--!?” His words were cut off when he felt his henchman grab him by the arm roughly.
“Boss, you don’t understand! We have to go! Don’t you see who that is?!?”
Yakunan glared at the masked stooge with impatience and a touch of alarm, before he turned back to see the Hero get into a defensive stance; crouching on both feet and one hand, while the other gripped her arm to stop the gash that was bleeding there.
“You’d better listen to the man, Criminal,” the Hero in black and green said. “Surrender, or you’re going to come off a lot worse than you have been. Kero.”
Yakunan blinked, but it was his goon that said the words, “That’s the Rainy Hero Froppy?! IF she’s here, that means--!!!”
The next moment was a sudden explosion of movement, sound, pressure and pain; something crashed through the side wall, shattering brick, stone and metal as it rocketed across the lobby and into both men. The one thug was thrown far wide, coming to a crashing stop against the marble counter; Yakunan was carried along until he slammed painfully against the far wall. The impact made him lose his weapon, his breath, and his senses for a brief span of time.
When he could see again, there was a light-colored fist gripping the front of his jacket, and a pair of green, steely eyes peering back at him.
“All of you, surrender! I have your Boss!” The strident, confident voice carried across the lobby, and to Yakunan’s dismay, there was the sound of clattering weapons hitting the bank floor. At that, the figure lifted him higher against the wall, glaring at him as he braced his armored feet and said, “You planned to get away with this, but you’ll not harm any innocent lives, not while I am here!”
Yakunan felt his spirit sink like a lead weight, as he recognized the newcomer’s costume. Oh crud! Him! Wincing as he felt the shock of his impact against the wall rip through him, he sagged in the grip of Japan’s Number-One Hero. “Why did . . . did it have to . . . be you!?”
With a smile, Deku said simply, “Because, while I am here, there’s nowhere that villains will be safe from Justice!”
Looking behind him, Yakunan could only watch helplessly as Froppy tapped on a two-way radio badge on her outfit. “Inspector, the hostages are secured. Send in your men!” . . . ~= Outside the Building, Minutes Later . . . . =~ . . As the First Responder technician finished wrapping the wound on her arm, Tsuyu gathered the untucked sleeve of her costume and pulled it to her as the woman folded her arm into a light sling. Well aware of the keen scrutiny of Izuku -- who was standing nearby, dividing his attention between her and the Inspector who was finishing his debriefing with him -- she nodded as the tech gave her some instructions on caring for her injury. Rising from where she was seated on the rear step-up on the ambulance, she nodded and thanked the tech, before moving off towards Izuku as he acknowledged the Inspector’s salute before the Officer moved away.
“Well, is everything okay, Kero?” Tsuyu asked.
Izuku’s eyes followed the Inspector as he rejoined his men, before he sighed and said, “Fortunately none of the hostages were hurt. Four of the gang will require medical attention, but their leader will only need minor first-aid before they take him to prison.” He turned and looked at her, frowning slightly. “You took a big risk, Tsu-chan.”
“It was the only logical thing to do, and you know it. Kero,” Tsuyu said archly. “My Camouflage gave me the means to get inside the bank unseen, help the hostages if needed, and allowed me to tell you where to position yourself for your entrance.”
Izuku nodded, but his eyes lingered on her arm. “I just don’t like you getting hurt.” He pointed his thumb at his chest. “I’m the one who can’t be injured, remember?”
“Yes, dear,” Tsuyu said dryly, even as she looked at him fondly. “A fact you never let anyone ever forget.” She shook her head at his expression, then reached up to grip his arm. “Izu-kun, we run risks no matter what we do. It’s part of the job, remember?” She squeezed his arm to soothe his obvious distress, adding, “Besides . . . if you’re that worried, I can always let you kiss it to make me feel better, later. Kero?”.
Izuku gave her a half smile, before shaking his head with disbelief, only to smile more genuinely as he placed one arm around her waist to pull her close. “Promises, promises,” he muttered.
“Since when have you known me to break a promise?” Tsuyu asked, clearly teasing him. She smiled, then sighed as she looked around. “They’re wrapping things up here, Kero. The money’s safe, as are the hostages. I’d say our work here is done.”
“So it is.” Izuku hugged her tighter, before he nodded in the direction of the far side of the city block. “We should get back. We’re missing dinner, and Mom will clearly have seen this whole incident on the news.”
“That’s true,” Tsuyu said. “She’ll most likely raise a fuss over me getting hurt.” She nudged him with her hip, saying, “As if I don’t already have one Midoriya worried for me. Kero.”.
Izuku chuckled, before letting go of her to step behind her. “One is certainly enough, love. Especially since it’s me.” With that, he scooped her up into his arms, waiting until she got a grip with her good arm before asking, “Ready to go?”.
Tsuyu smiled and kissed his cheek. “One-way on the Deku Express! Kero!”
With that, Izuku grinned and let the power of his Quirk flow through his body. Tensing his legs, he sprang off the ground in a burst of One-for-All and soared off into the late afternoon sky . . . . . . . ~= Fin =~
32 notes · View notes
globalmediacampaign · 5 years ago
Text
MySQL: Import CSV, not using LOAD DATA
All over the Internet people are having trouble getting LOAD DATA and LOAD DATA LOCAL to work. Frankly, do not use them, and especially not the LOCAL variant. They are insecure, and even if you get them to work, they are limited and unlikely to do what you want. Write a small data load program as shown below.

Not using LOAD DATA LOCAL

The fine manual says: The LOCAL version of LOAD DATA has two potential security issues:

Because LOAD DATA LOCAL is an SQL statement, parsing occurs on the server side, and transfer of the file from the client host to the server host is initiated by the MySQL server, which tells the client the file named in the statement. In theory, a patched server could tell the client program to transfer a file of the server's choosing rather than the file named in the statement. Such a server could access any file on the client host to which the client user has read access. (A patched server could in fact reply with a file-transfer request to any statement, not just LOAD DATA LOCAL, so a more fundamental issue is that clients should not connect to untrusted servers.)

In a Web environment where the clients are connecting from a Web server, a user could use LOAD DATA LOCAL to read any files that the Web server process has read access to (assuming that a user could run any statement against the SQL server). In this environment, the client with respect to the MySQL server actually is the Web server, not a remote program being run by users who connect to the Web server.

The second issue in reality means that if the web server has a suitable SQL injection vulnerability, the attacker may use that to read any file the web server has access to, bouncing this through the database server. In short, never use (or even enable) LOAD DATA LOCAL.

local_infile is disabled in the server config, and you should keep it that way.

Client libraries are by default compiled with ENABLED_LOCAL_INFILE set to off. It can still be enabled using a call to the mysql_options() C-API, but never do that.

MySQL 8.0.21+ places additional restrictions on this, to prevent you from being stupid (that is, actually enabling this anywhere).

Not using LOAD DATA

The LOAD DATA variant of the command assumes that you place a file on the database server, into a directory in the file system of the server, and load it from there. In the age of "MySQL as a service" this is inconvenient to impossible, so forget about this option, too.

If you were able to place files onto the system where your mysqld lives:

your user needs to have FILE as a privilege, a global privilege (GRANT FILE ON *.* TO ...)

the server variable secure_file_priv needs to be set to a directory name, and that directory needs to be world-readable

LOAD DATA and SELECT INTO OUTFILE work only with filenames below this directory.

Setting this variable requires a server restart; this is not a dynamic variable (on purpose). Note that the variable can be NULL (this is secure in the sense that LOAD DATA is disabled) or empty (this is insecure in that there are no restrictions).

There is nothing preventing you from setting the variable to /var/lib/mysql or other dumb locations which would expose vital system files to load and save operations. Do not do this. Also, a location such as /tmp or any other world-writeable directory would be dumb: use a dedicated directory that is writeable by the import user only, and make sure that it is world-readable in order to make the command work. Better: do not use this command at all (and set secure_file_priv to NULL).
Using data dump and load programs instead

We spoke about dumping a schema into CSV files in Export the entire database to CSV already. To complete the discussion we need to provide a way to do the inverse and load data from a CSV file into a table. The full code is in load.py.

The main idea is to open a .csv file with csv.reader, and then iterate over the rows. For each row, we execute an INSERT statement, and every few rows we also COMMIT.

In terms of dependencies, we rely on MySQLdb and csv:

```python
import MySQLdb
import csv
```

We need to know the name of a table, and the column names of that table (in the order in which they appear). We should also make sure we can change the delimiter and quoting character used by the CSV, and make the commit interval variable. Finally, we need to be able to connect to the database.

```python
# table to load into
table = "data"

# column names to load into
columns = [
    "id",
    "d",
    "e",
]

# formatting options
delimiter = ","
quotechar = '"'

# commit every commit_interval lines
commit_interval = 1000

# connect to database, set mysql_use_results mode for streaming
db_config = dict(
    host="localhost",
    user="kris",
    passwd="geheim",
    db="kris",
)
```

From this, we can build a database connection and an INSERT statement, using the table name and column names:

```python
db = MySQLdb.connect(**db_config)

# build a proper insert command
cmd = f"insert into {table} ( "
cmd += ", ".join(columns)
cmd += ") values ("
cmd += "%s," * len(columns)
cmd = cmd[:-1] + ")"
print(f"cmd = {cmd}")
```

The actual code is then rather simple: open the CSV file, named after the table, and create a csv.reader(). Using this, we iterate over the rows. For each row, we execute the insert statement. Every commit_interval rows we commit, and for good measure we also commit after finishing, to make sure any remaining rows also get written out.

```python
with open(f"{table}.csv", "r") as csvfile:
    reader = csv.reader(csvfile, delimiter=delimiter, quotechar=quotechar)
    c = db.cursor()
    counter = 0

    # insert the rows as we read them
    for row in reader:
        c.execute(cmd, row)

        # every commit_interval rows we issue a commit
        counter += 1
        if (counter % commit_interval) == 0:
            db.commit()

    # final commit for the remainder
    db.commit()
```

And that is it. That's all the code. No FILE privilege, no special permissions besides insert_priv into the target table. No config in the database. No server restart to set up the permissions. And using Python's multiprocessing, you could make it load multiple tables in parallel or chunk a very large table and load that in parallel - assuming you have database hardware that could profit from any of this.

In any case - this is simpler, more secure and less privileged than any of the broken LOAD DATA variants. Don't use them, write a loader program.

Let's run it.
First we generate some data, using the previous example from the partitions tutorial:

```
(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py setup-tables
(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py start-processing
create p2 reason: not enough partitions
cmd = alter table data add partition ( partition p2 values less than ( 20000))
create p3 reason: not enough partitions
cmd = alter table data add partition ( partition p3 values less than ( 30000))
create p4 reason: not enough partitions
cmd = alter table data add partition ( partition p4 values less than ( 40000))
create p5 reason: not enough partitions
cmd = alter table data add partition ( partition p5 values less than ( 50000))
create p6 reason: not enough empty partitions
cmd = alter table data add partition ( partition p6 values less than ( 60000))
counter = 1000
counter = 2000
counter = 3000
counter = 4000
^CError in atexit._run_exitfuncs:
...
```

We then dump the data, truncate the table, and reload the data. We count the rows to be sure we get all of them back.

```
(venv) kris@server:~/Python/mysql$ mysql-csv/dump.py
table = data
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'truncate table data'
mysql: [Warning] Using a password on the command line interface can be insecure.
(venv) kris@server:~/Python/mysql$ mysql-csv/load.py
cmd = insert into data ( id, d, e) values (%s,%s,%s)
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+
```

https://isotopp.github.io/2020/09/28/mysql-import-csv-not-using-load-data.html
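To follow up on the multiprocessing remark in the post above, here is a rough sketch of loading several tables in parallel. It assumes the loader has been wrapped in a load_table() function; that wrapper is an invention for illustration, not part of the original load.py.

```python
from multiprocessing import Pool

# Hypothetical wrapper around the loader shown above: it would connect, build
# the INSERT statement for one table, and stream that table's CSV file in.
def load_table(job):
    table, columns = job
    ...  # same logic as load.py, parameterised by table and columns

jobs = [
    ("data", ["id", "d", "e"]),
    ("users", ["id", "name", "email"]),  # invented second table
]

if __name__ == "__main__":
    # One worker per table; each worker opens its own database connection.
    with Pool(processes=len(jobs)) as pool:
        pool.map(load_table, jobs)
```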
0 notes
annabethinwonderland · 6 years ago
Text
The Mystique of the Double Rifle - Part Two
In the mid 1980's I began searching for a reasonable twofold rifle in a gauge that would be appropriate for major game. Other than having the option to convey the right measure of vitality on target, it must be such a gauge, that stacked cartridges, cartridge cases, and shots would be promptly accessible. To buy a rifle in 40 bore, for example, would fuel the issue of getting ammo and stacking parts.
I would have jumped at the chance to have a rifle my preferred gauge, 30/06, yet rifles loaded in this bore are genuinely rare. Besides, extraction issues can be experienced in rimless gauges. It wasn't well before I experienced a next to each other twofold rifle in 9.3 x 74R bore. I thought about this cartridge and it settled on my choice to buy the rifle that a lot simpler. It a lot of gag vitality for major game and the parts territory promptly accessible.
Most importantly, let me talk about the 9.3 x 74R bore. In Europe they generally signify a gauge by the breadth of the shot, the length of the cartridge case and after that an addition which indicates whether the case is rimmed or rimless. Hence this cartridge has a 9.3 mm (.366) distance across shot and cartridge case is 74 mm (3 inches) in length. The "R" implies that the case has an edge on the cartridge case for in extraction. The case likewise has a slight decrease which additionally helps extraction. Extraction is dependably a worry in a twofold rifle as the cartridge case isn't turned by a jolt for essential extraction. The case is hauled straight back and out" This cartridge is broadly utilized in Europe and today one hears increasingly more about it in the United States. Truth be told, some U.S. organizations currently make the shots for reloading and others convey the ammo and cartridge cases.
The rifle was made by the organization Richard Fischer Jr. in Suhl Germany Suhl (Thuringia State) ins the old arms checking focal point of Germany and still delivers fine guns for those with a preference for quality and excellence. The rifle was sealed February 1931 and consequently it was made some time before that date.
When acquiring a twofold rifle, one must check the data accessible on the "pads" Remove the barrels and turn them over and take a gander at the imprints situated on the pads legitimately under the loads. The bore will be stepped at this area, mine is stepped 9.3 mm/74.5 It is stepped with a N for nitro sealed or just nitro cellulose, smokeless powder. The shot weight utilized in managing the barrets gauged 18 grams which is in the scope of 285 grams. A container 9.3 x 74R cartridges that were produced by DWM in Germany demonstrates a 19 gram shot with3.85 grams of smokeless powder.
A U with a crown above it implies that the gun has had the last evidence. The evidence mark was utilized on German guns preceding 1939 when new verification law was established. A stamp, st m G demonstrates that the barrets were sealed for rifled barrels with a steel jacketed shot. A G with a crown above it implies that it was sealed for a gun with rifled barrels. An E with a crown above is likewise stepped on the pads which demonstrates that the rifle was sealed for express rifle barrels. Finatty an adapted bird with spreading wings demonstrates that a proof was completed on the incomplete barrels. Clearly they didn't wand to go thought the last completing and controlling just out that the barrel(s) had a basic blemish in them.
To have the option to find out what the majority of the various evidences mean for the various nations engaged with the rifles fabricate, you should have a book on confirmation marks which is accessible and is recorded toward the finish of this article.
The reasons that I welcome this rifle, when I go chasing, are numerous and differed. As a matter of first importance, on the off chance that I am exploring into the mountains the rifle can be separated into there isolated segments. The rifle is likewise light when contrasted and a significant number of the jolt activity rifles loaded for huge magnum. U.S. gauges. At the point when the rifle is collected for use it is very much adjusted and simple to convey in unpleasant territory. Because of the way that there is no long activity, as found on jolt activity rifles, the rifle can have a 26-inch barrel and still be shorter than a practically identical jolt activity rifle. I likewise have two barrels available to me that fire incredible cartridges. The primary barrel ought to achieve the job that needs to be done, yet on the off chance that not, at that point the second barrel is promptly accessible by just putting your finger on the second trigger. You don't need to move a jolt or work the slide as on generally utilized American chasing rifles. Obviously one can contend that self-loader chasing rifles are accessible where you can send various adjusts down range by only crushing the trigger. That is valid, yet I am not an advocate of "splash and supplicate" marksmanship. This is particularly evident when one considers the quantity of seekers who cross slope and dale amid the chasing season. Individuals can get injured or executed by silly shot after an escaping game creature. I feel that in the event that you haven't packed away the creature in a couple of shots, the time has come to stopped discharging and search for another chance to sack your deer or elk!
This specific twofold rifle does not have programmed ejectors just like the cases generally pairs. Programmed ejectors will expand the cost of the rifle for a certain something. The other is that with risky game it is felt that the ping of cartridge cases catapulting out of the rifle will draw in the consideration of an injured, hazardous creature. On the off chance that you are reloading the ammo, it likewise enables you to put terminated cartridge cases in your pocket instead of burrowing through the snow searching for them zoomtarget.com
At the point when this twofold is turned over, there is a snare entryway at the toe of the butt stock which houses four cartridges. This you generally have save ammo with you if need be I am not one that goes into the backwoods with a case or two or ammo as I am chasing and going on look for and wreck mission. In the event that you can't sack your creature with about six rounds or less, the time has come to invest more energy in the range or gbe increasingly specific while picking your shots.
As I got more established, I included a 23/4X degree to the rail worked in the middle of the barrels. This guides me in locating in the rifle just as filtering the brush or trees to check whether a creature is legitimate... before pulling the trigger(s). There is no motivation to have a 3 x 9 variable extension introduced on the rifle as a twofold rifle a 100 to 150 yard rifle. Besides I am chasing and stalking the creature to get inside a better than average range which is a piece of the chasing.
If I simply wanted to put meat on the table, I would use a heavy-caliber sniper rifle and shoot at targets at 500 to 1000 yards. That is not my idea of hunting and I will stick to the 100 to 150 yard shots. Remember, the thrill of hunting is summed up in the first four letters of the word hunting... Hunt!!
At long last we come to the regulation of the rifle. It had been regulated at the factory, but with the modern components that make up newly manufactured cartridges, you are required to find out which lot of ammunition or which manufacturer gives you the best results in your particular rifle. I have fired factory loads and reloads, with cast bullets first. It is safe to say that my experience with cast bullets left groups at 50 yards that resembled the path of a swarm of Africanized bees. In short, my experience with these has been dismal at best. Accuracy such as this had not been seen since the Napoleonic Wars!!
Jacketed bullets have given excellent to fair results. One needs to vary the powder charge, the powder type and of course the bullet weight and type. The number of permutations and combinations can be great indeed. After many rounds fired, I came upon a winning combination. I use 286-grain loadings of RWS and Norma ammunition, with an RWS loading in one barrel and a Norma factory loading in the other. This gives fine results; so simple, yet it took so long to find this ideal combination.
When it comes to reloading, I have found that you cannot repeatedly reload the cases as many times as you would with a bolt-action rifle. I use my cases several times and then move on to new cases. Repeated use can result in case head separations, which is not advisable.
When I hunt with reloaded ammunition, I use virgin brass so I do not have a problem in the field. I might add that when a head separation does occur, I need no tool other than a cleaning rod with a .38-caliber wire brush. The tapered case, which aids extraction, also lends itself to removing a headless case!!
If you have not tried a double rifle for hunting, I suggest that you look into it. This is not a rifle for the average "meat hunter" who goes into the woods just to put meat on the table. It is comparable to fly fishing, which is likewise for a particular kind of fisherman. If you just want fish, simply get a cheap casting rod and reel. However, there are a few of us who appreciate the best in fly rods and rifles to make for the best experience in field and stream. In the case of hunting, a double rifle can be a marriage of wood and metal for the hunter and gun aficionado.
viditure · 5 years ago
Text
Data Factory: the advantages of a denormalised Apache Parquet storage
It is important for our customers to know that they will always have access to their data when they need it. Whether they want to display a dashboard, drill down or extract data via the API, they need to be able to request a large volume of data in a powerful way to achieve their objectives, and therefore bring value to their companies.  Behind the scenes, AT Internet’s engineers are working to set up data processing and storage systems to make this possible. They use the various tools at their disposal for this purpose.  This article is the first of a series that will introduce you to some of the data storage technologies we use at AT Internet. It focuses on Apache Parquet, a file storage format designed for big data.
AT Internet started a major overhaul of its processing chain several years ago. Some fundamental aspects of this redesign have recently become more visible with tools such as Data Query 3 and the new Data Model. Internally, the first steps of this transition started a few years ago when the company began to redesign how it processes and stores data from the ground up, taking advantage of the potential that the Big Data ecosystem has to offer.  One of the cornerstones of this new approach is column-oriented storage, which makes it possible to denormalise in an efficient way. This article will explain what it is and what the benefits are. But before I get into these explanations, I’ll describe how data is traditionally stored, especially through normalization and line-oriented storage. 
The traditional approach to data storage: normalisation 
Imagine we want to create a database with a certain amount of information related to films. 
Movie Id | Movie name             | Release year | Author name    | Author country
1        | The Matrix             | 1999         | The Wachowskis | USA
2        | The Matrix Reloaded    | 2003         | The Wachowskis | USA
3        | The Matrix Revolutions | 2003         | The Wachowskis | USA
As you can see, in the storage model above, the author’s information is duplicated on each film, despite the fact that in the case in question, this data is exactly the same for each line.  In the real world a lot of data is duplicated/shared in databases. For example, we could want to store the actors who played in each film; or we could go to a deeper level and store information about the countries the actors or directors come from… or both!  We intuitively feel that storing the data by duplicating it can quickly become a problem if a technical solution is not found.  Traditionally, the ‘right’/standard way to store this type of data is to separate the different types of data into different tables, in order to reduce or even eliminate duplicates, and this technique has long been the de facto way to proceed in the industry. 
Movie Id | Movie name             | Release year | Author id
1        | The Matrix             | 1999         | 1
2        | The Matrix Reloaded    | 2003         | 1
3        | The Matrix Revolutions | 2003         | 1
Author id | name           | country
1         | The Wachowskis | USA
In the current (and time-honoured) processing chain, i.e. the one used to execute your Data Query requests, most data is stored in this way, in what is called normal form. This approach has the advantage of reducing duplication and therefore reducing the amount of data that needs to be stored. In many use cases, storing data in this way is the most natural, economical and efficient choice, because database management systems (DBMS) have mechanisms to make queries on this type of schema efficient. How? Through techniques such as the calculation and storage of statistics internal to the database engine, or the optimisation of queries. In the world of analytics, where OLAP-type queries are mainly carried out, this paradigm is beginning to pose some problems with the explosion in the quantities of data collected. The most important of these problems occurs when you try to cross several large tables. In technical jargon, this is called performing a join operation between several tables. At the scales of data processed and requested by AT Internet, this joining step can be very costly and complex to perform effectively. In the event that the request is not properly optimised by the DBMS engine, the processing cost may be such that it ultimately results in a request that is too slow for the end customer. 
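To make that cost concrete, here is a minimal sketch in Python using pandas; the table and column names are simply taken from the movie example above, not from AT Internet's actual schema. Rebuilding the flat view from the two normalised tables requires a join at query time, which is exactly the step that becomes expensive at scale.

import pandas as pd

movies = pd.DataFrame({
    "movie_id": [1, 2, 3],
    "movie_name": ["The Matrix", "The Matrix Reloaded", "The Matrix Revolutions"],
    "release_year": [1999, 2003, 2003],
    "author_id": [1, 1, 1],
})
authors = pd.DataFrame({
    "author_id": [1],
    "name": ["The Wachowskis"],
    "country": ["USA"],
})

# Reassembling the denormalised view means paying for a join on every analytical query.
denormalised = movies.merge(authors, on="author_id")
print(denormalised)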
Another way to do things: denormalise the data to get performance at query time 
Data Engineer 1 – Why not just store everything in a denormalised way? This way we won't have to pay the price of these damn joins every time the customer wants to request their data!
Data Engineer 2 – Are you serious? Do you have any idea how much duplication this would create?
Data Engineer 1 – …
Data Engineer 2 – This is madness!
Data Engineer 1 – Madness? … THIS IS DATA!
The rationale for adopting such an approach is quite simple. Avoiding the need to make joins means simpler queries that are easier to optimise, and therefore better response times for the end user at query time. Empirically, we notice that the cost of storing the data is more than compensated for by the query performance obtained, all the more so as we can deploy a number of techniques to relieve the burden of redundancy. 
The column format 
One way to compensate for the problems of data duplication by returning to denormalised storage is by using what is called a column-oriented format. 
In a traditional database, each record is stored in one block. The blocks follow each other, but all the data for each record is in an adjoining space: 
1;The Matrix;1999;The Wachowskis  2;The Matrix Reloaded;2003;The Wachowskis  3;The Matrix Revolutions;2003;The Wachowskis 
One of the problems associated with storing in this form is that you have to read the whole line from the disk even if you want to load only part of the data. For example, we are obliged to load the titles from the disk even if the only information we want to retrieve is the year of release of each film.  Reading a relatively small set of all available columns is a prototypical example of the type of requests executed by our customers in an analytical context.  In a column-oriented format, each column is stored separately: 
The Matrix:1;The Matrix Reloaded:2;The Matrix Revolutions:3  1999:1;2003:2;2003:3  The Wachowskis:1;The Wachowskis:2;The Wachowskis:3 
At first glance, this may not seem like a big difference, but in reality, this alteration changes the constraints so much that it is a real paradigm shift.  One of the immediate advantages is that it is now much easier to read only the data in certain columns. This implies fewer disk I/Os, and this is crucial because disk I/Os are one of the first factors limiting performance.  A somewhat less obvious advantage of this paradigm shift is that since the data in the same column are generally relatively homogeneous, it allows it to be compressed aggressively by applying appropriate compression algorithms, which can even be chosen on the fly and on a case-by-case basis.  To give an overview, by using our previous example, the date and author columns could be stored as follows: 
1999:1;2003:2,3  The Wachowskis:* 
Many optimisations are possible, but these are only mentioned here as examples to give you an idea of the possibilities offered by this type of format.  Most DBMS on the market offer options to store data in column format. Microsoft SQL Server, the database traditionally used by AT Internet, for example, is able to manage the column format and this feature has been used for several years now in our databases. Nevertheless, the data in these databases remains normalized, unlike in the New Data Factory. 
A few words about Apache Parquet 
The official Apache Parquet site defines the format as:  A column storage format available for any product in the Hadoop ecosystem, regardless of the choice of processing framework, data model or programming language. 
This technology is one of the most popular implementations of a column-oriented file format. One of the aspects I would like to stress is that this is a file format and not a database management system. This is an important distinction, especially because it implies that, being made up of simple files, a Parquet data lake can be stored wherever you want, whether in your SAN storage bay, in a datacentre, or on a cloud computing server. Adopting such a storage format thus makes it possible to start on a sound basis compatible with the principle of decoupling compute and storage, one of the prevailing principles in big data storage and which we try to follow as data engineers at AT Internet. 
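As a small illustration of that decoupling, the same reading code works whether the files sit on a local disk or in object storage. This is only a sketch with pyarrow: the paths are invented, and reading from S3 assumes pyarrow was built with S3 support.

import pyarrow.parquet as pq

# The same call reads Parquet files from a local directory...
local_table = pq.read_table("/data/lake/movies.parquet")
# ...or from object storage, since the data lake is just a set of files.
remote_table = pq.read_table("s3://my-bucket/lake/movies.parquet")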
We will not have time in this article to cover all the features of this Apache Parquet format, but here is a brief summary of its most useful features: 
Parquet is able to natively manage nested data structures 
Empty values (NULL) are managed natively and cost almost nothing in storage 
The parquet files are self-describing (The schema is contained in each file) 
The engines managing the parquet format are able to dynamically discover the schema of a Parquet data lake (but this discovery may take some time) 
This format allows predicate push-down natively by eliminating row-groups. This means that it is possible to load from the disk only the part of the data that really interests us when filtering a parquet file (see the sketch after this list). 
It is strongly supported in the various tools of the Big Data ecosystem 
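To make a few of these properties tangible, here is a minimal sketch with pyarrow; the file and column names are illustrative. It writes a small table, reads back only selected columns, prints the schema embedded in the file, and uses a filter that can be pushed down to skip row groups.

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "movie_id": [1, 2, 3],
    "movie_name": ["The Matrix", "The Matrix Reloaded", "The Matrix Revolutions"],
    "release_year": [1999, 2003, 2003],
})
pq.write_table(table, "movies.parquet")

# Column pruning: only the bytes of 'release_year' are read from disk.
years = pq.read_table("movies.parquet", columns=["release_year"])

# Self-describing: the schema travels inside the file.
print(pq.ParquetFile("movies.parquet").schema)

# Predicate push-down: row groups whose statistics cannot match the filter are skipped.
recent = pq.read_table("movies.parquet", filters=[("release_year", ">=", 2003)])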
A final important aspect to mention with regard to this format is partitioning. In a Parquet data lake, the files are generally stored in directories corresponding to the values of one of the columns of the data. This is called partitioning. If we organise our files by date, we can have for example: 
|- date=2019-01-01/data.parquet
|- date=2019-01-02/data.parquet
|- ...
Partitioning by date or time is often a natural and efficient choice for data collected as an uninterrupted flow. Partitioning allows you to query the data efficiently by directly targeting only the files likely to contain the data you are requesting. 
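Here is a hedged sketch of what that looks like with pyarrow; the directory and column names are invented for the example. Writing with partition_cols produces exactly the date=... directories shown above, and a filter on the partition column only opens the matching directories.

import pyarrow as pa
import pyarrow.parquet as pq

events = pa.table({
    "date": ["2019-01-01", "2019-01-01", "2019-01-02"],
    "visitor_id": [101, 102, 103],
})

# Creates events_lake/date=2019-01-01/... and events_lake/date=2019-01-02/...
pq.write_to_dataset(events, root_path="events_lake", partition_cols=["date"])

# Partition pruning: only the files under date=2019-01-02/ are opened.
day = pq.read_table("events_lake", filters=[("date", "=", "2019-01-02")])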
Not the ideal remedy: pitfalls… and solutions 
In addition to the positive aspects of this paradigm shift, new constraints and difficulties emerge, for which solutions have to be found. Here are some of these issues, as well as ways to limit their impact. 
Updating data 
Parquet does not offer a native way to update only a few lines in a file. To do this, it is necessary to completely re-write the file. Fortunately, in most Big Data workflows, the data is "Write once, Read many times", which mitigates the problem in this context. A good partitioning of the data also improves the situation because it allows you to update one or more partitions independently of the rest of the Data Lake. In the case where it is known in advance that some data will change frequently, one possible solution is to re-normalise it. 
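A rough sketch of that partition-level update follows; the paths reuse the invented events_lake layout from above, and on its own this is not transactional.

import shutil
import pyarrow as pa
import pyarrow.parquet as pq

# Rewrite a single day: remove the old partition directory, then write the
# corrected rows back. The rest of the data lake is left untouched.
shutil.rmtree("events_lake/date=2019-01-02", ignore_errors=True)

corrected = pa.table({
    "date": ["2019-01-02"],
    "visitor_id": [203],
})
pq.write_to_dataset(corrected, root_path="events_lake", partition_cols=["date"])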
Variable geometry properties 
Depending on how the files are written and the type of technology where they are stored (Hadoop cluster, S3, local file system), the data lake does not have exactly the same properties, especially in terms of the atomicity of operations.  As a data engineer, it is therefore important to know and master the properties of the file system hosting your data lake so as not to make any misinterpretation. 
Transaction management 
Transaction management must be done manually: if several processes write and read the same data, the synchronisation of these processes must be done manually in order not to read the data in a corrupted state.  Data processing tools such as Apache Spark natively have connectors that allow transactional writing for most file systems. In cases where it would be impossible to use these features, it is still possible to implement a distributed lock system to regulate read and write access to the Data Lake. 
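Where the built-in connectors cannot be used, even a very simple lock can keep readers from seeing a half-written partition. This is only a sketch: it assumes the third-party filelock package and a lock file on a filesystem shared by all processes, whereas a production setup would more likely rely on ZooKeeper, a database lock, or a table format that handles this for you.

from filelock import FileLock
import pyarrow as pa
import pyarrow.parquet as pq

lock = FileLock("events_lake/.write.lock")

# Every process that writes to or reads from the lake agrees to take this lock first.
with lock:
    rows = pa.table({"date": ["2019-01-03"], "visitor_id": [301]})
    pq.write_to_dataset(rows, root_path="events_lake", partition_cols=["date"])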
Sorting data within a file 
The distribution of the data inside a partition, and its sorting within the same Parquet file, is important and strongly conditions the size of the written files. There is no miracle solution for this point. It is very important to know the data being processed in order to choose the right sorting to apply. One of the best approaches is to test different partitioning keys, the aim being to group similar data into the same Parquet row groups. 
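A minimal illustration of the idea with pandas; the sort columns and row-group size are arbitrary choices for the example. Sorting before writing groups similar values into the same row groups, which helps both compression and row-group elimination.

import pandas as pd

df = pd.read_parquet("events_lake")

# Sorted data gives tight min/max statistics per row group, so filters can skip
# more of the file; extra keyword arguments are passed through to pyarrow.
df.sort_values(["visitor_id", "date"]).to_parquet(
    "events_sorted.parquet", row_group_size=128_000
)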
Taking a step back 
Many of the problems mentioned above are some form of ACID loss, and most of the technologies on which contemporary data lakes are based suffer from it. As these problems are very general, the big data community is working on solutions, many of which are beginning to emerge; honourable mentions include Delta Lake, ACID ORC and Iceberg. These technologies are promising and should make it possible in the medium term to avoid having to worry about the considerations mentioned above. 
In conclusion 
We have seen in this article what database normalization is, and learned about the row and column storage formats. We also saw why in big data workflows, the column format is preferred, why it opens the door to data recording in a denormalised format, and this led us to introduce the Apache Parquet file format.  As the ecosystem is constantly evolving, and as data engineers in a company that processes as much data as AT Internet, we have a responsibility to stay current and continue to adapt processing chains using the most efficient tools in order to bring the best value to our customers, and we hope to be able to tell you more about these tools in future articles.  If you found this article interesting, do not hesitate to consult the AT Internet website to learn more about our solution. 
Article Data Factory: the advantages of a denormalised Apache Parquet storage first appeared on Digital Analytics Blog.
tak4hir0 · 6 years ago
Link
Scaling Data Access With App Layer Cache

Applications like CRM are always active; requests come from sales users, service users, APIs, report executions, and community users. All this access keeps the system busy and they are all business transactions, which means the data is critical and the database is a precious resource. The usual CRM traffic pattern is not a concern; what would impact application scale are use cases where users repeatedly try to reference the same data to check if records have changed or if new data is available. These are users trying to get the latest lead or the most recent case. Developers, admins, and architects alike need to watch out for such use cases. Let's explore why this is important.

Data, use cases, and impact

Impact to the health of an application comes from these types of users, use cases, and data:
Thousands of sales or service agents wanting to retrieve the latest lead or case.
The type of request: polling, refreshing or reloading of pages, resulting in millions of requests in a short amount of time.
Data is dynamic, but between changes or new data, the database is queried with high frequency to check for changes.

And the impact:
Database CPU spikes up during these heavy usage times
Reduced capacity for other applications
Performance slows down for applications and pages
Dashboard and report executions time out
User frustration and complaints

All this leads to a lack of scalability for the application and a potential slowdown of business growth and increased costs. There are many solutions available; however, business prefers solutions that are quick to implement and cost-effective. Caching is potentially a solution in such situations.

Let's talk about caching

Caching has been around for a while. It is typically used to speed up webpages and protect backend systems from too many user requests. These days some sort of cache is available in all layers of an application stack. These caches help tremendously with scaling and performance; however, they are not sufficient for high impact use cases. We need a new caching layer which caches data for short periods and intercepts requests from the client.

Intro to application layer caching

To address the impact use cases we need to introduce a new cache layer called the application layer cache. This cache is connected to the application server and is accessible from the application code. This new cache layer has good features. It's an in-memory cache for fast retrievals, and is implemented using Redis, an open source caching software. Data is stored in key-value pairs. Data structures such as lists, sets and hashsets can also be used along with primitive datatypes such as numbers and strings. Application layer cache also supports partitioning. At Salesforce, Lightning Platform Cache is the feature we provide to enable this application layer caching for applications on the platform.

Using partitions and instances

Partitioning distributes data among nodes of the cache. This enables different partitions to be allocated to different applications or use cases. Each partition acts as a unique namespace for all keys. A partition can have two types of cache: a common cache called an org cache, which is accessible to all users of the application, and a session cache, which stores data private to a user and is attached to each logged-in user session. The session cache is deleted at the end of a session.

On to solution

Now that we are familiar with the cache and its internals, let's see how it can be implemented for some high impact use cases. 
Use case

Let's take one use case and see how the cache can help. In this use case, agents take reservations from customers. The process involves giving offers based on loyalty programs. The offer is written to the database from an external API call. To check if an offer exists, the agent's application constantly polls the server. There are thousands of agents doing the same and this causes the database CPU to spike. We will refer to this use case throughout the rest of this document as the offers use case.

Implementing the app layer cache

The cache solution is as shown. Here the session cache is used, as each offer is unique to an agent based on the reservation done by that agent. The business flow:
1. An offer record is created via an API call.
2. Database trigger fires on save or update.
3. Record is put in the app layer cache (session cache) by the trigger code.
4. Record is retrieved by the client from the session cache.
5. Cached record is shown to the client.

This pattern is called the write-through pattern. In this pattern, the cache is written immediately when the data is created or updated in the database. This implementation ensures the client requests never go to the database. All requests are served by the app layer cache, therefore protecting the database from spiking up. Here's a code sample to read data from the cache for the offers use case:

public Offer__c getCustomerOffer(Id customerId) {
    Offer__c result;
    Cache.SessionPartition cachePartition = Cache.Session.getPartition('local.customers');
    if (cachePartition.contains(customerId)) {
        // Cache hit: cast the cached value back to the Offer__c type.
        result = (Offer__c) cachePartition.get(customerId);
    } else {
        result = null; // Offer not available yet.
    }
    return result;
}

Challenges

Race conditions

Not all data can be cached. Data that is shared among multiple users and is constantly updated can lead to race conditions. Knock knock! Race Condition! Who's there? A race condition happens when multiple threads update the same record concurrently. Each of them writes into the cache the data it has. The last update would be the "winner"; however, the winner might not have the latest update to the record, and this would result in the user getting old data.

Work around

Race conditions can be worked around using a lazy load pattern. In this pattern, triggers are not employed to update the cache. Instead, one user is assigned to read data directly from the database, and the data is then written to the cache. Since the data is visible to all users, the rest of the users read from the cache. There is a slight overhead of that one user making frequent database requests. However, in situations where a large volume of users concurrently use the system, allowing one user to access the database directly is less of an impact.

Cache miss

A cache miss is when the application expects data to be in the cache but can't find it there. This causes the application to go to the database to get that data. Too many cache misses will reduce the effectiveness of the cache. Cache misses happen because data in the cache is non-durable; cached data may be evicted when space is short. User code should handle cache misses. As shown in the code snippet, the cache is checked to see if a key exists; or else, a fallback condition executes and retrieves data from the database and caches it for subsequent requests. Platform Cache provides the CacheBuilder interface which helps with this use case. Link provided below.

// A coding best practice using the Salesforce Platform Cache CacheBuilder interface,
// which makes it easy to handle cache misses.
// Retrieve the logged-in user information from the cache.
public class userDetails {
    Id userid = UserInfo.getUserId();

    public class UserInfoCache implements Cache.CacheBuilder { // Inner class
        public Object doLoad(String userid) { // Implement doLoad()
            User u = [SELECT Id, IsActive, Username FROM User WHERE Id = :userid];
            return u;
        }
    }
}

To retrieve a cached value:

String userId = UserInfo.getUserId();
// Returns the cached value if it exists, else executes the doLoad method to populate the cache.
User loggedIn = (User) Cache.Org.get(userDetails.UserInfoCache.class, userId);

Benefits

This table shows results from monitoring the app layer cache implementation for the offers use case. Prior to the implementation of the cache, the database was the target of requests from agents checking for new offers. This caused high usage of database CPU. Post implementation, we see a huge reduction.

Before implementation:

Total Requests/Day | DB CPU Usage | DB CPU % | Database Queries/day | Avg Response Time
12 Million         | 83 Hours     | 7 ~ 8%   | 12 Million           | 30ms

After app layer cache implementation:

Total Requests/Day | DB CPU Usage | DB CPU % | Database Queries/day | Avg Response Time
11.39 Million      | < 1 minute   | 0%       | 6K                   | 6 ~ 11ms

Based on this data we see the cache has a significant impact, reducing database usage from 83 hours to under 1 minute and also benefiting application performance by reducing the response time by 50 percent. Some of the other benefits include:

Area of impact                            | Benefits
Reduced number of database nodes          | Saves a lot of money
Capacity released for critical operations | Scale business processes
Simplifies data access                    | Switching to cache access is easy
Better performance                        | Cache access is faster

Conclusion

Designing for scale is critical as business grows. Scale in transaction-based apps on the Lightning Platform depends on critical resources like the database. Data can be accessed millions of times before it changes. Application layer caching is a solution to protect databases from such heavy load. Caching needs to be planned, as there can be challenges.

About the author

Anil Jacob is a Lead Software Engineer on the Frontier Scale team at Salesforce. He works on large and complex customer implementations and related scale challenges. His areas of interest are application scale, user experience, UX performance, and application development and business scale. Prior to Salesforce, he was with Intuit, Bea Weblogic, and Wells Fargo Bank.
notsadrobotxyz · 6 years ago
Text
Oracle DBA interview Question with Answer (All in One Doc)
1. General DB Maintenance2. Backup and Recovery3. Flashback Technology4. Dataguard5. Upgration/Migration/Patches6. Performance Tuning7. ASM8. RAC (RAC (Cluster/ASM/Oracle Binaries) Installation Link 9. Linux Operating10. PL/SQLGeneral DB Maintenance Question/Answer:When we run a Trace and Tkprof on a query we see the timing information for three phase?Parse-> Execute-> FetchWhich parameter is used in TNS connect identifier to specify number of concurrent connection request?QUEUESIZEWhat does AFFIRM/NOFFIRM parameter specify?AFFIRM specify redo transport service acknowledgement after writing to standby (SYNC) where as NOFFIRM specify acknowledgement before writing to standby (ASYNC).After upgrade task which script is used to run recompile invalid object?utlrp.sql, utlprpDue to too many cursor presents in library cache caused wait what parameter need to increase?Open_cursor, shared_pool_sizeWhen using Recover database using backup control file?To synchronize datafile to controlfileWhat is the use of CONSISTENT=Y and DIRECT=Y parameter in export?It will take consistent values while taking export of a table. Setting direct=yes, to extract data by reading the data directly, bypasses the SGA, bypassing the SQL command-processing layer (evaluating buffer), so it should be faster. Default value N.What the parameter COMPRESS, SHOW, SQLFILE will do during export?If you are using COMPRESS during import, It will put entire data in a single extent. if you are using SHOW=Y during import, It will read entire dumpfile and confirm backup validity even if you don’t know the formuser of export can use this show=y option with import to check the fromuser.If you are using SQLFILE (which contains all the DDL commands which Import would have executed) parameter with import utility can get the information dumpfile is corrupted or not because this utility will read entire dumpfile export and report the status.Can we import 11g dumpfile into 10g using datapump? If so, is it also  possible between 10g and 9i?Yes we can import from 11g to 10g using VERSION option. This is not possible between 10g and 9i as datapump is not there in 9iWhat does KEEP_MASTER and METRICS parameter of datapump?KEEP_MASTER and METRICS are undocumented parameter of EXPDP/IMPDP. METRICS provides the time it took for processing the objects and KEEP_MASTER prevents the Data Pump Master table from getting deleted after an Export/Import job completion.What happens when we fire SQL statement in Oracle?First it will check the syntax and semantics in library cache, after that it will create execution plan. If already data is in buffer cache it will directly return to the client (soft parse) otherwise it will fetch the data from datafiles and write to the database buffer cache (hard parse) after that it will send server and finally server send to the client.What are between latches and locks?1. A latch management is based on first in first grab whereas lock depends lock order is last come and grap. 2. Lock creating deadlock whereas latches never creating deadlock it is handle by oracle internally. Latches are only related with SGA internal buffer whereas lock related with transaction level. 3. Latches having on two states either WAIT or NOWAIT whereas locks having six different states: DML locks (Table and row level-DBA_DML_LOCKS ), DDL locks (Schema and Structure level –DBA_DDL_LOCKS), DBA_BLOCKERS further categorized many more.What are the differences between LMTS and DMTS? 
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces, the dictionary tables are created on SYSTEM tablespace and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.Difference of Regular and Index organized table?The traditional or regular table is based on heap structure where data are stored in un-ordered format where as in IOT is based on Binary tree structure and data are stored in order format with the help of primary key. The IOT is useful in the situation where accessing is commonly with the primary key use of where clause statement. If IOT is used in select statement without primary key the query performance degrades.What are Table portioning and their use and benefits?Partitioning the big table into different named storage section to improve the performance of query, as the query is accessing only the particular partitioned instead of whole range of big tables. The partitioned is based on partition key. The three partition types are: Range/Hash/List Partition.Apart from table an index can also partitioned using the above partition method either LOCAL or GLOBAL.Range partition:How to deal online redo log file corruption?1. Recover when only one redo log file corrupted?If your database is open and you lost or corrupted your logfile then first try to shutdown your database normally does not shutdown abort. If you lose or corrupted only one redo log file then you need only to open the database with resetlog option. Opening with resetlog option will re-create your online redo log file.RECOVER DATABASE UNTIL CANCEL;  then ALTER DATABASE OPEN RESETLOGS;2. Recover when all the online redo log file corrupted?When you lose all member of redo log group then the step of maintenance depends on group ‘STATUS’ and database status Archivelog/NoArchivelog.If the affected redo log group has a status of INACTIVE then it is no longer required crash recovery then issues either clear logfile or re-create the group manually.ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- you are in archive mode and group still not archivedALTER DATABASE CLEAR LOGFILE GROUP 3; noarchive mode or group already archivedIf the affected redo log group has a status ACTIVE then it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT, if successful then follow the step inactive if fails then you need to perform incomplete recovery up to the previous log file and open the database with resetlog option.If the affected redo log group is CURRENT then lgwr stops writing and you have to perform incomplete recovery up to the last logfile and open the database with resetlog option and if your database in noarchive then perform the complete recovery with last cold backup.Note: When the online redolog is UNUSED/STALE means it is never written it is newly created logfile.What is the function of shared pool in SGA?The shared pool is most important area of SGA. It control almost all sub area of SGA. The shortage of shared pool may result high library cache reloads and shared pool latch contention error. The two major component of shared pool is library cache and dictionary cache.The library cache contains current SQL execution plan information. 
It also holds PL/SQL procedure and trigger.The dictionary cache holds environmental information which includes referential integrity, table definition, indexing information and other metadata information.Backup & Recovery Question/Answer:Is target database can be catalog database?No recovery catalog cannot be the same as target database because whenever target database having restore and recovery process it must be in mount stage in that period we cannot access catalog information as database is not open.What is the use of large pool, which case you need to set the large pool?You need to set large pool if you are using: MTS (Multi thread server) and RMAN Backups. Large pool prevents RMAN & MTS from competing with other sub system for the same memory (specific allotment for this job). RMAN uses the large pool for backup & restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory. Then there is no use of large pool.How to take User-managed backup in RMAN or How to make use of obsolete backup? By using catalog command: RMAN>CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010;It will search into all file matching the pattern on the destination asks for confirmation to catalog or you can directly change the backup set keep until time using rman command to make obsolete backup usable.RMAN> change backupset 3916 keep until time "to_date('01-MAY-2010','DD-MON-YYYY')" nologs;This is important in the situation where our backup become obsolete due to RMAN retention policy or we have already restored prior to that backup. What is difference between using recovery catalog and control file?When new incarnation happens, the old backup information in control file will be lost where as it will be preserved in recovery catalog .In recovery catalog, we can store scripts. Recovery catalog is central and can have information of many databases. This is the reason we must need to take a fresh backup after new incarnation of control file.What is the benefit of Block Media Recovery and How to do it?Without block media recovery if the single block is corrupted then you must take datafile offline and then restore all backup and archive log thus entire datafile is unavailable until the process is over but incase of block media recovery datafile will be online only the particular block will be unavailable which needs recovery. You can find the details of corrupted block in V$database_Block_Corruption view as well as in alert/trace file.Connect target database with RMAN in Mount phase:RMAN> Recover datafile 8 block 13;RMAN> Recover CORRUPTION_LIST;  --to recover all the corrupted block at a time.In respect of oracle 11g Active Dataguard features (physical standby) where real time query is possible corruption can be performed automatically. The primary database searches for good copies of block on the standby and if they found repair the block with no impact to the query which encounter the corrupt block.By default RMAN first searches the good block in real time physical standby database then flashback logs then full and incremental rman backup.What is Advantage of Datapump over Traditional Export?1. Data pump support parallel concept. It can write multiple dumps instead of single sequential dump.2. Data can be exported from remote database by using database link.3. Consistent export with Flashback_SCN, Flashback_Time supported in datapump.4. 
Has ability to attach/detach from job and able to monitor the job remotely.5. ESTIMATE_ONLY option can be used to estimate disk space requirement before perform the job.6. Explicit DB version can be specified so only supported object can be exported.7. Data can be imported from one DB to another DB without writing into dump file using NETWORK_LINK.8. During impdp we change the target file name, schema, tablespace using: REMAP_Why datapump is faster than traditional Export. What to do to increase datapump performace?Data Pump is block mode, exp is byte mode.Data Pump will do parallel execution.Data Pump uses direct path API and Network link features.Data pump export/import/access file on server rather than client by providing directory structure grant.Data pump is having self-tuning utilities, the tuning parameter BUFFER and RECORDLENGTH no need now.Following initialization parameter must be set to increase data pump performance:· DISK_ASYNCH_IO=TRUE· DB_BLOCK_CHECKING=FALSE· DB_BLOCK_CHECKSUM=FALSEFollowing initialization must be set high to increase datapump parallelism:· PROCESSES· SESSIONS· PARALLEL_MAX_SERVERS· SHARED_POOL_SIZE and UNDO_TABLESPACENote: you must set the reasonable amount of STREAMS_POOL_SIZE as per database size if SGA_MAXSIZE parameter is not set. If SGA_MAXSIZE is set it automatically pickup reasonable amount of size.Flashback Question/AnswerFlashback Archive Features in oracle 11gThe flashback archiving provides extended features of undo based recovery over a year or lifetime as per the retention period and destination size.Limitation or Restriction on flashback Drop features?1. The recyclebin features is only for non-system and locally managed tablespace. 2. When you drop any table all the associated objects related with that table will go to recyclebin and generally same reverse with flashback but sometimes due to space pressure associated index will finished with recyclebin. Flashback cannot able to reverse the referential constraints and Mviews log.3. The table having fine grained auditing active can be protected by recyclebin and partitioned index table are not protected by recyclebin.Limitation or Restriction on flashback Database features?1. Flashback cannot use to repair corrupt or shrink datafiles. If you try to flashback database over the period when drop datafiles happened then it will records only datafile entry into controlfile.2. If controlfile is restored or re-created then you cannot use flashback over the point in time when it is restored or re-created.3. You cannot flashback NOLOGGING operation. If you try to flashback over the point in time when NOLOGGING operation happens results block corruption after the flashback database. Thus it is extremely recommended after NOLOGGING operation perform backup.What are Advantages of flashback database over flashback Table?1. Flashback Database works through all DDL operations, whereas Flashback Table does not work with structural change such as adding/dropping a column, adding/dropping constraints, truncating table. During flashback Table operation A DML exclusive lock associated with that particular table while flashback operation is going on these lock preventing any operation in this table during this period only row is replaced with old row here. 2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table. 3. Flashback Table cannot be used on a standby database.How should I set the database to improve Flashback performance? 
Use a fast file system (ASM) for your flash recovery area, configure enough disk space for the file system that will hold the flash recovery area can enable to set maximum retention target. If the storage system used to hold the flash recovery area does not have non-volatile RAM (ASM), try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance. For large production databases set LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.Performance Tuning Question/Answer:If you are getting complain that database is slow. What should be your first steps to check the DB performance issues?In case of performance related issues as a DBA our first step to check all the session connected to the database to know exactly what the session is doing because sometimes unexpected hits leads to create object locking which slow down the DB performance.The database performance directly related with Network load, Data volume and Running SQL profiling.1.  So check the event which is waiting for long time. If you find object locking kill that session (DML locking only) will solve your issues.To check the user sessions and waiting events use the join query on views: V$session,v$session_wait2.  After locking other major things which affect the database performance is Disk I/O contention (When a session retrieves information from datafiles (on disk) to buffer cache, it has to wait until the disk send the data). This waiting time we need to minimize.We can check these waiting events for the session in terms of db file sequential read (single block read P3=1 usually the result of using index scan) and db file scattered read (multi block read P3 >=2 usually the results of for full table scan) using join query on the view v$system_eventSQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"  2    FROM sys.v_$system_event a, sys.v_$system_event b  3   WHERE a.event = 'db file sequential read'AND b.event = 'db file scattered read';  SEQ READ  SCAT READ---------- ----------       .74        1.6When you find the event is waiting for I/O to complete then you must need to reduce the waiting time to improve the DB performance. To reduce this waiting time you must need to perform SQL tuning to reduce the number of block retrieve by particular SQL statement.How to perform SQL Tuning?1. First of all you need to identify High load SQL statement. You can identify from AWR Report TOP 5 SQL statement (the query taking more CPU and having low execution ratio). Once you decided to tune the particular SQL statement then the first things you have to do to run the Tuning Optimizer. The Tuning optimize will decide: Accessing Method of query, Join Method of query and Join order.2. To examine the particular SQL statement you must need to check the particular query doing the full table scan (if index not applied use the proper index technique for the table) or if index already applied still doing full table scan then check may be table is having wrong indexing technique try to rebuild the index.  It will solve your issues somehow…… otherwise use next step of performance tuning.3. Enable the trace file before running your queries, then check the trace file using tkprof created output file. 
According to explain_plan check the elapsed time for each query, and then tune them respectively.To see the output of plan table you first need to create the plan_table from and create a public synonym for plan_table @$ORACLE_HOME/rdbms/admin/utlxplan.sql)SQL> create public synonym plan_table for sys.plan_table;4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql) by providing SQL_ID as you find in V$session view. You can provide rights to the particular schema for the use of SQL Tuning Advisor:         Grant Advisor to HR;         Grant Administer SQL Tuning set to HR;SQL Tuning Advisor will check your SQL structure and statistics. SQL Tuning Advisor suggests indexes that might be very useful. SQL Tuning Advisor suggests query rewrites. SQL Tuning Advisor suggests SQL profile. (Automatic reported each time)5. Now in oracle 11g SQL Access Advisor is used to suggests new index for materialized views. 6. More: Run TOP command in Linux to check CPU usage information and Run VMSTAT, SAR, PRSTAT command to get more information on CPU, memory usage and possible blocking.7. Optimizer Statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.8. A SQL Profile contains object level statistics (auxiliary statistics) that help the optimizer to select the optimal execution plan of a particular SQL statement. It contains object level statistics by correcting the statistics level and giving the Tuning Advisor option for most relevant SQL plan generation.DBMS_SQLTUNE.ACCEPT_SQL_PROFILE – to accept the correct plan from SQLplusDBMS_SQLTUNE.ALTER_SQL_PROFILE – to modify/replace existing plan from SQLplus.DBMS_SQLTUNE.DROP_SQL_PROFILE – to drop existing plan.Profile Type: REGULAR-PROFILE, PX-PROFILE (with change to parallel exec)SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM   DBA_SQL_PROFILES; 9. SQL Plan Baselines are a new feature in Oracle Database 11g (previously used stored outlines, SQL Profiles) that helps to prevent repeatedly used SQL statements from regressing because a newly generated execution plan is less effective than what was originally in the library cache. Whenever optimizer generating a new plan it is going to the plan history table then after evolve or verified that plan and if the plan is better than previous plan then only that plan going to the plan table. You can manually check the plan history table and can accept the better plan manually using the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can be used to change the status of plans in the SQL History to Accepted, which in turn moves them into the SQL Baseline and the EVOLVE_SQL_PLAN_BASELINE function of the DBMS_SPM package can be used to see which plans have been evolved. Also there is a facility to fix a specific plan so that plan will not change automatically even if better execution plan is available. The plan base line view: DBA_SQL_PLAN_BASELINES.Why use SQL Plan Baseline, How to Generate new plan using Baseline 10. SQL Performance Analyzer allows you to test and to analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set. Which factors are to be considered for creating index on Table? How to select column for index? 1. Creation of index on table depends on size of table, volume of data. If size of table is large and you need only few data < 15% of rows retrieving in report then you need to create index on that table. 2. 
Primary key and unique key automatically having index you might concentrate to create index on foreign key where indexing can improve performance on join on multiple table.3. The column is best suitable for indexing whose values are relatively unique in column (through which you can access complete table records. Wide range of value in column (good for regular index) whereas small range of values (good for bitmap index) or the column contains many nulls but queries can select all rows having a value. CREATE INDEX emp_ename ON emp_tab(ename);The column is not suitable for indexing which is having many nulls but cannot search non null value or LONG, LONG RAW column not suitable for indexing.CAUTION: The size of single index entry cannot exceed one-half of the available space on data block.The more indexes on table will create more overhead as with each DML operation on table all index must be updated. It is important to note that creation of so many indexes would affect the performance of DML on table because in single transaction should need to perform on various index segments and table simultaneously. What are Different Types of Index? Is creating index online possible? Function Based Index/Bitmap Index/Binary Tree Index/4. implicit or explicit index, 5. Domain Index You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.An index can be considered for re-building under any of these circumstances:We must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE, ANALYZE INDEX COMPUTE STATISTICS command* The % of deleted rows exceeds 30% of the total rows (depending on table length). * If the ‘HEIGHT’ is greater than 4, as the height of level 3 we can insert millions of rows. * If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.Differentiate the use of Bitmap index and Binary Tree index? Bitmap indexes are preferred in Data warehousing environment when cardinality is low or usually we have repeated or duplicate column. A bitmap index can index null value Binary-tree indexes are preferred in OLTP environment when cardinality is high usually we have too many distinct column. Binary tree index cannot index null value.If you are getting high “Busy Buffer waits”, how can you find the reason behind it? Buffer busy wait means that the queries are waiting for the blocks to be read into the db cache. There could be the reason when the block may be busy in the cache and session is waiting for it. It could be undo/data block or segment header wait. Run the below two query to find out the P1, P2 and P3 of a session causing buffer busy wait then after another query by putting the above P1, P2 and P3 values. SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait Where event = 'buffer busy waits'; SQL> Select owner, segment_name, segment_type from dba_extents Where file_id = &P1 and &P2 between block_id and block_id + blocks -1;What is STATSPACK and AWR Report? Is there any difference? As a DBA what you should look into STATSPACK and AWR report?STATSPACK and AWR is a tools for performance tuning. 
AWR is a new feature for oracle 10g onwards where as STATSPACK reports are commonly used in earlier version but you can still use it in oracle 10g too. The basic difference is that STATSPACK snapshot purged must be scheduled manually but AWR snapshots are purged automatically by MMON BG process every night. AWR contains view dba_hist_active_sess_history to store ASH statistics where as STASPACK does not storing ASH statistics.You can run $ORACLE_HOME/rdbms/admin/spauto.sql to gather the STATSPACK report (note that Job_queue_processes must be set > 0 ) and awrpt to gather AWR report  for standalone environment and awrgrpt for RAC environment.In general as a DBA following list of information you must check in STATSPACK/AWR report. ¦ Top 5 wait events (db file seq read, CPU Time, db file scattered read, log file sync, log buffer spac)¦ Load profile (DB CPU(per sec) < Core configuration and ratio of hard parse must be < parse)¦ Instance efficiency hit ratios (%Non-Parse CPU nearer to 100%)¦ Top 5 Time Foreground events (wait class is ‘concurrency’ then problem if User IO, System IO then OK)¦ Top 5 SQL (check query having low execution and high elapsed time or taking high CPU and low execution)¦ Instance activity¦ File I/O and segment statistics¦ Memory allocation¦ Buffer waits¦ Latch waits 1. After getting AWR Report initially crosscheck CPU time, db time and elapsed time. CPU time means total time taken by the CPU including wait time also. Db time include both CPU time and the user call time whereas elapsed time is the time taken to execute the statement.2. Look the Load profile Report: Here DB CPU (per sec) must be < Core in Host configuration. If it is not means there is a CPU bound need more CPU (check happening for fraction time or all the time) and then look on this report Parse and Hard Parse. If the ratio of hard parse is more than parse then look for cursor sharing and application level for bind variable etc.3. Look instance efficiency Report: In this statistics you have to look ‘%Non-Parse CPU’, if this value nearer to 100% means most of the CPU resource are used into operation other than parsing which is good for database health.4. Look TOP five Time foreground Event: Here we should look ‘wait class’ if the wait class is User I/O, system I/O then OK if it is ‘Concurrency’ then there is serious problem then look Time(s) and Avg Wait time(s) if the Time (s) is more and Avg Wait Time(s) is less then you can ignore if both are high then there is need to further investigate (may be log file switch or check point incomplete).5. Look Time Model Statistics Report: This is detailed report of system resource consumption order by Time(s) and % of DB Time.6. Operating system statistics Report7. SQL ordered by elapsed time: In this report look for the query having low execution and high elapsed time so you have to investigate this and also look for the query using highest CPU time but the lower the execution.What is the difference between DB file sequential read and DB File Scattered Read? DB file sequential read is associated with index read where as DB File Scattered Read has to do with full table scan. The DB file sequential read, reads block into contiguous (single block) memory and DB File scattered read gets from multiple block and scattered them into buffer cache.  
Dataguard Question/AnswerWhat are Benefits of Data Guard?Using Data guard feature in your environment following benefit:High availability, Data protection, Offloading backup operation to standby, Automatic gap detection and resolution in standby database, Automatic role transitions using data guard broker.Oracle Dataguard classified into two types:1. Physical standby (Redo apply technology)2. Logical Standby (SQL Apply Technology)Physical standby are created as exact copy (matching the schema) of the primary database and keeping always in recoverable mode (mount stage not open mode). In physical standby database transactions happens in primary database synchronized by using Redo Apply method by continually applying redo data on standby database received from primary database. Physical standby database can be opened for read only transitions only that time when redo apply is not going on. But from 11g onward using active data guard option (extra purchase) you can simultaneously open the physical standby database for read only access and can apply redo log received from primary in the meantime.Logical standby does not matching the same schema level and using the SQL Apply method to synchronize the logical standby database with primary database. The main advantage of logical standby database over physical standby is you can use logical standby database for reporting purpose while you are apply SQL.What are different services available in oracle data guard?1. Redo Transport Service: Transmit the redo from primary to standby (SYNC/ASYNC method). It responsible to manage the gap of redo log due to network failure. It detects if any corrupted archive log on standby system and automatically perform replacement from primary. 2. Log Apply Service: It applies the archive redo log to the standby. The MRP process doing this task.3. Role Transition service: it control the changing of database role from primary to standby includes: switchover, switchback, failover.4. DG broker: control the creation and monitoring of data guard through GUI and command line.What is different protection mode available in oracle data guard? How can check and change it?1. Maximum performance: (default): It provides the high level of data protection that is possible without affecting the performance of a primary database. It allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.2. Maximum protection: This protection mode ensures that no data loss will occur if the primary database fails. In this mode the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.3. Maximum availability: This provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.Step to create physical standby database?On Primary site Modification:1. Enable force logging: Alter database force logging;2. Create redolog group for standby on primary server:Alter database add standby logfile (‘/u01/oradata/--/standby_redo01.log) size 100m;3. 
Steps to create a physical standby database:
On the primary site:
1. Enable force logging: alter database force logging;
2. Create standby redo log groups on the primary server: alter database add standby logfile ('/u01/oradata/--/standby_redo01.log') size 100m;
3. Set up the primary database pfile by changing the required parameters:
Log_archive_dest_n -- the primary database must be running in archivelog mode
Log_archive_dest_state_n
Log_archive_config -- enable or disable the redo stream to the standby site
Log_file_name_convert, Db_file_name_convert -- these parameters are used when the standby uses a different directory structure; they update the location of datafiles on the standby database
Standby_file_management -- set this to AUTO so that when files are added to or dropped from the primary, the changes are made automatically on the standby
Db_unique_name, Fal_server, Fal_client
4. Create a password file for the primary.
5. Create a control file for the standby database on the primary site: alter database create standby controlfile as 'STAN.ctl';
6. Configure the listener and tnsnames on the primary database.
On the standby site:
1. Copy the primary site pfile and modify it as per the standby name and locations.
2. Copy the password file from the primary and modify the name.
3. Start the standby database in nomount using the modified pfile and create an spfile from it.
4. Use the created control file to mount the database.
5. Now enable DG Broker to activate the primary/standby connection.
6. Finally, start redo log apply.
How do you enable/disable the log apply service for a standby?
Alter database recover managed standby database disconnect; -- apply in the background
Alter database recover managed standby database using current logfile; -- apply in real time
Alter database start logical standby apply immediate; -- to start SQL Apply for a logical standby database
What are the different ways to manage a long gap on a standby database?
Due to network issues a gap is sometimes created between the primary and standby databases. Once the network issue is resolved the standby automatically starts applying redo logs to fill the gap, but when the gap is too long we can fill it through an RMAN incremental backup in three ways:
1. Check the actual gap, perform an incremental backup and use this backup to recover the standby site.
2. Create a control file for the standby on the primary and restore the standby using the newly created control file.
3. Register the missing archive logs.
Use the v$archived_log view to find the gap (archived but not yet applied), then find the current_scn of the standby, take an RMAN incremental backup on the primary starting from that SCN, and apply it on the standby site with the recover database noredo option. Use the control-file creation method only when the normal backup method fails: create a new control file for the standby on the primary site using backup current controlfile for standby; copy this control file to the standby site, then start the standby in nomount using the pfile and restore the standby with this control file: restore standby controlfile from '/location of file'; and start MRP to test.
If the alert.log still shows that logs are transferred to the standby but not applied, those logs need to be registered with the standby database using: alter database register logfile '/backup/temp/arc10.rc';
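A minimal sketch of the incremental-backup method; the SCN, paths and the disconnect option below are examples only:
SQL> select current_scn from v$database;   -- on the standby, note the SCN the standby has reached
RMAN> backup incremental from scn 1234567 database format '/backup/stby_%U.bkp';   -- on the primary
-- copy the backup pieces to the standby server, then on the standby:
RMAN> catalog start with '/backup/';
RMAN> recover database noredo;
SQL> alter database recover managed standby database disconnect from session;   -- restart MRP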
What is the Active Data Guard feature in Oracle 11g?
In a physical standby database prior to 11g you were not able to query the standby database while redo apply was going on; 11g solves this issue, and by querying current_scn from the v$database view you can see the standby advancing while redo is being applied. Thus the Active Data Guard feature of 11g allows a physical standby database to be open in read-only mode while media recovery is going on through the Redo Apply method, and also allows a logical standby to be open in read/write mode while media recovery is going on through the SQL Apply method.
How can you find out the backlog of the standby?
You can perform a join query on v$archived_log and v$managed_standby.
What is the difference between normal Redo Apply and Real-Time Apply?
Normally, once a log switch occurs on the primary, the archiver process transmits it to the standby destination and the remote file server (RFS) process on the standby writes this redo data into an archive log. Finally the MRP service applies these archives to the standby database. This is called the Redo Apply service.
In real-time apply, LGWR or the archiver on the primary writes redo data directly to the standby, so there is no need to wait for the current redo log to be archived. Once a transaction is committed on the primary, the committed change is available on the standby in real time, even without switching the log.
What are the background processes for Data Guard?
On the primary:
Log Writer (LGWR): collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to the standby.
Archiver Process (ARCn): one or more archiver processes make copies of online redo logs to the standby location.
Fetch Archive Log (FAL_SERVER): services requests for archive logs from clients running on different standby servers.
On the standby:
Fetch Archive Log (FAL_CLIENT): pulls archive logs from the primary site and automatically initiates transfer of archives when it detects a gap.
Remote File Server (RFS): receives redo on standby redo logs from the primary database.
Archiver (ARCn): archives the standby redo logs applied by the managed recovery process.
Managed Recovery Process (MRP): applies archived redo logs to the standby server.
Logical Standby Process (LSP): applies SQL to the standby server.
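The backlog question above is usually answered with queries along these lines (a rough sketch; column lists are trimmed for readability):
SQL> select process, status, thread#, sequence# from v$managed_standby;   -- RFS/MRP/ARCH activity on the standby
SQL> select thread#, max(sequence#) from v$archived_log group by thread#;                        -- last log received
SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;  -- last log applied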
ASM/RAC Question/Answer
What is the use of ASM? (or) Why is ASM preferred over a filesystem?
ASM provides striping and mirroring. You put the Oracle CRD files (control files, redo logs, datafiles) and the spfile on ASM; in 12c you can also put the Oracle password file in ASM. It facilitates online storage changes, and RMAN also recommends backing up ASM-based databases.
What are the different types of striping in ASM and their differences?
Fine-grained striping is smaller in size and always writes data in 128 KB stripes to each disk; coarse-grained striping is bigger and writes data as per the ASM allocation unit, which is 1 MB by default.
What is the default memory allocation for ASM? How will you back up ASM metadata?
The default memory allocation for ASM is 1 GB in Oracle 10g, 256 MB in Oracle 11g, and back to 1 GB in 12c. You can back up ASM metadata (the ASM disk group configuration) using md_backup.
How do you find out which databases are connected to ASM and list the disk groups?
ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;
ASMCMD> lsdg
SQL> select NAME, ALLOCATION_UNIT_SIZE from v$asm_diskgroup;
What are the required parameters for ASM instance creation?
INSTANCE_TYPE = ASM -- by default it is RDBMS
DB_UNIQUE_NAME = +ASM1 -- by default it is +ASM, but you need to alter it to run multiple ASM instances
ASM_POWER_LIMIT = 11 -- defines the maximum power for a rebalancing operation on ASM; by default it is 1 and can be increased up to 11. The higher the limit, the more resources are allocated, resulting in faster rebalancing. It is a dynamic parameter, useful when rebalancing data across disks.
ASM_DISKSTRING = '/u01/dev/sda1/c*' -- specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve disk group mount time and the speed of adding a disk to a disk group.
ASM_DISKGROUPS = DG_DATA, DG_FRA -- the list of disk groups that will be mounted at instance startup, where DG_DATA holds all the datafiles and DG_FRA holds the fast recovery area, including online redo logs and control files. Typically the FRA disk group is sized at twice the DATA disk group, as it holds all the backups.
How do you create an spfile for an ASM instance?
SQL> CREATE SPFILE FROM PFILE = '/tmp/init+ASM1.ora';
Start the instance with the NOMOUNT option. Once an ASM instance is present, disk groups can be used for the following parameters in a database instance to allow ASM file creation:
DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILES, LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_DEST, STANDBY_ARCHIVE_DEST
What are the disk group redundancy levels?
Normal redundancy: two-way mirroring with 2 failure groups, plus a third quorum failure group optionally used to store voting files.
High redundancy: three-way mirroring requiring three failure groups.
External redundancy: no mirroring, for disks that are already protected using RAID at the OS level.
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1, '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2;
We are going to migrate to new storage. How will you move an ASM database from storage A to storage B?
First prepare the disks at the OS level so that both the new and the old storage are accessible to ASM, then simply add the new disks to the ASM disk group and drop the old disks. ASM performs an automatic rebalance whenever storage changes; there is no need for manual I/O tuning.
ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
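A small sketch of watching and tuning that rebalance from the ASM instance; the power value below is only an example:
ASM_SQL> alter diskgroup DATA rebalance power 8;
ASM_SQL> select group_number, operation, state, power, sofar, est_minutes from v$asm_operation;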
What are the required components of an Oracle RAC installation?
1. Oracle ASM shared disks to store OCR and voting disk files.
2. OCFS2 for a Linux clustered database.
3. A certified Network File System (NFS).
4. Public IP: TCP/IP configuration (to manage the database storage system).
5. Private IP: to manage the RAC clusterware (cache fusion) internally.
6. SCAN IP (listener): all connections to the Oracle RAC database use the SCAN in their client connection string; with SCAN you do not have to change the client connection even if the configuration of the cluster changes (nodes added or removed). A maximum of 3 SCAN listeners run in Oracle.
7. Virtual IP: an alternate IP assigned to each node, used to deliver the notification of a node-failure message to an active node without waiting for the actual timeout, so that a switchover can happen automatically to another active node that continues to process user requests.
Steps to configure a RAC database:
1. Install the same OS level on each node or system.
2. Create the required groups and the oracle user account.
3. Create the required directory structure for the CRS and DB homes.
4. Configure kernel parameters (sysctl.conf) as per the installation doc and set shell limits for the oracle user account.
5. Edit the /etc/hosts file and specify the public/private/virtual IP for each node.
6. Create the required partitions for OCR/voting disk and the ASM disk groups.
7. Install the OCFS2 and ASM RPMs and configure them on each node.
8. Install the clusterware binaries, then the Oracle binaries, on the first node.
9. Invoke netca to configure the listener.
10. Finally, invoke DBCA to configure ASM to store the database CRD files and create the database.
What are the structural changes in Oracle 11g R2?
1. Grid home contains both ASM and clusterware (in 10g the Oracle binaries and ASM binaries were in separate homes).
2. OCR and voting disk are stored on ASM.
3. SCAN listener.
4. srvctl can be used to manage disk groups, the SCAN listener, Oracle home, ONS, VIP, OC4J.
5. GSD.
What are the Oracle RAC services?
Cache Fusion: cache fusion is a technology that uses high-speed inter-process communication (IPC) to provide cache-to-cache transfer of data blocks between different instances in the cluster. This eliminates disk I/O, which is very slow. For example, instance A needs to access a data block which is owned/locked by another instance B. In such a case instance A requests the data block from instance B and accesses the block through IPC; this concept is known as cache fusion.
Global Cache Service (GCS): this is the heart of cache fusion, which maintains data integrity in a RAC environment when more than one instance needs a particular data block. For instance A's request, GCS tracks that information; if it finds read/write contention (one instance is ready to read while the other is busy updating the block), the holding instance creates a CR image of that block in its own buffer cache and ships this CR image to the requesting instance via IPC. In case of write/write contention (both instances are ready to update the particular block), the holding instance creates a PI image of that block in its own buffer cache, makes the redo entries and ships the block to the requesting instance. The dba_hist_seg_stats view can be used to check the latest objects shipped.
Global Enqueue Service (GES): GES performs concurrency control (more than one instance accessing the same resource) on dictionary cache locks, library cache locks and transactions. It handles the different locks such as transaction locks, library cache locks, dictionary cache locks and table locks.
Global Resource Directory (GRD): to perform any operation on a data block we need to know the current state of that data block. GCS (LMSn + LMD) and GES keep track of the resources, their location and their status (for each datafile and each cached block), and this information is recorded in the Global Resource Directory (GRD). Each instance maintains its own GRD; whenever a block is transferred out of a local cache, its GRD is updated.
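As a rough illustration, the volume of cache fusion traffic between instances can be observed from the global performance views (the exact statistic names can vary slightly between versions):
SQL> select inst_id, name, value from gv$sysstat where name in ('gc cr blocks received', 'gc current blocks received');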
What are the main components of Oracle RAC clusterware?
OCR (Oracle Cluster Registry): OCR manages Oracle clusterware information (all nodes, CRS, CSD, GSD info) and Oracle database configuration information (instance, services, database state info).
OLR (Oracle Local Registry): OLR resides on every node in the cluster and manages the Oracle clusterware configuration information for that particular node. The purpose of OLR, in the presence of OCR, is to initiate startup with the local node's information, since the OCR lives in the Grid home on ASM and an ASM file is only available once Grid has started. The OLR makes it possible to locate the voting disk, which also holds the information about the other nodes needed for communication.
Voting disk: the voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for a node to be a member of the cluster. If a node fails or gets separated from the majority it is forcibly rebooted and, after rebooting, is added back to the surviving nodes of the cluster.
Why is the voting disk placed on a quorum disk, or what is the split-brain syndrome issue in a database cluster?
The voting disk is placed on a quorum disk (optionally) to avoid the possibility of split-brain syndrome. Split-brain syndrome is a situation where one instance is trying to update a block and at the same time another instance is also trying to update the same block; in fact it can happen only when cache fusion is not working properly. Voting disks are always configured in an odd-numbered series because the loss of more than half of your voting disks will cause the entire cluster to fail; with an even number, node eviction cannot decide which node needs to be removed on failure. You must store OCR and voting disks on ASM. If necessary you can dynamically add or replace voting disks after you complete the cluster installation process, without stopping the cluster.
ASM backup:
You can use md_backup to restore the ASM disk group configuration in case of ASM disk group storage loss.
OCR and votefile backup:
Oracle clusterware automatically creates OCR backups (auto backups managed by crsd) every four hours, retaining at least 3 backups (backup00.ocr, day.ocr, week.ocr on the GRID home), but you can take an OCR backup manually at any time using:
ocrconfig -manualbackup   -- to take a manual backup of the OCR
ocrconfig -showbackup     -- to list the available backups
ocrdump -backupfile 'bak-full-location'   -- to validate the backup before any restore
ocrconfig -backuploc      -- to change the configured OCR backup location
dd if='vote disk name' of='backup file name'   -- to take a votefile backup
To check the OCR and voting disk locations:
crsctl query css votedisk
/etc/oracle/ocr.loc or use ocrcheck
ocrcheck   -- to check the OCR corruption status (if any)
crsctl check crs/cluster   -- to check CRS status on the local and remote nodes
Moving OCR and voting disk:
Log in as the root user, since the OCR is owned by root, and for the voting disk stop all CRS first.
ocrconfig -replace ocrmirror/ocr   -- adding/removing the OCR mirror and OCR file
crsctl add/delete css votedisk     -- adding and removing voting disks in the cluster
olsnodes -n -p -i   -- to list all nodes in your cluster (run as root) and check public/private/VIP info
How can you restore the OCR in a RAC environment?
1. Stop clusterware on all nodes and restart one node in exclusive mode to restore. The nocrs option ensures the crsd process and OCR do not start with other nodes.
# crsctl stop crs
# crsctl stop crs -f
# crsctl start crs -excl -nocrs
Check if crsd is still running, and if so stop it: # crsctl stop resource ora.crsd -init
2. If you want to restore the OCR to an ASM disk group, you must check/activate/repair/create the disk group with the same name and mount it from the local node. If you are not able to mount that disk group locally, drop it and re-create it with the same name. Finally run the restore with the current backup.
# ocrconfig -restore file_name
3. Verify the integrity of the OCR and stop exclusive-mode CRS.
# ocrcheck
# crsctl stop crs -f
4. Run the ocrconfig -repair -replace command on all other nodes where you did not run the restore. For example, if you restored node 1 and have 4 nodes, run it on the remaining nodes 2, 3 and 4.
# ocrconfig -repair -replace
5. Finally start all the nodes and verify with the CVU command.
# crsctl start crs
# cluvfy comp ocr -n all -verbose
Note: using ocrconfig -export / ocrconfig -import also enables you to restore the OCR.
Why does Oracle recommend using an OCR auto/manual backup to restore the OCR instead of export/import?
1. An OCR auto/manual backup is a consistent snapshot of the OCR, whereas an export is not.
2. Backups are created while the system is online, but you must shut down all nodes in the clusterware to take a consistent export.
3. You can inspect a backup using the OCRDUMP utility, whereas you cannot inspect the contents of an export.
4. You can list and see the backups using ocrconfig -showbackup, whereas you must keep track of each export yourself.
How do you restore voting disks?
1. Shut down CRS on all nodes in the cluster:
crsctl stop crs
2. Locate the current location of the voting disks and restore each voting disk using the dd command from a previous good backup taken with the same dd command:
crsctl query css votedisk
dd if= of=
3. Finally start CRS on all nodes:
crsctl start crs
How do you add a node or instance in a RAC environment?
1. From the ORACLE_HOME/oui/bin location of node1, run the addNode.sh script:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/root.sh script on node3.
3. From an existing node run srvctl config db -d db_name, then create a new mount point.
4. mkdir -p ORACLE_HOME_NEW/"mount point name"
5. Finally run the cluster installer for the new node and update the clusterware inventory.
Alternatively, you can start DBCA and, from the instance management page, choose "add instance" and follow the next steps.
How do you identify the master node in RAC?
# /u1/app/../crsd> grep MASTER crsd.log | tail -1
(or) cssd> grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.
What is the difference between crsctl and srvctl?
crsctl manages cluster-related operations, such as starting/enabling cluster services, whereas srvctl manages Oracle-related operations, such as starting/stopping Oracle instances. Also, in Oracle 11g R2 srvctl can be used to manage the network, VIPs, disks, etc.
What are ONS/TAF/FAN/FCF in RAC?
ONS is a part of the clusterware and is used to transfer messages between the node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes, either node UP/DOWN or database UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net services which moves a session to a backup connection whenever a session fails.
FCF is a feature of the Oracle client which receives notifications from FAN and processes them accordingly. It cleans up connections when a DOWN event is received and adds new connections when an UP event is received from FAN.
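As a hedged illustration of how TAF is usually enabled on the client side, a tnsnames.ora entry might look roughly like this (the alias, host and service names are placeholders):
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl_srv)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )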
How does OCSSD start if the voting disk and OCR reside on ASM?
Without access to the voting disk there is no CSS to join the cluster and start the clusterware; the voting disk is stored in ASM, yet per the Oracle startup order CSSD starts before ASM, so how is it possible to start the cluster? This is because the ASM disk header in 11g R2 has new metadata, kfdhdb.vfstart and kfdhdb.vfend, which tell CSS where to find the voting files. This does not require the ASM instance to be up; once CSS gets the voting files it can join the cluster easily.
Note: Oracle clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
Upgrade/Migration/Patches Question/Answer
What are the database patch types and how do you apply them?
CPU (Critical Patch Update, or one-off patch): security fixes released each quarter. They are cumulative, meaning they include fixes from previous Oracle security alerts. To apply a CPU you must use the opatch utility.
- Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
- Set your current directory to the directory where the patch is located and then run the opatch utility.
- After applying the patch, start up all your services and listeners, start up all your databases, log in as sysdba and run the catcpu.sql script.
- Finally run utlrp.sql to revalidate invalid objects.
To roll back a CPU patch:
- Shut down all instances and listeners.
- Go to the patch location and run opatch rollback -id 677666
- Start all the databases and listeners and run the catcpu_rollback.sql script.
- Bounce the database and run the utlrp.sql script.
PSU (Patch Set Update): security fixes and priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future, until the database is upgraded to a newer version. You need two things to apply a PSU patch: the latest version of opatch, and the PSU patch you want to apply.
1. Check and update the opatch version: go to ORACLE_HOME/OPatch and run opatch version. To update to the latest opatch, take a backup of the OPatch directory, remove the current OPatch directory and finally unzip the downloaded opatch into the OPatch directory. Then check your opatch version again.
2. To apply the PSU patch:
unzip p13923374_11203_.zip
cd 13923374
opatch apply
In the case of RAC, the opatch utility will prompt for an OCM (Oracle Configuration Manager) response file; you have to provide the complete path of the OCM response file if you have already created one.
3. Post-apply steps: start the database with sys as sysdba.
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory   -- to check which PSU patches are installed
opatch rollback -id 13923374   -- to roll back a patch you have applied
opatch nrollback -id 13923374, 13923384   -- to roll back multiple patches you have applied
SPU (Security Patch Update): an SPU cannot be applied once a PSU has been applied, until the database is upgraded to the next base version.
Patchset (e.g. 10.2.0.1 to 10.2.0.3): applying a patchset usually requires the OUI. Shut down all database services and listeners, then apply the patchset to the Oracle binaries. Finally start up the services and listener and run the post-patch scripts.
Bundle patches: these are for Windows and Exadata and include both the quarterly security patches and recommended fixes.
You have a collection of nearly 100 patches. How can you apply only one of them?
Use napply with the specific patch id; you can apply one patch from a collection of many by using opatch util napply -id 9 -skip_subset -skip_duplicate. This will apply only patch 9 within the many extracted patches.
What is a rolling upgrade?
It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain one node while the other nodes run different software versions.
What happens when you use STARTUP UPGRADE?
The STARTUP UPGRADE command enables you to open a database based on an earlier version. It restricts logons to sysdba and disables system triggers. After STARTUP UPGRADE only specific view queries can be used; no other views can be used until catupgrd.sql has been executed.
0 notes
2daygeek · 6 years ago
Text
How To Reload Partition Table In Linux Without System Reboot?
2DayGeek: How To Reload Partition Table In Linux Without System Reboot?
View On WordPress
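For reference, the commands usually used to make the kernel re-read a changed partition table without rebooting look roughly like this (the device name is only an example; partprobe ships with parted and partx with util-linux):
# partprobe /dev/sdb            # ask the kernel to re-read the partition table of /dev/sdb
# partx -u /dev/sdb             # update the kernel's view of the partitions on /dev/sdb
# blockdev --rereadpt /dev/sdb  # alternative ioctl-based re-read
# cat /proc/partitions          # verify that the new layout is visible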
0 notes
atticusblog2016-blog · 8 years ago
Text
New Post has been published on Atticusblog
New Post has been published on https://atticusblog.com/the-best-gaming-router-in-2017/
THE BEST GAMING ROUTER IN 2017
We know what you are thinking: "Best router for gaming? Any serious gamer is on a hard-wired connection!" We hear you; however, everyone still wants WiFi in their home thanks to the proliferation of smartphones, tablets, and other wireless devices. And with the arrival of 802.11ac you can genuinely game over a wireless router connection if you're dealing with cable-routing troubles or another barrier that prevents you from using a wired connection. Plus, the newest wireless protocol (ac) is much faster than its predecessor, 802.11n, and also offers new performance-boosting features such as beamforming and MU-MIMO. This is good news for people who may have wanted to try some wireless gaming in the past but gave up because of insufficient bandwidth.
To help determine which is the best wireless router for gaming, we rounded up five of the top models currently available and put them through their paces, testing at both close and long range through walls. We also took a look at their gaming features (if they had any, as not all of them do), and arrived at a conclusion based on both objective and subjective testing. Based on our test results, there were several excellent options but one clear winner.
Best Fishing Techniques and Tips – Guide for Beginners
Whether you're a novice or just want to improve your skills at landing a big fish, the following fishing techniques and tips will help you experience the thrill of reeling in a 30-pound striper. You'll be grilling fresh fish for dinner tonight, guaranteed! There are lots of fishing techniques that attract the attention of anglers; in this section, we will make it simpler to discover the kind of fishing tips that will work best for you.
Primary freshwater fishing techniques to start your day!
If you enjoy fishing from a boat, then you will certainly enjoy freshwater fishing, which is good for beginning anglers because it can be enjoyed with a simple tackle setup. Gearing up is the first thing to consider; the core of your outfit will be the rod and reel. You can buy a separate rod and reel combination for pretty much any stretch of water that you're ever likely to fish. However, if you know exactly the type of fishing you want to do, the right equipment depends largely on what you intend to do with it.
When fishing in a freshwater lake, it is vital to get a map of that water. A good fishing map can be compared to a pirate's treasure map: an accurate illustration of the lake outline and contours will lead you to fishing success. Be sure to take a minute to learn the symbols and study the map before deciding to go on your fishing trip.
There are a number of materials that can be used as bait, but determining the best bait to use is by no means easy. Unless you want to miss bites, your baits must be varied. There are two main choices when it comes to fishing bait: artificial and natural bait. Some of the best natural freshwater fishing baits include leeches, grasshoppers, crickets, and worms. As a rule of thumb, always check local fishing regulations to make sure the bait you pick is legal for the lake you're fishing. Also, keep the water type in mind while fishing and use the proper bait to make sure it suits the kind of fish you're going after.
Water temperature impacts fish behavior
The majority of freshwater fish species have a climate and a specific water temperature that they prefer. Water that is too hot can make fish in lakes and rivers sluggish, and the same happens when temperatures are cooler or lower. Understanding this temperature-driven behavior is necessary and is considered one of the best fishing strategies an angler can learn. It will be useful if you check the weather regularly; it will also help you decide what type of baits and lures are best to use. Always check the forecasts to see whether the conditions are favorable or not.
Information on the Gaming Mouse
The gaming mouse is an innovation in the world of computer gaming. Rather than being stuck with a standard computer mouse with two buttons, the gaming mouse brings more to the table than the usual mouse can handle. Gaming mice are constantly being updated with new or improved capabilities, and manufacturers are tailoring their products to be more effective for the user. This hardware allows users to become more precise, make use of more buttons, and become a dominating force in the world of online gaming.
Gaming mice use optical technology to track the mouse's motion on the surface
With that feature comes the ability of the mouse to track DPI (dots per inch). A mouse that tracks 2000 DPI has smoother tracking than one with 800, which leads to improved cursor placement, a great addition for gaming. Another excellent feature of the gaming mouse is the use of additional buttons compared with the standard computer mouse. Gaming mice are known for their extra buttons. These buttons let the user take some of the keyboard's workload and place it on the mouse. With mouse key binding, the additional mouse keys may be used for other actions in game. Those functions can be set to a whole range of actions, such as reloading a weapon or casting a spell. Gaming mice can have from three to over 10 extra buttons. Certain gaming mice offer the option of changing the weight of the mouse through the use of additional weights. This is a great feature that lets the user personalize the mouse to their preference. Storage for the extra weights is provided, and the user can add weights of different sizes and amounts to the mouse. This feature creates a specific resistance for the mouse, which leads to improved precision for the user since it matches their style and control. The user may also be given the option of altering the length or the width of the mouse: mouse users who prefer the mouse to act as a support for the palm of their hand may choose to increase the length, and the same applies to the width.
How to Buy a Wireless Router – Some Suggestions
It is no secret that wireless internet is becoming the usual way to connect our favorite devices. Chances are that more electronics in your home are wireless than are not, even though there are definite advantages to hard-wired network connections, including faster speeds, less interference, and better security.
But the advantages of wireless networking are nearly as compelling. You can connect nearly 250 wireless devices to a single router, place them anywhere in your home (within range), and those devices can be slimmer, sleeker and more portable than their Ethernet-wired counterparts.
The device that makes wireless networking possible is the router.
It has a few critical jobs. First, it takes your internet signal and broadcasts it wirelessly. It also manages traffic over the network so that multiple devices can use the network without there being a traffic jam. Your router additionally acts as a firewall for your security, and always includes administrative settings for network management.
So, selecting the proper router for your setup is extremely important.
To begin with, it's vital to figure out what types of devices you're going to connect to your network. Almost everything is wireless nowadays, so be sure to think outside the box for this. Will you be connecting mobile phones? Tablets? Computers? How about smart TVs, Blu-ray players, or game consoles? Google Chromecast or Apple TVs? Security systems? Printers?
Are you planning on adding any of these things to your network in the near future? That is important too! You're likely to have this router for around five years, so if you are planning to add electronics to your home, it is better to buy a better router.
OK, got your list?
Just because a device is connected to the router doesn't mean that it is a big user of your bandwidth. Printers, for example, send and receive very small amounts of data over the network. Video streaming and online gaming use lots of bandwidth. If you have many devices used for higher-bandwidth activities, you should consider a higher-end router.
Understanding Routers:
When you visit the store and look down the networking aisle, you'll see a TON of boxes. Routers are categorized in a few ways: by their standard, which these days is either N or AC, and by their bandwidth, which can be anywhere from 150Mbps to 2400Mbps. A typical router box will say something like "N300", which tells you that it is wireless networking standard N and can handle 300Mbps.
What Standard Should I Choose?
0 notes
Text
[Tips]Get Windows 10 activated
Windows 10 was free for a year, its install base soared, and numerous users rushed to upgrade. However, behind the free upgrade there are also many hidden problems around activation. So let us start with the basic activation policy, get a comprehensive understanding of the Windows 10 activation mechanism, find the root cause of activation failures, and then work out a smooth path to activation, so that everyone can use an activated Windows 10.
A little background: the two Windows 10 activation methods. Windows 10 uses two activation methods. One is activation via a digital license (digital entitlement); the other is activation via a 25-character product key. The digital license is a new activation method in Windows 10 that does not require the user to enter a product key; it is a set of information recorded on Microsoft's activation servers. Installation and upgrade methods vary, so which way is your Windows 10 activated?
1. When an upgrade installation cannot be activated. If the system is running a genuine Windows 7 or Windows 8.1, then after upgrading to Windows 10 through system updates the system will automatically activate and create a digital license for the current device on the server.
Some computers were, for whatever reason, in an inactive state before upgrading to Windows 10; after the upgrade that state is retained, so Windows 10 is inactive and cannot be activated. In that case you need to roll back to the original lower version, activate Windows with the original version's key, and then upgrade to Windows 10, which can then be activated automatically. If it is not activated automatically after the upgrade, open the "Update & Security" section of Settings and click "Activation" to activate manually (Figure 1). If you upgraded to Windows 10 normally but activation failed, it may be because the activation server is busy; wait a while and then try to activate again from the control panel.
hint: To see the activation status in Windows 7, click the Start button, right-click "Computer", select "Properties" and look under "Windows Activation". In Windows 8.1, right-click the Start menu and select Control Panel, click System and Security, and then click System to view it under Windows Activation.
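A quick way to confirm the activation state from an elevated command prompt (these slmgr switches have been standard since Windows Vista) is:
slmgr.vbs /xpr   (shows whether Windows is permanently activated)
slmgr.vbs /dli   (shows the partial product key and current license status)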
Some users run the Windows 10 installation package from within the lower version of the system but, instead of choosing the upgrade installation, choose "Install now". The installer then does not check the activation state of the original version, so after installation Windows 10 also cannot be activated automatically. If you enter the original Windows 7 or Windows 8.1 product key, you will be told "This product key is not available" along with an error code (0xC004C003, 0xC004F034 or 0xC004F050). In this case you need to uninstall Windows 10, repair the lower version of the system, and then perform the upgrade installation and activation.
Tips: To ensure activation is successful, you can first restore the factory settings and then upgrade to Windows 10. When upgrading from an ISO or installation media, select "Upgrade this PC" instead of "Install now", which ensures that the Windows 10 edition will match the currently running system edition.
hint: Be prepared, just in case. Create a recovery partition and a recovery boot disk before upgrading, and copy the system backup from the recovery partition to the recovery boot disk, so that if the upgrade is unsuccessful or an error occurs the system can be restored.
2. Activation after reinstalling Windows 10. Do you need to reactivate after reinstalling Windows 10? There is no single answer. If the previous Windows 10 was activated using a digital license (see the table above), you can skip entering a key when the installer asks for one, and the system will automatically use the server-side digital license to activate after the installation completes. But note that you must log in with the original Microsoft account; a new account cannot be used. If the previous Windows 10 was activated with a product key, the key must be entered again when reinstalling. Before and after reinstalling, check the activation status via "Settings → Update & Security → Activation".
Tip: Reset the product key to force activation. In the "Run" window, enter the "SLMGR.VBS -REARM" command, confirm and restart, then enter the product key. Then enter "SLMGR.VBS -ATO" to force activation. If you cannot enter a product key through the UI, use the "SLMGR.VBS -IPK XXXX-XXXX-XXXX-XXXX" command (replace the Xs with the product key) and restart.
3. Activation of a clean install on new hardware. Installing Windows 10 on a new device, or installing a higher edition of Windows 10 (such as Pro) on a Windows 10 device that already has a lower edition (such as Home), requires product key activation (Figure 3). Enter the product key during setup and the system will connect and activate automatically after the installation is complete.
4. Activation for Insider Preview upgrades. For the Windows 10 Insider Preview builds used by program members, as long as the lower preview build used before the upgrade was activated, the new build is automatically activated after the upgrade. Installing a Windows 10 Insider Preview on a device that has never had an activated version of Windows 10 requires a product key. If you want to get the latest Windows 10 Insider Preview build but the device is running Windows 7 or Windows 8.1, you need to first use the free upgrade option to upgrade to Windows 10. Then go to "Start → Settings → Update & Security → Advanced Options" and choose "Get Insider builds" to become a member of the Windows Insider program.
hint: Starting with Windows Insider build 10565, you can activate Windows 10 directly using a Windows 7 or Windows 8.1 key. To do so, select "Settings → Update & Security → Activation" and then click "Change product key".
0 notes