#difference between temporary table and global temporary table in oracle
Question-81: Explain the difference between a global temporary table and a regular table in Oracle.
Interview Questions on Oracle SQL & PL/SQL Development. For more questions like this, follow the main blog.
A global temporary table and a regular table in Oracle serve different purposes and have distinct characteristics. Regular table: a regular table in Oracle is a permanent database object that stores data persistently. It holds data that is retained for a long duration and can be accessed by multiple sessions. Global temporary table: a global temporary table is a special type of table whose definition is permanent and visible to all sessions, but whose data is private to the session (or transaction) that inserted it and is removed automatically at commit or session end.
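The contrast can be shown with a minimal DDL sketch. The table and column names below are invented for illustration, not taken from any real schema:

```sql
-- Regular table: both the definition and the rows are permanent
-- and visible to every session.
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  status   VARCHAR2(20)
);

-- Global temporary table: the definition is permanent and shared,
-- but each session sees only the rows it inserted itself.
CREATE GLOBAL TEMPORARY TABLE orders_staging (
  order_id NUMBER,
  status   VARCHAR2(20)
);
```

Two sessions can insert into `orders_staging` concurrently without ever seeing each other's rows, which is what makes it useful for per-session scratch work.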
Global Temporary Tables
Applications regularly use some form of temporary data store for processes that are too complicated to complete in a single pass. Often, these temporary stores are defined as database tables or PL/SQL tables. From Oracle 8i onward, the maintenance and management of temporary tables can be delegated to the server by using Global Temporary Tables. Types of Temporary Tables: The…
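The truncated "Types of Temporary Tables" discussion refers to the two `ON COMMIT` variants of a global temporary table. A hedged sketch (table names invented):

```sql
-- Transaction-specific GTT: rows are deleted at each COMMIT.
CREATE GLOBAL TEMPORARY TABLE gtt_txn (
  id NUMBER
) ON COMMIT DELETE ROWS;

-- Session-specific GTT: rows survive COMMIT and are deleted
-- only when the session ends.
CREATE GLOBAL TEMPORARY TABLE gtt_session (
  id NUMBER
) ON COMMIT PRESERVE ROWS;
```

The choice between the two depends on whether the intermediate data needs to outlive a single transaction.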
Will the real rules please stand up?
One of my pandemic projects is to map the differences between a pointed and an unpointed text of the Hebrew Bible. I am doing this using the very memorable and capable left-to-right version of the square text called SimHebrew.
I had 'my own version' of a left-to-right abbreviation in capital letters, with a few punctuation marks for the gutturals aleph and ayin. But I converted to the lower-case version developed by Jonathan Orr-Stav. (Such a Latin-letter code is an abbreviation because it takes one byte as opposed to 7 bytes in rendered Unicode, and one byte as opposed to 2 internally. Using a Unicode database is really awkward for me, and the technology was new 10 years ago, so it was a non-starter.)
My method of data capture is to use the unpointed Mechon-Mamre text, which can be downloaded from their site (one book at a time), and to run it through the SimHebrew converter. Then I manipulate the text in Notepad until I have legitimate insert statements for my database. This is somewhat error-prone, but eventually I get a clean script. (I use a combination of Word and Notepad for the necessary global changes.)
After this I create a temporary table that matches: book, chapter, a conversion of the alef-betic verse number to a real verse number, and then, word by word: the Hebrew word in SimHebrew, the fully pointed word from the Leningrad codex, the stem code, raw word form, and semantic domain from my database, the word number relative to the start of the verse, and the word id (assigned by an Oracle sequence and connecting to my word table).
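A sketch of what such a staging table might look like, tying in the global-temporary-table theme of the posts above. The column names and sizes here are my illustrative guesses, not the author's actual schema:

```sql
-- Hypothetical staging table for comparing the unpointed SimHebrew
-- text against the pointed Leningrad codex, word by word.
CREATE GLOBAL TEMPORARY TABLE word_compare (
  book      VARCHAR2(20),
  chapter   NUMBER,
  verse     NUMBER,        -- converted from the alef-betic verse number
  sim_word  VARCHAR2(30),  -- the word in SimHebrew
  wlc_word  VARCHAR2(30),  -- fully pointed word from the Leningrad codex
  stem_code VARCHAR2(10),
  raw_form  VARCHAR2(30),
  domain    VARCHAR2(20),  -- semantic domain
  word_no   NUMBER,        -- word number relative to the start of the verse
  word_id   NUMBER         -- from an Oracle sequence; joins to the word table
) ON COMMIT PRESERVE ROWS;
```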
I recently added 2 Kings to my data. At first I assigned the wrong chapter numbers - a little oversight. That resulted in over 5000 differences in my calculations. When I fixed the issue, the differences went down to about 230. I then discovered that M-M is treating דִּבְיוֹנִ֖ים (dbivn) which they render as dung of a dove as two words (db ivnh - apparently a poor substitute for salt). Fixing those 4 instances dropped my mismatches by 100. Then word by word I work through the remainder until the differences are accounted for.
My program that calculates the words also suggests a code snippet that will fix any discrepancy (as long as I put it in the right place in the code.) The program as it stands gets 99% of the words right on first pass now. So my predicted SimHebrew Bible is about 99% right. Sorry - for the Bible, that's not good enough.
About this time in the history of the project, I ask, Is there a better way? I know I can finish the way I have begun and my brute force spelling changes are next to 0. I have over a third of the words in my test data now and over 75% of the stems represented. But the code is specific to prefix and suffix and sometimes to particular vowel combinations in the WLC. What are the real rules?
Native speakers who 'just know' the pronunciation - what are they really doing? Certainly they have retention in memory by word form, by context, and by stem. But can I get the program to discover the shortcuts that people use? And in some ways see what they are doing. (And thereby discover the nature of the evolution of language usage.)
Here's an example: the 31 uses so far of עצר (yxr)
For yxr, the following rules are noted.
First you see that the holem is rendered as /o/ in lines 1 and 2 and others. This is the default, though some words, with some exceptions, do not render the o.
Next (line 3) you will note that the qamats is not rendered as /o/ (or vav for RTL Hebrew), but it is in line 4; the qamats under the second letter of the stem is rendered with prefix /i/ and /t/, in each case without suffix.
Tsere is rendered as /i/ in line 6 and 12 - but only the tsere under the last letter of the prefix. (I would like to color code these but the new editor absolutely ruins rtl display with embedded color coding.)
Notice that the final h is dropped in line 19.
Lines 26 and 27 would have rendered the qamats in the closed syllable as /o/ if the suffix had not been /u/. That's a general rule - but there are exceptions here too and exceptions to the exceptions.
| Line | Ref (book chap:vs (word within vs)) | Stem | Word Form | Morph | Sim Source | Sim Calc. | WLC Word | Domain | Rendering |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 Chronicles 14:10(30) | yxr | iyxr | i/yxr | -iyxvr | -iyxor | ־יַעְצֹר | BOUND | יעצר will coerce |
| 2 | 1 Chronicles 29:14(7) | yxr | nyxr | n/yxr | -nyxvr | -nyxor | ־נַעְצֹר | BOUND | נעצר contained of |
| 3 | 2 Chronicles 13:20(2) | yxr | yxr | yxr | -yxr | -yxr | ־עָצַר | BOUND | עצר did coerce |
| 4 | 2 Kings 4:24(9) | yxr | tyxr | t/yxr | -tyxvr | -tyxor | ־תַּעֲצָר | BOUND | תעצר do detain |
| 5 | 2 Chronicles 7:13(2) | yxr | ayxr | a/yxr | ayxvr | ayxor | אֶעֱצֹר | BOUND | אעצר I contain |
| 6 | 1 Kings 8:35(1) | yxr | bhyxr | bh/yxr | bhiyxr | bhiyxr | בְּהֵעָצֵר | BOUND | בהעצר when is contained |
| 7 | 2 Chronicles 6:26(1) | yxr | bhyxr | bh/yxr | bhiyxr | bhiyxr | בְּהֵעָצֵר | BOUND | בהעצר when are contained |
| 8 | Amos 5:21(6) | yxr | byxrticm | b/yxr\ticm | byxrvticm | byxroticm | בְּעַצְּרֹתֵיכֶם | COVENANT | בעצרתיכם in your conclaves |
| 9 | 2 Kings 9:8(9) | yxr | vyxvr | v/yxvr | vyxvr | vyxur | וְעָצוּר | BOUND | ועצור the contained |
| 10 | 1 Kings 21:21(11) | yxr | vyxvr | v/yxvr | vyxvr | vyxur | וְעָצוּר | BOUND | ועצור the contained |
| 11 | Proverbs 30:16(2) | yxr | vyxr | v/yxr | vyvxr | vyoxr | וְעֹצֶר | BOUND | ועצר and contained |
| 12 | 1 Chronicles 21:22(17) | yxr | vtyxr | vt/yxr | vtiyxr | vtiyxr | וְתֵעָצַר | BOUND | ותעצר that may be contained |
| 13 | 2 Kings 17:4(20) | yxr | viyxrhv | vi/yxr\hv | viyxrhv | viyxrhu | וַיַּעַצְרֵהוּ | BOUND | ויעצרהו so detained him |
| 14 | Job 4:2(5) | yxr | vyxr | v/yxr | vyxvr | vyxor | וַעְצֹר | BOUND | ועצר but contain |
| 15 | Psalms 106:30(4) | yxr | vtyxr | vt/yxr | vtiyxr | vtiyxr | וַתֵּעָצַר | BOUND | ותעצר and was contained |
| 16 | Job 12:15(2) | yxr | iyxr | i/yxr | iyxvr | iyxor | יַעְצֹר | BOUND | יעצר he contains |
| 17 | 2 Chronicles 2:5(2) | yxr | iyxr | i/yxr | iyxvr | iyxor | יַעֲצָר | BOUND | יעצר contains |
| 18 | 1 Kings 18:44(19) | yxr | iyxrch | i/yxr\ch | iyxvrç | iyxorc | יַעַצָרְכָה | BOUND | יעצרכה detain you |
| 19 | 2 Chronicles 22:9(27) | yxr | lyxr | l/yxr | lyxvr | lyxor | לַעְצֹר | BOUND | לעצר to coerce |
| 20 | Psalms 107:39(3) | yxr | myxr | m/yxr | myvxr | myoxr | מֵעֹצֶר | BOUND | מעצר through coercion of |
| 21 | Proverbs 25:28(8) | yxr | myxr | m/yxr | myxr | myxr | מַעְצָר | BOUND | מעצר containment |
| 22 | 2 Chronicles 7:9(4) | yxr | yxrt | yxr\t | yxrt | yxrt | עֲצָרֶת | COVENANT | עצרת a conclave |
| 23 | Joel 1:14(4) | yxr | yxrh | yxr\h | yxrh | yxrh | עֲצָרָה | COVENANT | עצרה a conclave |
| 24 | Joel 2:15(7) | yxr | yxrh | yxr\h | yxrh | yxrh | עֲצָרָה | COVENANT | עצרה a conclave |
| 25 | 2 Kings 10:20(4) | yxr | yxrh | yxr\h | yxrh | yxrh | עֲצָרָה | COVENANT | עצרה a conclave |
| 26 | Job 29:9(2) | yxr | yxrv | yxr\v | yxrv | yxru | עָצְרוּ | BOUND | עצרו contained |
| 27 | 2 Chronicles 20:37(19) | yxr | yxrv | yxr\v | yxrv | yxru | עָצְרוּ | BOUND | עצרו could be coerced |
| 28 | 2 Kings 14:26(10) | yxr | yxvr | yxvr | yxvr | yxur | עָצוּר | BOUND | עצור coercion |
| 29 | Jeremiah 36:5(7) | yxr | yxvr | yxvr | yxur | yxur | עָצוּר | BOUND | עצור am detained |
| 30 | 1 Chronicles 12:1(7) | yxr | yxvr | yxvr | yxvr | yxur | עָצוּר | BOUND | עצור he contained himself |
| 31 | 1 Kings 14:10(12) | yxr | yxvr | yxvr | yxvr | yxur | עָצוּר | BOUND | עצור those who are contained |
All that is pretty straightforward. But what are the real rules?
Some of my rules are miles-long conditions: strings of stems with prefix and suffix combinations, and the occasional appeal to an odd Unicode value.
The 'rules' are long for when to render hireq as /i/.
200 distinct stems which do not contain a yod are allowed to render hireq as yod without exception.
Another 281 allow hireq as yod on an exception basis.
Only 7 stems containing a yod disallow hireq as yod.
Another 46 disallow it on an exception basis.
Hireq is usually ignored in a closed syllable - but there are exceptions and exceptions by word form to the exceptions (only 8 stems).
Hireq is rendered as /ii/ for several reasons: when the prefix is i, and occasionally for tsere, patah, and qamats. What! These also are rendered as o or v sometimes. Who can know?
Any ideas?
I have been debating whether to design a data table for the rule combinations. I don't want to do it unless I cannot simplify the code. Last time I asked I discovered several simplifications. Looking for more...
Here's the rule for qamats becoming vav (RTL) or /o/ (SIM): I have converted the code to 'English'
Unconditionally for stems ahl, anih, arc, avn, bvw, ctl, grn, iq+n, iqwn, irqym, nvh, rnn, yziali, zvh,
(+ is my single letter internal abbreviation for tet ט.)
or the stem is +rk and prefix suffix combination in (none, cm)
or the stem is +hr and prefix suffix combination in (none, t, c, t)
or the stem is abd and prefix suffix combination in (b, n)
or the stem is acl and prefix suffix combination in (l, h, b, nu)
or the stem is adm and v_prefix = m and the first part of the word is m with a schwa
or the stem is amn and the word form is amn and there is no tsere under the second stem letter
or prefix suffix combination = b, h
or the stem is ark and v_suffix in (ti, vt)
or the stem is bit and v_suffix in (icm)
or the stem is bzz and prefix suffix combination in (i, vm)
or the stem is cvl and prefix suffix combination in (v, clu)
or the stem is csh and prefix suffix combination in (none, u)
or the stem is dbr and prefix suffix combination in (b, c)
or the stem is gal and prefix suffix combination in (l, c)
or the stem is gbh and prefix suffix combination = l, h
or the stem is gml and prefix suffix combination in (none, h, none, c)
or the stem is hrh and prefix suffix combination in (vh, tihm)
or the stem is ivn and prefix suffix combination in (none, none)
or the stem is iwr and prefix suffix combination in (none, o, b, o)
or the stem is imn and prefix suffix combination in (m, none)
or the stem is isd and prefix suffix combination in (b, i)
or the stem is knn and not (suffix in (u) and there's a qamats under the second letter) and the domain is not in (PERSON, LOCATION)
or the stem is kq and v_suffix not in (u)
or the stem is kgg and prefix suffix combination in (none, i)
or the stem is krb and prefix suffix combination in (none, h, l, h)
or the stem is krm and prefix suffix combination in (none, h, vb, h)
or the stem is kwc and prefix suffix combination in (none, i)
or the stem is kzq and prefix suffix combination in (b, h, b, nu)
or the stem is lcd and prefix suffix combination in (l, h)
or the stem is mlc and prefix suffix combination in (l, o, b, o, c, o)
or the stem is mvt and prefix suffix combination in (h, h)
or the stem is pyl and prefix suffix combination in (c, h)
or the stem is q+n and prefix suffix combination in (none, i)
or the stem is qdw and v_suffix in (im, iv)
or the stem is qra and prefix suffix combination in (none, nu, none, ic, none, im)
or the stem is qrk and prefix suffix combination is none, h
or the stem is rb and prefix suffix combination in (b, none)
or the stem is sll and prefix suffix combination in (none, vh)
or the stem is rbb and prefix suffix combination in (v, none)
or the stem is rbb and prefix suffix combination in (hb, none)
or the stem is rkb and prefix suffix combination not in (none, none)
or the stem is rkx and prefix suffix combination in (l, h)
or the stem is tmm and prefix suffix combination in (b, none) and the first two letters of the word are not m
or the stem is wby and prefix suffix combination in (m, h, l, h)
or the stem is wcb and prefix suffix combination in (b, o)
or the stem is wcr and prefix suffix combination is l, h
or the stem is wdd and prefix suffix combination in (i, m)
or the stem is wmr and prefix suffix combination in (a, none, vl, h, none, h)
or the stem is wp+ and prefix suffix combination in (none, h)
or the stem is wr and suffix in (c, rc)
or the stem is wrw and prefix suffix combination in (none, ih)
or the stem is xrp and prefix suffix combination in (none, h)
or the stem is ybd and prefix suffix combination in (l, h)
or the stem is yni and prefix suffix combination in (none, nu)
or the stem is yzz and prefix suffix combination in (none, i)
or the stem is zcr and prefix suffix combination in (b, nu)
or the stem is yzr and prefix suffix combination in (l, ni, l, u, none, ni, none, nu)
then
the qamats under the first letter of the stem becomes /o/.
And by the way if the stem is qdw and v_suffix in (im) then the schwa under the first letter becomes /o/!
(or maybe only v)
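The shape of a rule like the one above, translated back from the 'English' into code, might look like the following PL/SQL fragment. The function name, parameter names, and the fact that only a handful of the stem conditions are shown are all my assumptions, not the author's actual program:

```sql
-- Hypothetical sketch: TRUE when the qamats under the first stem
-- letter should be rendered as /o/ (vav in RTL, /o/ in SimHebrew).
CREATE OR REPLACE FUNCTION qamats_is_o (
  p_stem   VARCHAR2,
  p_prefix VARCHAR2,
  p_suffix VARCHAR2
) RETURN BOOLEAN IS
BEGIN
  RETURN p_stem IN ('ahl','anih','arc','avn','bvw','ctl','grn',
                    'iq+n','iqwn','irqym','nvh','rnn','yziali','zvh')
      -- stem-specific prefix/suffix pairs, a few of the many listed above:
      OR (p_stem = 'abd' AND p_prefix = 'b'  AND p_suffix = 'n')
      OR (p_stem = 'ark' AND p_suffix IN ('ti', 'vt'))
      OR (p_stem = 'lcd' AND p_prefix = 'l'  AND p_suffix = 'h')
      -- ... dozens more stem/prefix/suffix combinations ...
      ;
END qamats_is_o;
/
```

A data-driven rule table, as the author contemplates, would replace this wall of conditions with rows keyed on (stem, prefix, suffix).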
300+ TOP Memory Management Interview Questions and Answers
Memory Management Interview Questions for freshers and experienced candidates:
1. What is the significance of having a storage clause?
We can plan the storage for a table: how many initial extents are required, by how much it can be extended next, what percentage should be left free for managing future row updates, and so on.

2. What is the functionality of the SYSTEM tablespace?
To manage database-level operations such as modifications of the data dictionary tables that record information about free-space usage.

3. How does space allocation take place within a block?
Each block contains the following entries: a fixed block header, a variable block header, row headers and row data (multiple rows may exist), and PCTFREE (the percentage of free space reserved for future row updates).

4. What is the role of the PCTFREE parameter in the storage clause?
It is used to reserve a certain amount of space in a block for the expansion of rows.

5. What is the OPTIMAL parameter?
The OPTIMAL parameter is used to avoid space wastage in rollback segments.

6. What is a shared pool?
The shared pool environment contains both fixed and variable structures. The fixed structures remain relatively the same size, whereas the variable structures grow and shrink based on user and program requirements. It is used to store the most recently executed SQL statements and the most recently used data definitions. It consists of two key performance-related memory structures: the library cache and the data dictionary cache. The shared pool is sized by SHARED_POOL_SIZE.

7. What is meant by Program Global Area (PGA)?
It is an area in memory that is used by a single Oracle user process.

8. What is a data segment?
Data segments are the physical areas within a database block in which the data associated with tables and clusters is stored.

9. What are the factors causing the reparsing of SQL statements in the SGA?
An insufficient shared SQL pool size. Monitor the ratio of reloads that take place while executing SQL statements; if the ratio is greater than 1, increase SHARED_POOL_SIZE.

10. What are database buffers?
Database buffers are caches in the SGA used to hold the data blocks that are read from the data segments in the database, such as tables, indexes and clusters. The DB_BLOCK_BUFFERS parameter in INIT.ORA decides the size.
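Questions 1 and 4 above can be made concrete with a storage clause. This is a hedged sketch using the classic dictionary-managed syntax; the table name and all the values are illustrative only:

```sql
CREATE TABLE emp_history (
  emp_id    NUMBER,
  change_dt DATE
)
PCTFREE 20              -- keep 20% of each block free for row growth
PCTUSED 40              -- block returns to the free list below 40% usage
STORAGE (
  INITIAL     1M        -- size of the first extent
  NEXT        512K      -- size of subsequent extents
  MINEXTENTS  1
  MAXEXTENTS  UNLIMITED
  PCTINCREASE 0
);
```

In locally managed tablespaces most of the STORAGE settings are ignored, but PCTFREE remains relevant to row expansion.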
11. What is the dictionary cache?
The dictionary cache holds information about the database objects stored in the data dictionary tables.

12. Which parameter in the storage clause will reduce the number of rows per block?
The PCTFREE parameter. Row size also reduces the number of rows per block.

13. What is meant by a free extent?
A free extent is a collection of contiguous free blocks in a tablespace. When a segment is dropped, its extents are deallocated and marked as free.

14. How will you force the database to use a particular rollback segment?
For a particular transaction: SET TRANSACTION USE ROLLBACK SEGMENT name; For the database as a whole, we can set ROLLBACK_SEGMENTS = (name) in the pfile.

15. How can we organize the tablespaces in an Oracle database for maximum performance?
Store data in tablespaces so as to avoid disk contention: the SYSTEM tablespace for recursive calls, user data in its own tablespace, an index tablespace for indexes, an undo tablespace (or manual rollback segments) for undo, and application-specific data in its respective tablespaces. Place these tablespaces on separate disks. Try to implement RAID-0 (striping) for better performance.

16. How will you swap objects into a different tablespace for an existing database?
Export the user. Perform an import using the command imp system/manager file=export.dmp indexfile=newfile.sql. This creates all the definitions in newfile.sql. Drop the necessary objects, run the script newfile.sql after altering the tablespaces, then import the necessary objects from the backup.

17. What is the redo log buffer?
Changes made to entries are written to the online redo log files so that they can be used in roll-forward operations during database recoveries. Before being written to the redo log files, they are first brought to the redo log buffer in the SGA, and LGWR writes them to the files frequently. The LOG_BUFFER parameter decides the size.

18. What is meant by recursive hints?
The number of times processes repeatedly query the dictionary tables is called recursive hints. It occurs because the data dictionary cache is too small. By increasing the SHARED_POOL_SIZE parameter we can optimize the size of the data dictionary cache.

19. How can we plan storage for very large tables?
Limit the number of extents in the table, separate the table from its indexes, and allocate sufficient temporary storage.

20. How will you estimate the space required by a non-clustered table?
Calculate the total header size, the available data space per data block, the combined column lengths of the average row, the total average row size, the average number of rows that can fit in a block, and finally the number of blocks and bytes required for the table. After arriving at the calculation, add 10% additional space to calculate the initial extent size for a working table.

21. Is it possible to use raw devices as data files, and what are the advantages over file system files?
Yes. The advantages over file system files: I/O is improved because Oracle bypasses the kernel when writing to disk, and disk corruption is much less likely.

22. What is a control file?
The control file is a small binary file necessary for the database to start and operate successfully. Each control file is associated with only one Oracle database. Before a database is opened, the control file is read to determine whether the database is in a valid state to use. If the control file is not accessible, the database does not function properly.

23. How will you monitor rollback segment status?
By using the dictionary views v$rollstat and dba_rollback_segs.

24. How will you monitor space allocation?
This can be monitored in dba_data_files.

25. Why do queries sometimes fail?
Due to syntax errors.

26. How does space utilization take place within rollback segments?
By correctly fixing the OPTIMAL size.

27. How will you create multiple rollback segments in a database?
CREATE ROLLBACK SEGMENT roll1 TABLESPACE roll1;

28. What is a rollback segment entry?
Whenever changes happen to the database, the previous value is kept in the rollback segment.

29. What is hit ratio?
Hit ratio is the ratio of shared SQL and PL/SQL items found in the library cache versus physical storage. It can also be defined mathematically as 1 - ((physical reads) / (db block gets + consistent gets)).

30. What are the disadvantages of having raw devices?
We must depend on the export/import utility for backup and recovery. The tar command cannot be used for physical file backup; instead we can use the dd command, which is less flexible and has limited recovery options.

31. What is the use of rollback segments in an Oracle database?
When a user updates a table (for example, 100 rows), the old values are retained in the rollback segments (Oracle 8; in Oracle 9i this is the undo segment). If the user issues a ROLLBACK command, the old values are taken from the rollback segment (provided the UNDO_RETENTION parameter is set properly in the parameter file).

32. What is the advantage of having disk shadowing/mirroring?
The shadow set of disks serves as a backup in the event of disk failure; in most operating systems, if a disk failure occurs the system automatically switches over to the shadow of the failed disk. Performance is also improved, because most operating systems that support volume shadowing can direct file I/O requests to the shadow set of files instead of the main set, reducing I/O load on the main set of disks.

33. How are redo logs written?
The LGWR process writes all change vectors from the redo log buffer to the online redo log file sequentially.

34. What is redo log file mirroring?
Multiplexing the redo log file is called mirroring (keeping multiple copies on different disks).

35. How do you implement multiple control files for an existing database?
Edit the init.ora file and set the CONTROL_FILES parameter with multiple locations; shutdown immediate; copy the control file to the multiple locations and confirm them against the init.ora CONTROL_FILES parameter; start the database; then run this query to confirm the change: select name from v$controlfile;

36. What is the SGA? How is it different between version 6.0 and version 7.0?
The System Global Area in an Oracle database is the area in memory that facilitates the transfer of information between users. It holds the most recently requested structural information about the database. The structures are the database buffers, dictionary cache, redo log buffer, and (in version 7.0 only) the shared SQL pool.

37. What is a shared SQL pool?
The data dictionary cache is stored in an area of the SGA called the shared SQL pool. This allows sharing of parsed SQL statements among concurrent users.

38. What is meant by Program Global Area (PGA)?
It is an area in memory that is used by a single Oracle user process.

39. List the factors that can affect the accuracy of space estimations.
The space used by transaction entries and deleted records does not become free immediately after completion, due to delayed cleanout. Trailing nulls and length bytes are not stored. Inserts of, updates to, and deletes of rows, as well as columns larger than a single data block, can cause fragmentation and chained row pieces.

40. What are the different kinds of export backups?
Full backup: the complete database. Incremental: only tables affected since the last incremental/full backup date. Cumulative backup: only tables affected since the last cumulative/full backup date.

41. What is a cold backup? What are its elements?
A cold backup is a backup of all physical files taken after a normal shutdown of the database. We need to take: all data files, all control files, all online redo log files, and the init.ora file (optional).

42. What is a logical backup?
A logical backup involves reading a set of database records and writing them into a file. The Export utility is used for taking the backup and the Import utility is used to recover from it.

43. What is a hot backup and how can it be taken?
A hot backup is taken while the database is open. For this, ARCHIVELOG mode should be enabled. The following files need to be backed up: all data files, all archived redo log files, and all control files.

44. What is the use of the FILE option in the EXP command?
To give the export file name.

45. What is the use of the GRANTS option in the EXP command?
A flag to indicate whether grants on database objects will be exported or not. The value is 'Y' or 'N'.

46. What is the use of the INDEXES option in the EXP command?
A flag to indicate whether indexes on tables will be exported.

47. What is the use of the ROWS option in the EXP command?
A flag to indicate whether table rows should be exported. If 'N', only the DDL statements for the database objects will be created.

48. What is the use of the PARFILE option in the EXP command?
The name of the parameter file to be passed to export.

49. What is the use of the ANALYZE (Ver 7) option in the EXP command?
A flag to indicate whether statistical information about the exported objects should be written to the export dump file.

50. What is the use of the FULL option in the EXP command?
A flag to indicate whether a full database export should be performed.

51. What is the use of the OWNER option in the EXP command?
The list of user accounts whose tables should be exported.

52. What is the use of the TABLES option in the EXP command?
The list of tables that should be exported.

53. What is the use of the RECORDLENGTH option in the EXP command?
The record length in bytes.

54. What is the use of the INCTYPE option in the EXP command?
The type of export to be performed: COMPLETE, CUMULATIVE or INCREMENTAL.

55. What is the use of the RECORD option in the EXP command?
For incremental exports, this flag indicates whether a record will be stored in the data dictionary tables recording the export.

56. What is the use of the ROWS option in the IMP command?
A flag to indicate whether rows should be imported. If this is set to 'N', then only the DDL for the database objects will be executed.

57. What is the use of the INDEXES option in the IMP command?
A flag to indicate whether import should create indexes on tables or not.

58. What is the use of the GRANTS option in the IMP command?
A flag to indicate whether grants on database objects will be imported.

59. What is the use of the SHOW option in the IMP command?
A flag to indicate whether the file contents should be displayed rather than executed.

60. What is the use of the FILE option in the IMP command?
The name of the file from which the import should be performed.

61. What is the use of the LOG (Ver 7) option in the EXP command?
The name of the file to which a log of the export will be written.
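Questions 14 and 27 in the list above can be illustrated together. A sketch using the classic (pre-undo-tablespace) rollback segment syntax; the segment and tablespace names are invented:

```sql
-- Q27: create a rollback segment in its own tablespace, then bring it online.
CREATE ROLLBACK SEGMENT roll1 TABLESPACE roll_ts;
ALTER ROLLBACK SEGMENT roll1 ONLINE;

-- Q14: pin a particular transaction to that segment.
SET TRANSACTION USE ROLLBACK SEGMENT roll1;
```

On modern versions with automatic undo management this is handled by the undo tablespace instead.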
MySQL InnoDB Cluster Disaster Recovery contingency via a Group Replication Replica
Just recently, I have been asked to look into what a Disaster Recovery site for InnoDB Cluster would look like. If you're reading this, then I assume you're familiar with what MySQL InnoDB Cluster is, how it is configured, its components, etc.

Reminder: InnoDB Cluster (Group Replication, Shell & Router) in version 8.0 has had serious improvements over 5.7. Please try it out.

So, given that we want to consider how best to fulfill the need, i.e. create a DR site for our InnoDB Cluster, let's get started. Basically I'll be looking at the following scenario: an InnoDB Cluster source site with a Group Replication Disaster Recovery site.

Now, just before we get into the nitty-gritty, here's the scope. Life is already hard enough, so we want as much automated as possible. InnoDB Cluster gets some of that done, but there are other parts we will still have to assume control over, so here's the idea:

Source site:
– InnoDB Cluster x3
– Router: accesses source (dynamic) & DR (static)
– Full automation

DR site:
– Group Replication x2
– Router: accesses source (dynamic) & DR (static)
– Configured asynchronous replication from the primary node

External / LBR 3rd site:
– MySQL Router (static) routing connections to the source Router
– Allows for 2-instance automatic failover at the source site
– Manually reroute connections to the DR site once a source-site outage is confirmed, or use a different Router port
– Redundancy recommended to reduce SPOF

Let's get it all started then. First things first, if you're not quite sure what you're doing, then I highly suggest looking at the following:

https://thesubtlepath.com/mysql/innodb-cluster-managing-async-integration/

https://scriptingmysql.wordpress.com/2019/03/29/replicating-data-between-two-mysql-group-replication-sets-using-regular-asynchronous-replication-with-global-transaction-identifiers-gtids

(And thanks to Andrew & Tony for their continuous help!)

Let's set up our source site: 3x CentOS 7 servers, with 8.0.20 Server, Shell & Router.
:: All servers have had their SELinux, firewalls, ports, /etc/hosts and so on checked and validated, haven't they? We will want to be using a private IP between all hosts, for obvious reasons.

I downloaded all the MySQL Enterprise rpm's from https://edelivery.oracle.com and ran the following on all 3 servers (getting mystic here: "centos01", "centos02" & "centos03"):

sudo yum install -y mysql-*8.0.20*.rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p

Once we're in, we want to control what commands get into the binlogs, and the 'alter user' and 'create user' statements would cause us issues later on, hence "sql_log_bin = OFF" here:

SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;

This "ic@%" user will be used throughout, as we just don't know when and which instance will end up being the source primary and/or a clone, so by using this single user setup I keep things easier for myself. For your production setup, please look deeper into specific privileges and permissions.

mysqlsh --uri root@localhost:3306
dba.checkInstanceConfiguration('ic@centos01:3306')
dba.configureInstance('ic@centos01:3306');

Say "Y" to all the restarts and changes. On just one of the servers (this will be our single-primary instance):

\connect ic@centos01:3306
cluster=dba.createCluster("mycluster")
cluster.status()
cluster.addInstance("ic@centos02:3306")
cluster.addInstance("ic@centos03:3306")
cluster.status();

Now, you will see that, although we have installed the binaries and started up the instances on centos02 & centos03, "addInstance" goes and clones our primary. This makes life soooo much easier. HINT: Before cloning, if you need to provision data, do it beforehand, and we kill a couple of proverbial birdies here.
Now we have a 3-instance Single Primary InnoDB Cluster.

Setting up the local "InnoDB Cluster aware" Router:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
./myrouter/start.sh

Now Router is up and running. Let's set up our Group Replication Disaster Recovery site now. Here, I'm doing a naughty, and just setting up a 2-node Group Replication group. If you want to see how you should be doing it, then let me reference Tony again: https://scriptingmysql.wordpress.com/2019/03/28/mysql-8-0-group-replication-three-server-installation/

:: Again, ports, firewalls, SELinux, /etc/hosts and similar have been validated again? Please?

The environment: 2x Oracle Linux 7 servers. Version 8.0.20: MySQL Server, Router & Shell.

Do the following on both replica servers (olrep01 & olrep02):

sudo yum install -y mysql-*8.0.20*rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p

SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;

Now we have basic servers created, we need to clone the data from the Primary instance in our InnoDB Cluster. This means installing the mysql_clone plugin.

On the cluster Primary node:

SET sql_log_bin = OFF;
INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
GRANT BACKUP_ADMIN ON *.* to 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* to 'ic'@'%';
SET sql_log_bin = ON;

Now on each replica:

INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = 'centos02:3306';
GRANT CLONE_ADMIN ON *.* to 'ic'@'%'; # Keep this, if we ever want to clone from any of the replicas
GRANT BACKUP_ADMIN ON *.* to 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* to 'ic'@'%';

Now to execute the clone operation:

set global log_error_verbosity=3;
CLONE INSTANCE FROM 'ic'@'centos02':3306 IDENTIFIED BY 'passwd';

View how the clone process is going:

select STATE, ERROR_NO, BINLOG_FILE, BINLOG_POSITION, GTID_EXECUTED,
CAST(BEGIN_TIME AS DATETIME) as "START TIME",
CAST(END_TIME AS DATETIME) as "FINISH TIME",
sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))) as DURATION
from performance_schema.clone_status \G

As we're cloning, we might run into the duplicate UUID issue, so, on both of the replicas, force the server to have a new UUID:

rm /var/lib/mysql/auto.cnf
systemctl restart mysqld

Setting up the Group Replication config

On node 1:

vi /etc/my.cnf

# GR setup
server-id                         = 11
log-bin                           = mysql-bin
gtid-mode                         = ON
enforce-gtid-consistency          = TRUE
log_slave_updates                 = ON
binlog_checksum                   = NONE
master_info_repository            = TABLE
relay_log_info_repository         = TABLE
transaction_write_set_extraction  = XXHASH64
plugin_load_add                   = "group_replication.so"
group_replication                 = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot  = ON
group_replication_group_name      = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address   = '10.0.0.41:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds     = '10.0.0.41:33061,10.0.0.42:33061'

On the 2nd node:

server-id                         = 22
log-bin                           = mysql-bin
gtid-mode                         = ON
enforce-gtid-consistency          = TRUE
log_slave_updates                 = ON
binlog_checksum                   = NONE
master_info_repository            = TABLE
relay_log_info_repository         = TABLE
transaction_write_set_extraction  = XXHASH64
plugin_load_add                   = "group_replication.so"
group_replication                 = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot  = ON
group_replication_group_name      = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address   = '10.0.0.42:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds     = '10.0.0.41:33061,10.0.0.42:33061'

Restart both servers:

systemctl restart mysqld

Check they're in a GR group and the plugin is ok:

mysql -uroot
SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_members \G
select * from information_schema.plugins where PLUGIN_NAME = 'group_replication' \G

Now to create the recovery replication channel on all servers (although this is for a single primary setup, the source could fail and then come back as a Read-Only replica, so we need to set this up):

CHANGE MASTER TO MASTER_USER='ic', MASTER_PASSWORD='passwd' FOR CHANNEL 'group_replication_recovery';

On Server 1:

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

On Server 2:

START GROUP_REPLICATION;

Check each server's super_read_only mode and whether they're in a group:

select @@super_read_only;
SELECT * FROM performance_schema.replication_group_members;

Set up Router on one of the replicas, bootstrapping against the InnoDB Cluster source:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
cd /opt/mysql/myrouter
./start.sh

Check connectivity:

mysql -uic -P6446 -h olrep01 -N -e "select @@hostname, @@port"
mysql -uic -P6446 -h centos02 -N -e "select @@hostname, @@port"

Replication config for replicas

Setting up replication for the replica from the primary means that we MUST be prepared to script the following for when the primary fails in the group replication setup, or keep the following command at hand, as it isn't automatic so we'll need to control this ourselves.
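Backing up a step for a moment: the DURATION column in the clone_status query earlier multiplies a plain seconds delta by 10^12 because sys.format_time() expects picoseconds. A rough Python equivalent of that scaling (deliberately simplified; MySQL's exact output format differs):

```python
def format_time(picoseconds):
    """Loosely mimic sys.format_time(): scale a picosecond value
    (hence the POWER(10,12) in the query) into a readable unit.
    Sub-second values just fall through as raw picoseconds here."""
    units = [("w", 604_800), ("d", 86_400), ("h", 3_600), ("m", 60), ("s", 1)]
    seconds = picoseconds / 10**12
    for suffix, size in units:
        if seconds >= size:
            return f"{seconds / size:.2f} {suffix}"
    return f"{picoseconds} ps"

# A 90-second clone, expressed in picoseconds as the query does:
print(format_time(90 * 10**12))  # -> 1.50 m
```

Useful if you are scraping clone_status into your own monitoring and want the same human-readable durations.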
Also, albeit a Group Replication group, when the primary fails on the DR site, we will also need to check all connections and sessions to the GR DR site to make sure we can set this up safely.

So, from the PRIMARY instance only, execute the following:

CHANGE MASTER TO MASTER_HOST = 'olrep01', MASTER_PORT = 6446, MASTER_USER = 'ic', MASTER_PASSWORD = 'passwd', MASTER_AUTO_POSITION = 1 FOR CHANNEL 'idc_gr_replication';

As you can see, this is replicating via the GR DR site Router that is bootstrapped against the InnoDB Cluster. Hence when the primary on the cluster moves, Router will take us to the new primary without having to re-create replication or similar.

Now let's start it all up:

start slave;

And have a look at our InnoDB Cluster source site replicating data to our DR Group Replication site:

show slave status for channel 'idc_gr_replication' \G

Application Connectivity

Now we have an HA solution for our data, what about connecting to one and the other? We have 2 routers, each local to each data-saving site, fine. But, if we're being realistic, we have 1 data entry point, the InnoDB Cluster Primary instance on centos01, and then 4 copies of the data, syncing nicely over a (dedicated? please?) local network to ensure we're as safe as possible.

As you'll probably have noticed, upon bootstrapping Router on both sides, it chooses routing_strategy=first-available on the default port 6446, which is fine. And yes, this port can be changed if we want to, so feel free to adjust as required. But, depending on where the application is, and the user entry point, you'll have a VIP, floating IP or something similar helping to load balance maybe. But here is where we start to ask more questions: What happens when the Source site goes down?

Well, there are a number of things to keep in mind. The local Routers will take care of instance failures, and here, I feel obliged to forewarn you all about what happens when we lose 2 out of 3 instances on the source site.
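Since losing 2 out of 3 is really a loss-of-majority problem, the check you would script around replication_group_members is just a head count; a toy Python sketch (hostnames and states below are made up):

```python
def has_quorum(members):
    """Given (host, state) rows like those in
    performance_schema.replication_group_members, report whether a
    majority of members are ONLINE, i.e. the group can make progress."""
    online = sum(1 for _host, state in members if state == "ONLINE")
    return online > len(members) // 2

# A 2-node group (like the naughty DR setup above) loses quorum the
# moment one member dies, which is exactly why 3 nodes are recommended:
print(has_quorum([("olrep01", "ONLINE"), ("olrep02", "UNREACHABLE")]))  # -> False
print(has_quorum([("olrep01", "ONLINE"), ("olrep02", "ONLINE")]))       # -> True
```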
Here, the last instance could be configured with group_replication_exit_state_action='OFFLINE_MODE' or ABORT, for example. That way, it doesn't get used as a valid node via Router.

Now, once that has happened, i.e. the source site goes down, we need to manually intervene, make sure that everything is really down, and kick the DR site into effect. Now both Routers on the Source & DR sites that were bootstrapped to the InnoDB Cluster will fail of course. We are left with the Group Replication DR site. So what do we do here?

Well, create an additional Router, configured with a static (non-dynamic) setup, as we don't have the metadata that the IdC-aware Router had via the mysql_innodb_cluster_metadata schema that gets stored in "dynamic_state=/opt/mysql/myrouter/data/state.json".

What I did was create a new Router process on one of the other servers (instead of centos02 & olrep01, on centos01 & olrep02). Now this isn't something you'd do in Production, as we want Router close to the app, not the MySQL instances. So in effect you could just use a new path, e.g.:

mkdir -p /opt/mysql/myrouter7001
chown -R mysql:mysql /opt/mysql/myrouter7001
cd /opt/mysql/myrouter7001

vi mysqlrouter.conf

[routing:DR_rw]
bind_address=0.0.0.0
bind_port=7001
destinations=olrep01:3306,olrep02:3306
routing_strategy=first-available
protocol=classic

Once configured, start them up:

mysqlrouter --user=mysql --config /opt/mysql/myrouter7001/mysqlrouter.conf &

Test:

mysql -uic -P7001 -h olrep02 -N -e "select @@hostname, @@port"
mysql -uic -P7001 -h centos01 -N -e "select @@hostname, @@port"

And this will give us back olrep01, then when that fails, olrep02. When the InnoDB Cluster-bootstrapped Router that listens on 6446 fails, we know to redirect all requests to port 7001 on our production IPs (in my case, olrep02 & centos01).

Summing up, we've now got 4 routers running: 2 are InnoDB Cluster aware, listening on 6446, and the static GR DR Routers are listening on 7001.
That's a lot of ports and a long connect string. Maybe we could make this simpler? Ok, well I added yet another Router process, high level. This Router process, again, has a hard-coded static configuration, because we don't have InnoDB Cluster-aware Routers on both sides, which would mean having a mysql_innodb_cluster_metadata schema on both sides, fully aware of both InnoDB Clusters, and also fully aware that metadata changes on the source are NOT to be replicated across. Who knows.. maybe in the future…

So in my case I created another Router process configuration on centos03:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql/myrouter

(You might want to use something different, but I did this. I could have all of these on the same server, using different paths. There isn't any limit to how many Router processes we have running on the same server. It's more a "port availability" or should I say "network security" concern.)

vi mysqlrouter.conf:

[routing:Source_rw]
bind_address=0.0.0.0
bind_port=8008
destinations=centos02:6446,olrep01:6446
routing_strategy=first-available
protocol=classic

[routing:DR_rw]
bind_address=0.0.0.0
bind_port=8009
destinations=centos01:7001,olrep02:7001
routing_strategy=first-available
protocol=classic

Just in case I've lost you a little, what I'm doing here is providing a Router access point via centos03:8008 to the InnoDB-Cluster-aware Routers on centos02:6446 and olrep01:6446, and then an additional / backup access point to the static Routers on centos01:7001 and olrep02:7001. The app would then have a connect string similar to "centos03:8008,centos03:8009".

Now, I know to send Source site connections to centos03:8008 always. And when I get errors and notifications from monitoring that this is giving problems, I can use my backup connection to centos03:8009.
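Router's routing_strategy=first-available, and the two-tier fallback just described, boil down to walking an ordered list and taking the first destination that answers. A tiny Python sketch of the idea (the reachability probe is stubbed out, since that part depends on your monitoring):

```python
def first_available(destinations, is_up):
    """Mimic routing_strategy=first-available: hand back the first
    destination that answers, or None when everything is down."""
    for dest in destinations:
        if is_up(dest):
            return dest
    return None

# The app's connect string, highest-priority entry first:
tiers = ["centos03:8008",   # via the InnoDB-Cluster-aware Routers
         "centos03:8009"]   # via the static GR DR Routers

up = {"centos03:8009"}      # pretend the whole source site is gone
print(first_available(tiers, lambda d: d in up))  # -> centos03:8009
```

Swap the stub for a real TCP probe and you have the skeleton of the "monitoring tells me to fail over to 8009" logic described above.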
Side note: I originally set up a single Router process on each side, with a mix of an InnoDB Cluster aware entry and an additional static configuration. This didn't work well because, having bootstrapped against the IdC, the dynamic_state entry only allows for the members taken from the mysql_innodb_cluster_metadata schema. So it won't work; hence the additional Router processes. But if you think about it, making each Router more modular and dedicated makes for a more atomic solution, albeit a little more to administer.

However, maybe instead of this high-level Router process listening on 8008 & 8009, you want to reuse your keepalived solution. Have a look at Lefred's "MySQL Router HA with Keepalived" solution for that.

And there you have it! This was all done on Oracle Cloud Infrastructure compute instances, opening ports internally within the Security List and not much else.

https://mysqlmed.wordpress.com/2020/06/11/mysql-innodb-cluster-disaster-recovery-contingency-via-a-group-replication-replica/
Text
Twilight zone: Transition period offers a temporary safe harbour
Brexit day will transform the relationship with Europe, shift the UK's place in the world and herald a new era in international relations. But little to nothing will change immediately. The post-Brexit transition period will pick up where EU membership left off, offering a temporary safe harbour while Brussels and London haggle over their long-term relationship. During that 11-month spell, the UK will keep many of the benefits and obligations of membership: British nationals will no longer be EU citizens, but they will be able to travel around the union as freely as before; British members of the European Parliament will leave, but the UK will remain a fully integrated part of the single market. The realities of Brexit will come sharply into focus as the clock ticks down to the end of this year, when the transition period will expire. In the meantime, migrant workers have an opportunity to navigate the application processes that will safeguard their rights for life, and businesses have some time to adapt supply chains.

What happens on February 1? The UK will enter a twilight period during which it will continue to apply and be bound by all EU laws but will be ejected from the EU's political institutions: no MEPs, no British seat at the EU leaders' table, no UK voice on the boards of the union's myriad technical agencies. Britain will have zero say over the EU rules that will still apply for the next 11 months. The European Commission will have the power to investigate breaches of the bloc's laws, and the European Court of Justice will have the power to impose fines. The UK will contribute to the EU budget. UK nationals who have joined the European civil service will have the right to work in it for the rest of their careers. Britain's diplomatic presence in Brussels, known as the UK Permanent Representation to the EU, will be rebranded and British officials' access to EU premises and information will be sharply curtailed.
Used to freely roaming the EU's corridors of power, British diplomats will now need to apply for permission even to enter the Brussels institutions' bars and cafés.

What does it mean for citizens? British and EU citizens will continue to benefit from free movement during the transition period. After transition, Britons living in the 27 EU member states will have their residency rights safeguarded, subject to completing whatever administrative procedures are imposed by the national government. It is up to each EU country to decide how to carry out that exercise, including whether to create a new type of residence status for UK nationals. EU27 countries have to provide a digital residence document to those with the right to remain. The divorce deal does not grant British expats full freedom of movement rights within the union. It guarantees rights for UK citizens in the EU27 country where they reside. This means that a British national living in, for example, Portugal will not necessarily be able to move to Poland to take up a new job. But the European Parliament has urged that UK citizens living in the EU should be granted free movement rights.

What about EU nationals living in the UK? The 3m EU citizens resident in the UK have until June 2021 to register for Britain's settled status scheme, which allows them to stay in the UK with existing rights after Brexit. The scheme is open to people who have been living in the country for at least five years. Those who have not been there long enough can apply for pre-settled status, which grants the right to live and work in Britain for up to five years. To be eligible, someone needs to arrive in the UK before the end of the transition period. The picture changes from the beginning of 2021, when Britain becomes a third country. From then, EU citizens coming to the UK may need visas and work permits if they plan to build a life here. Both Britain and the EU have said they want to put in place a visa-waiver scheme for short stays of up to 90 days.
What are the implications for business? Free movement of goods will continue during transition. But Britain will no longer be represented in the crucial technical work the EU carries out to decide which products can be sold in its market and under what conditions. The European Chemicals Agency, for example, has the power to decide whether a new substance is safe for consumers, while the European Banking Authority sets rules that affect compliance costs for financial services. Brexit day also sets the clock ticking for business. Companies have no idea what awaits them after the end of this year, because the UK and EU have yet to negotiate a new trade deal. Whatever is finally agreed, there will be a hard border for trade in goods between Britain and continental Europe that has not existed for decades. That puts pressure on business to adjust supply chains, relocate operations and analyse costs.
Scotland's first minister has called on the UK government to help create a post-Brexit "Scottish visa" that would allow easier entry for immigrants willing to commit to living north of the English border. The call from Nicola Sturgeon yesterday reflects concern about the economic and social implications of the end to freedom of movement for EU citizens, with experts saying it could lead to a fall in the Scottish working-age population over the next 25 years because of the slowing birth rate. Politicians in Scotland have been much more positive about the benefits of immigration than in other parts of the UK. Ms Sturgeon said she wanted a tailored approach for Scotland that would be as "open and flexible as possible". A one-size-fits-all UK system that sought to cut overall immigration would be "pretty disastrous" for Scotland, the first minister said. "I hope the UK government will be prepared to work with us to deliver a Scottish visa," she said. In proposals published yesterday, the Scottish government said the UK could retain overall control of immigration and border controls, but devolve to Edinburgh the power to create a new visa process subject to Home Office identity and security checks. The visa would allow immigrants to live and work in Scotland. The Home Office, which yesterday unveiled a "Global Talent" route into the UK for scientists and mathematicians, dismissed the suggestion that influence over visas might be devolved. "Immigration will remain a reserved matter," a Home Office spokesman said. "The UK government will introduce a points-based immigration system that works in the interests of the whole of the United Kingdom, including Scotland." UK officials are concerned that devolving or even merely tailoring policy for Scotland or other parts of the UK could massively complicate the immigration system. However, the Scottish government says international experience and the past success of the Scotland-only post-study work visa show that its proposals are practical.
An independent study last year concluded that Canada's immigration system showed how policy could be tailored to help Scotland head off a looming crisis over the balance between pensioners and working-age Scots. Andrew McRae, policy chair of the Federation of Small Businesses in Scotland, said: "The UK government should acknowledge that it is possible and desirable to enable its immigration system to respond to different regions and nations."
Text
DBA Interview Questions with Answers Part19
Why do we look for CHUNKS_FREE space while tracking fragmentation details?
CHUNKS_FREE returns the number of chunks of contiguous free space, based on the dba_free_space view. The motive is to find the largest chunks of free space within a tablespace. This is because, as we know, the Oracle server allocates space for segments in units of one extent. When the existing extent of a segment is full, the server allocates another extent for the segment. In order to do so, Oracle searches for free space in the tablespace (a contiguous set of data blocks sufficient to meet the required extent). If sufficient space is not found, an error is returned by the Oracle server.

What is the impact of NLS/Characterset in a database?
NLS is National Language Support and encompasses how to display currency, whether we use a comma or a dot to separate numbers, how the name of the day is spelled, and so on. Charactersets are how we store data. For example: US7ASCII is a 7-bit characterset and WE8ISO8859P1 an 8-bit characterset, which can store twice as many characters as the 7-bit characterset. If you try to export from an 8-bit characterset database and import into a 7-bit database, there is a chance of losing the data in the 8-bit characterset that has the high bit set. Going from 7-bit to 8-bit you would not encounter any issues, since the 7-bit characterset is a subset of the 8-bit characterset, which can hold more types of characters and can support many countries.

Can we perform an RMAN level 1 backup without a level 0?
If no level 0 is available, then the behaviour depends upon the compatibility mode setting (Oracle version). If the compatibility mode is less than 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup. If compatibility is greater than 10.0.0, RMAN copies all block changes since the file was created, and stores the result as a level 1 backup.

What will happen if the ARCH process cannot copy an archived redolog to a mandatory archive log destination?
Oracle will continue to cycle through the other online redolog groups until it returns to the group that the ARCH process is trying to copy to the mandatory archive log destination. If the mandatory archive log destination copy has not occurred, database operation will suspend until the copy is successful or the DBA has intervened to force log switching.

Can you differentiate between HOTBACKUP and RMAN backup?
For a hotbackup we have to put the database in begin backup mode and then take the backup, whereas RMAN does not put the database in begin backup mode. In fact RMAN has a number of advantages over a general backup. For more information please check: Benefit of RMAN Backup

How to catalog a manual/user-managed backup in RMAN?
In case of a recovery catalog, you can catalog it by using the catalog command:
RMAN> CATALOG START WITH '/oraback/backup.ctl';

When you submit any SQL statement, how does Oracle respond internally?
First it will check the syntax and semantics in the library cache, and after that it will create an execution plan. If the data is already in the buffer cache (in the case of an identical query) it will be returned directly to the client.
If not, the blocks are read into the database buffer cache, the server process fetches them, and finally the server sends the result to the client.

Can we use the same target database as the catalog?
No, the recovery catalog should not reside in the target database (the database to be backed up), because the database cannot be recovered in the mounted state.

What are PGA and UGA, and how do they differ in use?
When you are running a dedicated server, process information is stored inside the process global area (PGA); when you are using a shared server, the process information is stored inside the user global area (UGA).

How do you automatically force Oracle to perform a checkpoint?
The following are the parameters a DBA can use to adjust the time or interval of how frequently checkpoints occur in the database:
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks

What is a cluster table in an Oracle database?
A cluster is a schema object that contains one or more tables that all have one or more common columns. Rows of one or more tables that share the same value in these common columns are physically stored together within the database. Generally, you should only cluster tables that are frequently joined on the cluster key columns in SQL statements. Clustering multiple tables improves the performance of joins, but it is likely to reduce the performance of full table scans, and of INSERT and UPDATE statements that modify cluster key values.

Can you differentiate between complete and incomplete recovery?
An incomplete database recovery is a recovery that does not reach the point of failure. The recovery can be to a point in time, a particular SCN, or a particular archive log, especially in the case of a missing archive log or a redolog failure. A complete recovery recovers to the point of failure, possible when you have all the archive log backups.

What is the difference between RMAN and a traditional backup?
RMAN is faster, can perform incremental (changes only) backups, and does not place the tablespace in hotbackup mode. Check: Benefit of RMAN Backup

What are bind variables and why are they important?
With bind variables in SQL, Oracle can cache queries a single time in the SQL cache area. This avoids a hard parse each time, which saves on the various locking and latching resources we use to check object existence and so on.

How to recover a database without a backup?
If flash recovery is enabled, then we can recover the database without having a backup; otherwise we cannot recover a database without a backup.

How to write an explicit cursor to avoid the Oracle exceptions no_data_found and too_many_rows?
In PL/SQL, if you write a SELECT statement with an INTO clause, it may raise the two exceptions no_data_found and too_many_rows; to avoid these exceptions you have to write an explicit cursor, or handle them in an exception block:

EXCEPTION
  WHEN no_data_found THEN
    -- put your code
  WHEN too_many_rows THEN
    -- put your code
  WHEN others THEN
    -- put your code
END;

What are the differences between a reference cursor and a normal cursor?
A reference cursor gives the address of the location instead of putting the item directly. It can hold different types of structures, whereas a normal cursor holds one structure of a table. A reference cursor is a dynamic cursor whereas a normal cursor is a static cursor. In a dynamic cursor a single statement can process multiple SELECT statements dynamically at run time, whereas in a normal cursor we process only one SELECT statement.

What is a pipelined view?
In the case of normal views, whenever you call the view it gets data from the base table, whereas in the case of a pipelined view, when you call the view it gets data from another intermediate view.

How would you find performance issues in SQL queries?
– Enable the trace file before running your queries
– Then check the trace file using tkprof to create an output file
– According to the explain plan, check the elapsed time for each query
– Then tune them respectively

What is the difference between recovery and restoring of a database?
Restoring means copying the database objects from the backup media to the destination where they are actually required, whereas recovery means applying the database objects copied earlier (roll forward) in order to bring the database into a consistent state.

What are the jobs of the SMON and PMON processes?
SMON – System Monitor performs recovery after instance failure, monitors temporary segments and extents, cleans temp segments and coalesces free space. It is a mandatory process of the DB and starts by default.
PMON – Process Monitor cleans up failed process resources. In shared server architecture it monitors and restarts any failed dispatcher or server process. It is a mandatory process of the DB and starts by default.

When should you rebuild an index?
In fact, in 90% of cases, never. When the data in the index is sparse (lots of holes in the index, due to deletes and updates) and your queries are usually range based. Also, the index BLEVEL is one of the key indicators of performance of SQL queries doing index range scans.

What is a key preserved table?
A table is said to be key preserved if every key of the table can also be a key of the result of the join.
It guarantees to return only one copy of each row from the base table.

Which of the following is NOT an Oracle supported trigger?
A. Before
B. During
C. After
D. Instead of
Answer: B

Which of the following is NOT true about modifying table columns?
A. You can drop a column at any time.
B. You can add a column at any time as long as it is a NULL column.
C. You can increase the number of characters in character columns or the number of digits in numeric columns.
D. You cannot increase or decrease the number of decimal places.
Answer: D

How can you find the SQL of the currently active sessions?
Query V$SESSION (filtering on STATUS = 'ACTIVE') joined to V$SQL on the session's SQL_ID/SQL_ADDRESS.

How do you move a table from one tablespace to another tablespace?
You can use either of the methods below:
1. Export the table, drop the table, create the definition of the table in the new tablespace, and then import the data (imp ignore=y).
2. Create a new table in the new tablespace, then drop the original table and rename the temporary table with the original table name:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;
DROP TABLE real_table;
RENAME temp_name TO real_table;

For more interview preparation, click on the links:
DBA Interview Questions with Answers Part1, DBA Interview Questions with Answers Part2, DBA Interview Questions with Answers Part3, DBA Interview Questions with Answers Part4, DBA Interview Questions with Answers Part5, DBA Interview Questions with Answers Part6, DBA Interview Questions with Answers Part7, DBA Interview Questions with Answers Part8, DBA Interview Questions with Answers Part9, DBA Interview Questions with Answers Part10, DBA Interview Questions with Answers Part11, DBA Interview Questions with Answers Part12, DBA Interview Questions with Answers Part13, DBA Interview Questions with Answers Part14, DBA Interview Questions with Answers Part15, DBA Interview Questions with Answers Part16, DBA Interview Questions with Answers Part17, Basic SQL Question & Answer Session, Discussion on SCN and Checkpoint, Discussion: Import/Export Utility, Points to Remember before an Online Interview
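The bind-variable answer above is easy to demonstrate outside Oracle too. A small sketch using Python's stdlib sqlite3 (the `?` placeholder syntax differs from Oracle's `:name` style, but the principle, one reusable statement text instead of many literal-laden ones, is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")

# One statement text, many bind values: the SQL never changes, so the
# engine can reuse the parsed statement instead of hard parsing a new
# literal-laden statement for every row.
stmt = "INSERT INTO emp (id, name) VALUES (?, ?)"
for row in [(1, "scott"), (2, "tiger")]:
    conn.execute(stmt, row)

print(conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0])  # -> 2
```

In Oracle PL/SQL you get the same effect automatically when you use PL/SQL variables or the `USING` clause with EXECUTE IMMEDIATE, rather than concatenating values into the SQL string.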
Text
Hybrid Cloud Market Emerging Technologies and Industry Growth by Forecast to 2023
The report on the global Hybrid Cloud market covers historical market trends, current market dynamics, market valuation by segmentation as well as region, country-level analysis for every segment, key players' market share analysis, competitive landscape and supply chain analysis.
Market Highlights:
A hybrid cloud is a computing environment composed of a public and a private cloud. It works by allowing data and applications to be shared between them. In general, a hybrid cloud operates at the application level and the infrastructure level. At the infrastructure layer, a hybrid cloud is formed by combining virtual machines from different cloud services. At the application layer, a hybrid cloud is formed with components of existing applications or different SaaS offerings within the data center of an enterprise.

Hybrid cloud enables businesses to scale their existing on-premises infrastructure, which helps in handling excess data. For basic and non-sensitive computing tasks, enterprises gain the agility of the public cloud, while keeping crucial data and applications on-premises, safely secured behind a company firewall. Hybrid cloud is highly valuable in variable-workload environments with dynamic workloads, big data processing, and temporary processing. In May 2018, Red Hat, the world's leading provider of open source solutions, partnered with Microsoft and IBM to strengthen their multi-cloud portfolio. The joint platform will allow enterprise developers to run container-based applications across Microsoft Azure and on-premises.
Get a Free Sample report @ https://www.marketresearchfuture.com/sample_request/1018
Major Key Players
Akamai Technologies Inc. (U.S.),
Amazon Web Services LLC (U.S.),
CA Technologies (U.S.),
Dell Inc. (U.S.),
Flexiant Ltd. (U.K.),
Google Inc. (U.S.),
Hewlett-Packard Enterprise (U.S.),
IBM Corporation (U.S.),
Equinix Inc. (U.S.)
Microsoft Corporation (U.S.)
Some of the key innovators are Vmware, Inc.(U.S.), Citrix Systems, Inc. (U.S.), Equinix, Inc. (U.S.), Oracle Corporation (U.S.), Rackspace Hosting, Inc.(U.S.), Verizon Communications, Inc (U.S.), Panzura (U.S.), Terago Networks, Inc. (Canada), Fujitsu Ltd. (Japan), NTT Communications (Japan), NEC Corporation (Japan), Salesforce.Com (U.S.), Century Link Inc., (U.S.), Turbonomic (U.S.), and many others.
According to MRFR, the global Hybrid Cloud Market is estimated to reach USD 140 billion by 2023, growing at a CAGR of 24% over the forecast period.
Industry News
According to a new global study by Nutanix, an enterprise cloud computing company, India is set to lead the world in hybrid cloud adoption and usage.
Microsoft Corporation announced that it has acquired Avere Systems, a start-up specializing in data storage. The technology could ultimately help Microsoft pick up more Azure revenue by serving customers who want to use cloud computing while still keeping data in their own facilities. Microsoft has taken more of a hybrid approach than Amazon or Google in the cloud business. It offers the Azure Stack private cloud software that mirrors what's available from the Azure public cloud. And Microsoft continues to sell server and database software for corporate data centres.
Segmentation
The global hybrid cloud market is segmented on the basis of solution, service model, service, organization size, vertical, and region. On the basis of solution, the market is further classified into cloud management, disaster recovery, hybrid hosting, and security & compliance. On the basis of service model, the market is further classified into SaaS (Software as a Service), IaaS (Infrastructure as a Service), and PaaS (Platform as a Service). On the basis of service, the market is classified into managed and professional services. On the basis of organization size, the market is further classified into SMEs (small and medium-sized enterprises) and large enterprises. On the basis of vertical, the market is further segmented into BFSI, healthcare, retail, manufacturing, media & entertainment, and others.
Regional Analysis:
The global hybrid cloud market is analyzed across North America, Europe, Asia Pacific, and the rest of the world. North America holds the major market share through the forecast period. The presence of major players in the region and rising demand for scalable and cost-effective computing are driving the market in this region. Asia Pacific is estimated to show a high growth rate during the forecast period. The rise in usage of cloud-based services and the increase in deployment of data centers are primarily driving the market in this region. Considering the rise in usage of analytics, Europe shows decent growth during the forecast period.
Browse Complete Report @ https://www.marketresearchfuture.com/reports/hybrid-cloud-market-1018
Intended Audience
Cloud vendors
Application providers
System integrators
Consultancy firms
Service providers
Data integration service providers
Managed services providers
LIST OF TABLES
Table 1 Global Hybrid Cloud Market: By Region, 2017-2023
Table 2 North America Hybrid Cloud Market: By Country, 2017-2023
Table 3 Europe Hybrid Cloud Market: By Country, 2017-2023
Table 4 Asia Pacific Hybrid Cloud Market: By Country, 2017-2023
Table 5 The Middle East & Africa Hybrid Cloud Market: By Country, 2017-2023
Continued….
LIST OF FIGURES
FIGURE 1 Global Hybrid Cloud Market segmentation
FIGURE 2 Forecast Methodology
FIGURE 3 Porter’s Five Forces Analysis of Global Hybrid Cloud Market
FIGURE 4 Value Chain of Global Hybrid Cloud Market
FIGURE 5 Share of Global Hybrid Cloud Market in 2017, by country (in %)
FIGURE 6 Global Hybrid Cloud Market, 2017-2023,
Continued….
About Market Research Future:
At Market Research Future (MRFR), we enable our customers to unravel the complexity of various industries through our Cooked Research Report (CRR), Half-Cooked Research Reports (HCRR), Raw Research Reports (3R), Continuous-Feed Research (CFR), and Market Research & Consulting Services.
The MRFR team's supreme objective is to provide optimum-quality market research and intelligence services to our clients. Our market research studies, covering products, services, technologies, applications, end users, and market players for global, regional, and country-level market segments, enable our clients to see more, know more, and do more, which helps to answer all their most important questions.
Contact:
Market Research Future
+1 646 845 9312
Email: [email protected]
Temporary Table in Oracle
A temporary table is a table whose data exists only for the duration of a session or a transaction, as declared when the table is created. The data in a temporary table is private to the session that inserted it.
You can create a temporary table as either:
1. Global temporary table
2. Private temporary table
Differences between global and private temporary tables:
1. A global temporary table is named like a permanent table, whereas a private temporary table name must begin with the prefix "ORA$PTT_".
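A minimal sketch of both forms (private temporary tables require Oracle 18c or later; table and column names here are illustrative):

```sql
-- Global temporary table: the definition is permanent and visible to all
-- sessions, but each session sees only its own rows.
-- ON COMMIT PRESERVE ROWS keeps rows for the whole session;
-- ON COMMIT DELETE ROWS would clear them at each commit.
CREATE GLOBAL TEMPORARY TABLE gtt_orders (
  order_id   NUMBER,
  order_note VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Private temporary table: the definition itself is private to the session
-- and the name must carry the ORA$PTT_ prefix.
CREATE PRIVATE TEMPORARY TABLE ora$ptt_orders (
  order_id   NUMBER,
  order_note VARCHAR2(100)
) ON COMMIT DROP DEFINITION;
```

With ON COMMIT DROP DEFINITION, the private temporary table disappears entirely at the end of the transaction; ON COMMIT PRESERVE DEFINITION would keep it for the rest of the session.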
300+ TOP SQL Server Interview Questions and Answers
SQL Server Interview Questions for Freshers and Experienced:
1. What are the two authentication modes in SQL Server?
There are two authentication modes:
- Windows Mode
- Mixed Mode
The mode can be changed from the Security page of the SQL Server configuration properties.

2. What is SQL Profiler?
SQL Profiler is a tool which allows a system administrator to monitor events in SQL Server. It is mainly used to capture and save data about each event of a file or a table for analysis.

3. What is a recursive stored procedure?
SQL Server supports recursive stored procedures, i.e. procedures which call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at repetitively. Nesting is limited to 32 levels.

CREATE PROCEDURE Fact
( @Number Integer,
  @RetVal Integer OUTPUT )
AS
DECLARE @In Integer
DECLARE @Out Integer
IF @Number != 1
BEGIN
    SELECT @In = @Number - 1
    EXEC Fact @In, @Out OUTPUT -- Same stored procedure has been called again (recursively)
    SELECT @RetVal = @Number * @Out
END
ELSE
BEGIN
    SELECT @RetVal = 1
END
RETURN
GO

4. What are the differences between local and global temporary tables?
Local temporary tables are visible only within the connection that created them, and are deleted when that connection is closed:
CREATE TABLE #<table_name>
Global temporary tables are visible to all users, and are deleted when the connection that created them is closed:
CREATE TABLE ##<table_name>

5. What is a CHECK constraint?
A CHECK constraint can be applied to a column in a table to limit the values that can be placed in that column. Check constraints enforce domain integrity.

6. Can SQL Server be linked to other servers?
SQL Server can be connected to any database which has an OLE DB provider to give a link. Example: Oracle has an OLE DB provider which can link to the SQL Server group.

7. What is a sub-query and what are its properties?
A sub-query is a query which can be nested inside a main query such as a Select, Update, Insert or Delete statement. It can be used wherever an expression is allowed.
Properties of a sub-query:
- A sub-query should not have an ORDER BY clause
- A sub-query should be placed on the right-hand side of the comparison operator of the main query
- A sub-query should be enclosed in parentheses, because it needs to be executed before the main query
- More than one sub-query can be included

8. What are the types of sub-query?
There are three types of sub-query:
- A single-row sub-query, which returns only one row
- A multiple-row sub-query, which returns multiple rows
- A multiple-column sub-query, which returns multiple columns to the main query; with that sub-query result, the main query is executed

9. What is SQL Server Agent?
SQL Server Agent plays a vital role in the day-to-day tasks of a SQL Server administrator (DBA). Its purpose is to implement tasks easily with the scheduler engine, which allows jobs to run at a scheduled date and time.

10. What are scheduled tasks in SQL Server?
Scheduled tasks or jobs are used to automate processes that run at a regular interval on a scheduled time. Scheduling tasks helps to reduce human intervention, for example during night time, when a feed can be run at a particular time. The user can also order the sequence in which the tasks are run.
11. What is COALESCE in SQL Server?
COALESCE is used to return the first non-null expression within its arguments. This function can be used to return a non-null value from more than one column in the arguments. Example:
Select COALESCE(empno, empname, salary) from employee;

12. How can exceptions be handled in SQL Server programming?
Exceptions are handled using TRY...CATCH constructs: the scripts are written inside the TRY block and the error handling inside the CATCH block.

13. What is the purpose of the FLOOR function?
The FLOOR function rounds a non-integer value down to the nearest lower integer. Example: FLOOR(6.7) returns 6.

14. Can we check locks in a database? If so, how can we do this lock check?
Yes, we can check locks in the database. It can be achieved by using the built-in stored procedure sp_lock.

15. What is the use of the SIGN function?
The SIGN function is used to determine whether a specified number is positive, negative or zero. It returns +1, -1 or 0. Example: SIGN(-35) returns -1.

16. What is a trigger?
Triggers are used to execute a batch of SQL code when insert, update or delete commands are executed against a table. Triggers are automatically executed when the data is modified, on insert, delete and update operations.

17. What are the types of triggers?
There are four types of triggers:
- Insert
- Delete
- Update
- Instead Of

18. What is an IDENTITY column in insert statements?
An IDENTITY column is used in table columns to make that column an auto-incrementing number or a surrogate key.

19. What is Bulkcopy in SQL?
Bulkcopy is a tool used to copy large amounts of data from tables. This tool is used to load large amounts of data into SQL Server.

20. What query can be used to get the list of triggers in a database?
Query to get the list of triggers in a database:
Select * from sys.objects where type='tr'

21. What is the difference between UNION and UNION ALL?
UNION: The UNION command is used to select related information from two tables. It is similar to the JOIN command, and it removes duplicate rows.
UNION ALL: The UNION ALL command is equal to the UNION command, except that UNION ALL selects all values. It will not remove duplicate rows; instead, it will retrieve all rows from all tables.

22. How are global temporary tables represented, and what is their scope?
Global temporary tables are represented with ## before the table name. Their scope extends outside the session, whereas local temporary tables exist only inside the session. The session ID can be found using @@SPID.

23. What are the differences between a stored procedure and dynamic SQL?
A stored procedure is a set of statements which is stored in compiled form. Dynamic SQL is a set of statements constructed dynamically at runtime; it is not stored in the database and simply executes at run time.

24. What is collation?
Collation is defined to specify the sort order in a table. There are three types of sort order:
- Case sensitive
- Case insensitive
- Binary

25. How can we get a count of the number of records in a table?
The following queries can be used to get the count of records in a table:
Select * from <table_name>
Select count(*) from <table_name>
Select rows from sysindexes where id = OBJECT_ID('<table_name>') and indid < 2
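A short sketch of the temp-table behaviour from questions 4 and 22, runnable in any scratch database (table and column names are illustrative):

```sql
-- Local temporary table (#): visible only to the session that created it.
CREATE TABLE #local_scores (id INT, score INT);
INSERT INTO #local_scores VALUES (1, 90);

-- Global temporary table (##): visible to every session until the creating
-- connection closes (and no other session is still using the table).
CREATE TABLE ##global_scores (id INT, score INT);
INSERT INTO ##global_scores VALUES (1, 90);

-- Current session ID, useful when reasoning about temp-table scope.
SELECT @@SPID AS session_id;

DROP TABLE #local_scores;
DROP TABLE ##global_scores;
```

Another session could query ##global_scores while the creating connection stays open, but #local_scores would be invisible to it.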
MySQL InnoDB Cluster Disaster Recovery contingency via a Group Replication Slave
Just recently, I have been asked to look into what a Disaster Recovery site for InnoDB Cluster would look like. If you're reading this, then I assume you're familiar with what MySQL InnoDB Cluster is, how it is configured, its components, etc. Reminder: InnoDB Cluster (Group Replication, Shell & Router) in version 8.0 has had serious improvements from 5.7. Please try it out. So, given that, and given that we want to consider how best to fulfill the need, i.e. create a DR site for our InnoDB Cluster, let's get started. Basically I'll be looking at the following scenario: an InnoDB Cluster master site with a Group Replication Disaster Recovery site. Now, just before we get into the nitty-gritty, here's the scope. Life is already hard enough, so we want as much automated as possible. Yes, InnoDB Cluster gets some of that done, but there are other parts we will still have to assume control over, so here's the idea:

Master Site:
- InnoDB Cluster x3
- Router
- Accesses Master (Dynamic) & DR (Static)
- Full Automation

DR Site:
- Group Replication x2
- Router
- Accesses Master (Dynamic) & DR (Static)
- Configured Asynchronous Replication from the Primary node

External / LBR 3rd Site:
- MySQL Router (static) routing connections to the Master Router
- Allows for 2-instance automatic failovers at the Master site
- Manually reroute connections to the DR site once a Master site outage is confirmed, or use a different Router port
- Redundancy recommended to reduce SPOF

Let's get it all started then. First things first, if you're not quite sure what you're doing, then I highly suggest looking at the following:
https://thesubtlepath.com/mysql/innodb-cluster-managing-async-integration/
https://scriptingmysql.wordpress.com/2019/03/29/replicating-data-between-two-mysql-group-replication-sets-using-regular-asynchronous-replication-with-global-transaction-identifiers-gtids
(And thanks to Andrew & Tony for their continuous help!)

Let's set up our Master Site: 3x CentOS 7 servers, version 8.0.20: Server, Shell & Router.
:: All servers have had their SELinux, firewalls, ports, /etc/hosts and so on checked and validated, haven't they? We will want to be using a private IP between all hosts, for obvious reasons.

I downloaded all the MySQL Enterprise rpm's from https://edelivery.oracle.com and ran the following on all 3 servers (getting mystic here: "centos01", "centos02" & "centos03"):

sudo yum install -y mysql-*8.0.20*rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p

Once we're in, we want to control what commands get into the binlogs, and the 'alter user' and 'create user' will cause us issues later on, hence "sql_log_bin=OFF" here:

SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;

This "ic@%" user will be used throughout, as we just don't know when and which instance will end up being the master primary and/or a clone, so by using this single user setup, I keep things easier for myself. For your production setup, please look deeper into specific privs 'n' perms.

mysqlsh --uri root@localhost:3306
dba.checkInstanceConfiguration('ic@centos01:3306')
dba.configureInstance('ic@centos01:3306');

Say "Y" to all the restarts and changes. On just one of the servers (this will be our Single Primary instance):

connect ic@centos01:3306
cluster=dba.createCluster("mycluster")
cluster.status()
cluster.addInstance("ic@centos02:3306")
cluster.addInstance("ic@centos03:3306")
cluster.status();

Now, you will see that, although we have installed the binaries and started up the instances on centos02 & centos03, "addInstance" goes and clones our primary. This makes life soooo much easier. HINT: Before cloning, if you need to provision data, do it prior and we kill a couple of proverbial birdies here.
Now we have a 3 instance Single Primary InnoDB Cluster.

Setting up the local "InnoDB Cluster aware" Router:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
./myrouter/start.sh

Now Router is up and running.

Let's set up our Group Replication Disaster Recovery site now. Here, I'm doing a naughty, and just setting up a 2 node Group Replication group. If you want to see how you should be doing it, then let me reference Tony again:
https://scriptingmysql.wordpress.com/2019/03/28/mysql-8-0-group-replication-three-server-installation/

:: Again, ports, firewalls, SELinux, /etc/hosts and similar have been validated again? Please?

The environment: 2x Oracle Linux 7 servers, version 8.0.20: MySQL Server, Router & Shell.

Do the following on both slave servers (olslave01 & olslave02):

sudo yum install -y mysql-*8.0.20*rpm
sudo systemctl start mysqld.service
sudo systemctl enable mysqld.service
sudo grep 'A temporary password is generated for root@localhost' /var/log/mysqld.log | tail -1
mysql -uroot -p

SET sql_log_bin = OFF;
alter user 'root'@'localhost' identified by 'passwd';
create user 'ic'@'%' identified by 'passwd';
grant all on *.* to 'ic'@'%' with grant option;
flush privileges;
SET sql_log_bin = ON;

Now that we have basic servers created, we need to clone the data from the Primary instance in our InnoDB Cluster. This means installing the mysql_clone plugin.

On the cluster Primary node:

SET sql_log_bin = OFF;
INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
GRANT BACKUP_ADMIN ON *.* to 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* to 'ic'@'%';
SET sql_log_bin = ON;

Now on each slave:

INSTALL PLUGIN CLONE SONAME "mysql_clone.so";
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL clone_valid_donor_list = 'centos02:3306';
GRANT CLONE_ADMIN ON *.* to 'ic'@'%';
# Keep these, if we ever want to clone from any of the slaves:
GRANT BACKUP_ADMIN ON *.* to 'ic'@'%';
GRANT SELECT ON performance_schema.* TO 'ic'@'%';
GRANT EXECUTE ON *.* to 'ic'@'%';

Now to execute the clone operation:

set global log_error_verbosity=3;
CLONE INSTANCE FROM 'ic'@'centos02':3306 IDENTIFIED BY 'passwd';

View how the clone process is going:

select STATE, ERROR_NO, BINLOG_FILE, BINLOG_POSITION, GTID_EXECUTED,
CAST(BEGIN_TIME AS DATETIME) as "START TIME",
CAST(END_TIME AS DATETIME) as "FINISH TIME",
sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))) as DURATION
from performance_schema.clone_status \G

As we're cloning, we might run into the duplicate UUID issue, so, on both of the slaves, force the server to have a new UUID:

rm /var/lib/mysql/auto.cnf
systemctl restart mysqld

Setting up the Group Replication config. On node 1:

vi /etc/my.cnf

# GR setup
server-id                        =11
log-bin                          =mysql-bin
gtid-mode                        =ON
enforce-gtid-consistency         =TRUE
log_slave_updates                =ON
binlog_checksum                  =NONE
master_info_repository           =TABLE
relay_log_info_repository        =TABLE
transaction_write_set_extraction =XXHASH64
plugin_load_add                  ="group_replication.so"
group_replication                = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot = ON
group_replication_group_name     = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address  = '10.0.0.41:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds    = '10.0.0.41:33061,10.0.0.42:33061'

On the 2nd node:

server-id                        =22
log-bin                          =mysql-bin
gtid-mode                        =ON
enforce-gtid-consistency         =TRUE
log_slave_updates                =ON
binlog_checksum                  =NONE
master_info_repository           =TABLE
relay_log_info_repository        =TABLE
transaction_write_set_extraction =XXHASH64
plugin_load_add                  ="group_replication.so"
group_replication                = FORCE_PLUS_PERMANENT
group_replication_bootstrap_group = OFF
#group_replication_start_on_boot = ON
group_replication_group_name     = 8E2F4761-C55C-422F-8684-D086F6A1DB0E
group_replication_local_address  = '10.0.0.42:33061'
# Adjust the following according to IP's and numbers of hosts in group:
group_replication_group_seeds    = '10.0.0.41:33061,10.0.0.42:33061'

Restart both servers:

systemctl restart mysqld

Check they're in a GR group and the plugin is ok:

mysql -uroot
SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_members\G
select * from information_schema.plugins where PLUGIN_NAME = 'group_replication'\G

Now to create the recovery replication channel on all servers (although this is a single primary setup, the primary could fail and then come back as a read-only slave, so we need to set this up everywhere):

CHANGE MASTER TO MASTER_USER='ic', MASTER_PASSWORD='passwd' FOR CHANNEL 'group_replication_recovery';

On Server 1:

SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

On Server 2:

START GROUP_REPLICATION;

Check each server's super_read_only mode and whether they're in a group:

select @@super_read_only;
SELECT * FROM performance_schema.replication_group_members;

Set up Router on one of the slaves, bootstrapping against the InnoDB Cluster master:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
mysqlrouter --bootstrap ic@centos02:3306 -d /opt/mysql/myrouter -u mysql
cd /opt/mysql/myrouter
./start.sh

Check connectivity:

mysql -uic -P6446 -h olslave01 -N -e "select @@hostname, @@port"
mysql -uic -P6446 -h centos02 -N -e "select @@hostname, @@port"

Replication config for slaves

Setting up replication for the slave from the primary means that we MUST be prepared to script the following for when the primary fails in the Group Replication setup, or keep the following command at hand, as it isn't automatic, so we'll need to control this ourselves. Also, albeit a Group Replication group, when the primary fails on the DR site, we will also need to check all connections and sessions to the GR DR site to make sure we can set this up safely.
So, from the PRIMARY instance only, execute the following:

CHANGE MASTER TO
MASTER_HOST = 'olslave01',
MASTER_PORT = 6446,
MASTER_USER = 'ic',
MASTER_PASSWORD = 'passwd',
MASTER_AUTO_POSITION = 1
FOR CHANNEL 'idc_gr_replication';

As you can see, this replicates via the GR DR site Router that is bootstrapped against the InnoDB Cluster. Hence, when the primary on the cluster moves, Router will take us to the new primary without us having to re-create replication or similar.

Now let's start it all up:

start slave;

And have a look at our InnoDB Cluster master site replicating data to our DR Group Replication site:

show slave status for channel 'idc_gr_replication'\G

Application Connectivity

Now that we have an HA solution for our data, what about connecting to one site and the other? We have 2 routers, each local to its data-saving site, fine. But, if we're being realistic, we have 1 data entry point, the InnoDB Cluster Primary instance on centos01, and then 4 copies of the data, syncing nicely over a (dedicated? please?) local network to ensure we're as safe as possible. As you'll probably have noticed, upon bootstrapping Router on both sides, it chooses routing_strategy=first_available on the default port 6446, which is fine. And yes, this port can be changed if we want to, so feel free to adjust as required. But, depending on where the application is, and the user entry point, you'll have a VIP, floating IP or something similar helping to load balance, maybe. But here is where we start to ask more questions: what happens when the Master site goes down?

Well, there are a number of things to keep in mind. The local Routers will take care of instance failures, and here I feel obliged to forewarn you about what happens when we lose 2 out of 3 instances on the master site. Here, the last instance could be configured with group_replication_exit_state_action='OFFLINE_MODE' or ABORT, for example. That way, it doesn't get used as a valid node via Router.
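As a sketch, the last-man-standing behaviour mentioned above can be set dynamically on each member of the master-site cluster (MySQL 8.0; the OFFLINE_MODE value requires 8.0.18 or later):

```sql
-- If this node is expelled from the group, switch to offline mode instead
-- of remaining reachable, so Router stops routing traffic to it.
SET PERSIST group_replication_exit_state_action = 'OFFLINE_MODE';

-- Confirm the setting took effect.
SELECT @@group_replication_exit_state_action;
```

SET PERSIST writes the value to mysqld-auto.cnf as well, so it survives a restart without editing my.cnf.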
Now, once that has happened, i.e. the master site goes down, we need to manually intervene, make sure that everything is really down, and kick the DR site into effect. Both Routers on the Master & DR sites that were bootstrapped to the InnoDB Cluster will now fail, of course. We are left with the Group Replication DR site. So what do we do here? Well, create an additional Router, configured with a static (non-dynamic) setup, as we don't have the metadata that the IdC-aware Router had via the mysql_innodb_cluster_metadata schema, which gets stored in "dynamic_state=/opt/mysql/myrouter/data/state.json".

What I did was create a new Router process on one of the other servers (instead of centos02 & olslave01, on centos01 & olslave02). Now this isn't something you'd do in Production, as we want Router close to the app, not the MySQL instances. So in effect you could just use a new path, e.g.:

mkdir -p /opt/mysql/myrouter7001
chown -R mysql:mysql /opt/mysql/myrouter7001
cd /opt/mysql/myrouter7001
vi mysqlrouter.conf

[routing:DR_rw]
bind_address=0.0.0.0
bind_port=7001
destinations=olslave01:3306,olslave02:3306
routing_strategy=first-available
protocol=classic

Once configured, start them up:

mysqlrouter --user=mysql --config /opt/mysql/myrouter7001/mysqlrouter.conf &

Test:

mysql -uic -P7001 -h olslave02 -N -e "select @@hostname, @@port"
mysql -uic -P7001 -h centos01 -N -e "select @@hostname, @@port"

This will give us back olslave01, then, when that fails, olslave02. When the InnoDB Cluster-bootstrapped Router that listens on 6446 fails, we know to redirect all requests to port 7001 on our production IP's (in my case, olslave02 & centos01).

Summing up, we've now got 4 routers running: 2 are InnoDB Cluster aware, listening on 6446, and the static GR DR Routers are listening on 7001. That's a lot of ports and a long connect string. Maybe we could make this simpler? Ok, well, I added yet another Router process, high level.
This Router process, again, has a hard-coded static configuration, because we don't have InnoDB Cluster-aware Routers on both sides, which would mean having a mysql_innodb_cluster_metadata schema on both sides, fully aware of both InnoDB Clusters, and also fully aware that metadata changes on the master are NOT to be replicated across. Who knows.. maybe in the future…

So in my case I created another Router process configuration on centos03:

mkdir -p /opt/mysql/myrouter
chown -R mysql:mysql /opt/mysql/myrouter
cd /opt/mysql/myrouter

(You might want to use something different, but I did this. I could have all of these on the same server, using different paths. There isn't any limit to how many Router processes we have running on the same server; it's more a "port availability", or should I say "network security", concern.)

vi mysqlrouter.conf:

[routing:Master_rw]
bind_address=0.0.0.0
bind_port=8008
destinations=centos02:6446,olslave01:6446
routing_strategy=first-available
protocol=classic

[routing:DR_rw]
bind_address=0.0.0.0
bind_port=8009
destinations=centos01:7001,olslave02:7001
routing_strategy=first-available
protocol=classic

Just in case I've lost you a little, what I'm doing here is providing a Router access point via centos03:8008 to the InnoDB-Cluster-aware Routers on centos02:6446 and olslave01:6446, and then an additional / backup access point via centos03:8009 to the static Routers on centos01:7001 and olslave02:7001. The app would then have a connect string similar to "centos03:8008,centos03:8009". Now, I know to send Master site connections to centos03:8008 always. And when I get errors and notifications from monitoring that this is giving problems, I can use my backup connection to centos03:8009.

Side note: I originally set up a single Router process on each side, with a mix of an InnoDB Cluster-aware entry and an additional static configuration.
This didn’t work well as really, having bootstrapped against the IdC, the dynamic_state entry only allows for the members taken from the mysql_innodb_cluster_metadata schema. So it won’t work. Hence, the additional router processes. But if you think about it, making each Router more modular and dedicated, it makes for a more atomic solution. Albeit a little more to administer. However, maybe instead of this high-level Router process listening on 8008 & 8009, you want to reuse your keepalived solution. Have a look at Lefred’s “MySQL Router HA with Keepalived” solution for that. And there you have it! This was all done on Oracle Cloud Infrastructure compute instances, opening ports internally within the Security List and not much else. https://mysqlmed.wordpress.com/2020/06/11/mysql-innodb-cluster-disaster-recovery-contingency-via-a-group-replication-slave/
Hybrid Cloud Market 2019 Trends, Size, Share, Segment and Industry Growth by Forecast to 2023
The report on the global Hybrid Cloud market covers historical market trends, current market dynamics, market valuation by segmentation as well as region, country-level analysis for every segment, key player's market share analysis, competitive landscape and supply chain analysis.
Market Highlights:
A hybrid cloud is a computing environment composed of a public and a private cloud, which allows data and applications to be shared between them. In general, a hybrid cloud works at the application level and the infrastructure level. At the infrastructure layer, a hybrid cloud is formed by combining virtual machines from different cloud services. At the application layer, a hybrid cloud is formed with components in existing applications or different SaaS offerings within the data center of an enterprise.
Hybrid cloud benefits businesses by letting them scale their existing on-premises infrastructure, which helps in handling excess data. For basic and non-sensitive computing tasks, enterprises gain the agility of the public cloud, while keeping crucial data and applications on-premises, safely secured by a company firewall. Hybrid cloud is highly valuable in environments with dynamic workloads, big data processing, and temporary processing needs. In May 2018, Red Hat, the world's leading provider of open source solutions, partnered with Microsoft and IBM to strengthen their multi-cloud portfolio. The joint platform will allow enterprise developers to run container-based applications across Microsoft Azure and on-premises.
Major Key Players
Akamai Technologies Inc. (U.S.),
Amazon Web Services LLC (U.S.),
CA Technologies (U.S.),
Dell Inc. (U.S.),
Flexiant Ltd. (U.K.),
Google Inc. (U.S.),
Hewlett-Packard Enterprise (U.S.),
IBM Corporation (U.S.),
Equinix Inc. (U.S.)
Microsoft Corporation (U.S.)
Some of the key innovators are Vmware, Inc.(U.S.), Citrix Systems, Inc. (U.S.), Equinix, Inc. (U.S.), Oracle Corporation (U.S.), Rackspace Hosting, Inc.(U.S.), Verizon Communications, Inc (U.S.), Panzura (U.S.), Terago Networks, Inc. (Canada), Fujitsu Ltd. (Japan), NTT Communications (Japan), NEC Corporation (Japan), Salesforce.Com (U.S.), Century Link Inc., (U.S.), Turbonomic (U.S.), and many others.
According to MRFR, The global Hybrid Cloud Market is estimated to reach USD140 billion at CAGR 24% through the forecast period 2023
Industry News
According to a new study by Nutanix, a new global study by enterprise cloud computing company, India is all set to lead the world in hybrid cloud adoption and usage.
Microsoft Corporation announced that it has acquired Avere Systems, a start-up specializing in data storage. The technology could ultimately help Microsoft pick up more Azure revenue by serving customers who want to use cloud computing while still keeping data in their own facilities. Microsoft has taken more of a hybrid approach than Amazon or Google in the cloud business. It offers the Azure Stack private cloud software that mirrors what's available from the Azure public cloud. And Microsoft continues to sell server and database software for corporate data centres.
Segmentation
The global hybrid cloud market is segmented on the basis of solution, service model, service, organization size, vertical, and region. On the basis of solution, the segment is further classified into cloud management, disaster recovery, hybrid hosting, and security & compliance. On the basis of service model, the market is further classified into SaaS (Software as a Service), IaaS (Information as a Service) and PaaS (Platform as a Service). On the basis of service, the segment is classified into managed and professional services. On the basis of organization size, the segment is further classified into (Small and medium-sized enterprises) SMEs and large enterprises. On the basis of vertical, the market is further segmented into BFSI, healthcare, retail, manufacturing, media & entertainment and others.
Regional Analysis:
The global hybrid cloud market is observed for North America, Europe, Asia Pacific and rest of the world. North America holds major market share through the forecast period. The presence of major players in the region and rising demand for scalable and cost-effective computing is driving the market in this region. Asia Pacific is estimated to show high growth rate during the forecast period. The rise in usage of cloud-based services and increase in deployment of data centers are primarily driving the market in this region. Considering the rise in usage of analytics, Europe shows decent growth during the forecast period.
Browse Complete Report @ https://www.marketresearchfuture.com/reports/hybrid-cloud-market-1018
Intended Audience
Cloud vendors
Application providers
System integrators
Consultancy firms
Service providers
Data integration service providers
Managed services providers
LIST OF TABLES
Table 1 Global Hybrid Cloud Market: By Region, 2017-2023
Table 2 North America Hybrid Cloud Market: By Country, 2017-2023
Table 3 Europe Hybrid Cloud Market: By Country, 2017-2023
Table 4 Asia Pacific Hybrid Cloud Market: By Country, 2017-2023
Table 5 The Middle East & Africa Hybrid Cloud Market: By Country, 2017-2023
Continued….
LIST OF FIGURES
FIGURE 1 Global Hybrid Cloud Market segmentation
FIGURE 2 Forecast Methodology
FIGURE 3 Porter’s Five Forces Analysis of Global Hybrid Cloud Market
FIGURE 4 Value Chain of Global Hybrid Cloud Market
FIGURE 5 Share of Global Hybrid Cloud Market in 2017, by country (in %)
FIGURE 6 Global Hybrid Cloud Market, 2017-2023,
Continued….
About Market Research Future:
At Market Research Future (MRFR), we enable our customers to unravel the complexity of various industries through our Cooked Research Report (CRR), Half-Cooked Research Reports (HCRR), Raw Research Reports (3R), Continuous-Feed Research (CFR), and Market Research & Consulting Services.
The MRFR team's supreme objective is to provide optimum-quality market research and intelligence services to our clients. Our market research studies, by products, services, technologies, applications, end users, and market players for global, regional, and country-level market segments, enable our clients to see more, know more, and do more, which helps to answer all their most important questions.
Contact:
Market Research Future
+1 646 845 9312
Email: [email protected]
Hybrid Cloud Market Global Development Industry Trends and Future Outlook to 2023
Market Highlights:
A hybrid cloud is a computing environment composed of a public and a private cloud. It works by allowing data and applications to be shared between the two. In general, a hybrid cloud operates mainly at the application level and the infrastructure level. At the infrastructure layer, a hybrid cloud is formed by combining virtual machines from different cloud services. At the application layer, a hybrid cloud is formed with components from existing applications or different SaaS offerings within an enterprise's data center.
Hybrid cloud benefits businesses by letting them scale their existing on-premises infrastructure to handle excess data. For basic and non-sensitive computing tasks, enterprises gain the agility of the public cloud, while keeping crucial data and applications on-premises, safely secured behind a company firewall. Hybrid cloud is highly valuable in environments with dynamic workloads, big data processing, and temporary processing needs. In May 2018, Red Hat, the world's leading provider of open source solutions, partnered with Microsoft and IBM to strengthen its multi-cloud portfolio. The joint platform will allow enterprise developers to run container-based applications across Microsoft Azure and on-premises.
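The workload split described above (sensitive data stays on the private side, while non-sensitive or highly variable workloads burst to the public side) can be sketched as a simple placement policy. This is an illustrative sketch only; the `Workload` fields and the `place` rule are hypothetical assumptions, not any vendor's API.

```python
# Illustrative sketch of a hybrid cloud workload-placement policy.
# Assumption: two target environments, "private" (on-premises, behind the
# company firewall) and "public" (elastic, cost-effective capacity).

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive: bool   # e.g. regulated or crucial business data
    bursty: bool      # highly variable demand, e.g. big data batch jobs

def place(workload: Workload) -> str:
    """Return the target environment for a workload."""
    if workload.sensitive:
        return "private"  # keep crucial data behind the company firewall
    if workload.bursty:
        return "public"   # gain the elasticity of the public cloud
    return "public"       # default: scalable, cost-effective public capacity

jobs = [
    Workload("customer-db", sensitive=True, bursty=False),
    Workload("nightly-analytics", sensitive=False, bursty=True),
    Workload("marketing-site", sensitive=False, bursty=False),
]
placements = {w.name: place(w) for w in jobs}
print(placements)
# → {'customer-db': 'private', 'nightly-analytics': 'public', 'marketing-site': 'public'}
```

Real deployments replace this toy rule with policy engines and orchestration tooling, but the basic trade-off (compliance on-premises, elasticity in the public cloud) is the same one the report describes.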
Get a Sample report @ https://www.marketresearchfuture.com/sample_request/1018
Major Key Players
Akamai Technologies Inc. (U.S.),
Amazon Web Services LLC (U.S.),
CA Technologies (U.S.),
Dell Inc. (U.S.),
Flexiant Ltd. (U.K.),
Google Inc. (U.S.),
Hewlett-Packard Enterprise (U.S.),
IBM Corporation (U.S.),
Equinix Inc. (U.S.)
Microsoft Corporation (U.S.)
Some of the key innovators are VMware, Inc. (U.S.), Citrix Systems, Inc. (U.S.), Equinix, Inc. (U.S.), Oracle Corporation (U.S.), Rackspace Hosting, Inc. (U.S.), Verizon Communications, Inc. (U.S.), Panzura (U.S.), Terago Networks, Inc. (Canada), Fujitsu Ltd. (Japan), NTT Communications (Japan), NEC Corporation (Japan), Salesforce.com (U.S.), CenturyLink, Inc. (U.S.), Turbonomic (U.S.), and many others.
According to MRFR, the global hybrid cloud market is estimated to reach USD 140 billion by 2023, registering a CAGR of 24% over the forecast period.
Industry News
January 2018 - Microsoft Corporation announced that it has acquired Avere Systems, a start-up specializing in data storage. The technology could ultimately help Microsoft pick up more Azure revenue by serving customers who want to use cloud computing while still keeping data in their own facilities. Microsoft has taken more of a hybrid approach than Amazon or Google in the cloud business. It offers the Azure Stack private cloud software, which mirrors what's available from the Azure public cloud, and Microsoft continues to sell server and database software for corporate data centers.
January 2018 - The Irish cloud communications company Blueface is going to merge with the US-based Star2Star Communications to create a global competitor in the enterprise telephony market. The deal will see significant growth at the new company's Dublin base. The combined company will be called StarBlue and will have its headquarters in the US to reflect the larger size of Star2Star, which is based in Sarasota, Florida. Dublin will remain the EMEA headquarters for the enlarged group.
Segmentation
The hybrid cloud market can be segmented into six key dynamics for the convenience of the report and enhanced understanding:
Segmentation by cloud service type: Comprises Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), among others.
Segmentation by solution: Comprises cloud management and orchestration, disaster recovery, security and compliance, and hybrid hosting.
Segmentation by service: Comprises professional and managed services.
Segmentation by application: Comprises BFSI, education, telecommunications, government, and healthcare, among others.
Segmentation by organization size: Comprises SMEs and large enterprises.
Segmentation by region: Comprises the geographical regions North America, Europe, Asia Pacific, and the Rest of the World.
Browse Complete Report @ https://www.marketresearchfuture.com/reports/hybrid-cloud-market-1018
Regional Analysis:
North America is dominating the global hybrid cloud market with the largest market share and is expected to exceed its previous growth records by 2027. Owing to the availability of a large IT market and developed cloud data centers in the region, North America is anticipated to grow at a rapid rate. The APAC region is projected to present great opportunities in this market, growing at the highest CAGR during the forecast period. The primary reasons for the high growth rate in the APAC region are the increasing utilization of data centers and cloud services and the growing trend of large enterprises combining private and public operations for hybrid cloud computing.
For More Information, Browse Press Release @ https://www.marketresearchfuture.com/press-release/hybrid-cloud-market