Surgical Management of Multiple Nevus Lipomatosis Superficialis: A Case Report
Lupine Publishers
Abstract
Nevus lipomatosis superficialis is a rare skin lesion characterized by the ectopic dermal deposition of adipose tissue. While benign, it typically presents in the second to third decade of life and may cause regional discomfort. We present a case of multiple lesions requiring serial surgical management.
Presentation of Case: The patient was a 22-year-old male who presented with multiple, soft, warty lesions histologically consistent with nevus lipomatosis superficialis. He underwent excision of the largest mass and is undergoing multiple excisions for adjacent masses.
Discussion: Nevus lipomatosis superficialis is a rare connective tissue hamartoma of unclear etiology and multiple clinical subtypes. While management is typically surgical, efficacy of treatment options for multiple nearby lesions is not well characterized.
Conclusion: We review the literature for nevus lipomatosis superficialis and describe a case with multiple lesions of varying chronicity. Treatment must be chosen to optimize wound healing and cosmetic outcomes.
Keywords: Nevus lipomatosis superficialis; Multiple; Surgery; Skin; Excision
Introduction
Nevus lipomatosis superficialis (NLS) is a rare, benign hamartoma of the skin first described by Hoffmann and Zurhelle [1]. It is characterized by mature, ectopic adipose deposition in the dermis [2]. The classical form of NLS, as described in 1921, is found on the buttocks, trunk, and thighs and consists of multiple collections of soft, cerebriform, non-tender papules or nodules that are yellowish in color and may be pedunculated [3]. Solitary lesions have also been reported on the arms or scalp and are typically dome-shaped or sessile papules [4]. We report the surgical management of multiple pedunculated and sessile NLS lesions in a young, otherwise healthy male. Currently, there is no standardized surgical treatment for multiple nearby lesions.
Case Report
A 22-year-old white male presented to the outpatient General Surgery clinic with a 9-year history of a painless mass in the perianal region and left buttock. The mass had gradually increased in size and was not associated with any changes in the surrounding skin, nor with bleeding, discharge, or itching. The patient reported a second, mid-sized mass that had progressively enlarged over the past 4 years, and several smaller satellite masses had emerged in the past year. While the patient did have cosmetic concerns, the largest lesion was causing discomfort when sitting, and he requested that it be surgically removed. His past medical history was unremarkable. Laboratory testing was wholly unremarkable, and standard sexually transmitted infection screens were negative. He had not previously been vaccinated against human papillomavirus.

On physical examination, there was a 5.9 x 4 x 3.5 cm, fleshy, tan-pink polypoid lesion arising at the 4 o'clock position approximately 2 cm from the anal verge and extending 5 cm along the radial axis of the left buttock. This largest lesion had a relatively narrow, pedunculated base and multiple papulonodules with a warty surface. A mid-sized sessile lesion of similar appearance was present 4 cm lateral to the largest mass, and several smaller lesions were present further laterally. All lesions were soft to palpation and nontender, with no bleeding or discharge on manipulation. The differential diagnosis included anogenital warts and neurofibromatosis. The patient was offered surgical excision of the largest mass. A linear incision was made at the base of the mass, extending radially outward from 2 cm beyond the anal margin. Postoperative recovery was uncomplicated, and the two-week post-operative check showed an appropriately healing wound (Figure 1). Pathology revealed nevus lipomatosis superficialis; no atypia or malignancy was identified (Figure 2).
The patient was subsequently offered serial excisions for the remaining masses.
Figure 1: (A) Post-excision view of the largest (5.9 x 4 x 3.5 cm) polypoid lesion. (B) Sessile, cerebriform masses of varying chronicity, including a 4-year-old medium-sized mass. (C) Multiple 1-year-old smaller satellite masses.
Figure 2: Spotty involvement of dermal adipose deposition (arrow) creating an irregular zone of fat splaying apart dermal collagen.
Discussion
NLS is a rare connective tissue hamartoma with altered epithelial elements and no sex predilection or familial trend [5]. While its incidence has not been reported, descriptions of NLS are mostly limited to case reports. NLS may be present congenitally or develop later in life. There are two primary forms of NLS: the classical/multiple form and the solitary form. The lesions of the classical form are either congenital or fully formed by the second or third decade of life [6], though they have been reported to arise in the fifth decade as well [7]. Classical NLS is described as skin-colored papules that coalesce into plaques with a cluster-like or linear distribution. Lesions are asymptomatic, unilateral, slow-growing over many years, nontender, and may have a smooth or cerebriform surface [8]. Classical lesions most frequently distribute over the lower abdomen, pelvic girdle, thigh, and gluteal surface. The second NLS subtype is the solitary form (pedunculated lipofibroma [9]), which may appear on any body surface and tends to affect older patients in the third to sixth decade of life [10,11]. The largest study of solitary-form NLS is a retrospective case series of 13 cases in Tunisia [12]. A third, much less frequently reported form of NLS is described as "Michelin tire baby syndrome," an autosomal dominant deletion on chromosome 11 characterized by symmetric circumferential skin folds. These folds harbor underlying NLS and may affect the neck, legs, and arms; they are self-limited and resolve during childhood [13,14]. This is the first reported case of NLS with both sessile and pedunculated features and multiple lesions requiring serial surgery for optimal wound healing and cosmetic outcome. Histologically, NLS lesions are classically characterized by ectopic mature adipocytes in the reticular dermis surrounded by dermal collagen fibers. Fat cells may extend to the papillary layer as larger fat lobules blur the boundary between dermis and hypodermis.
Adipocytes may be either entirely mature or incompletely developed. The density of collagen, fibroblasts, and vasculature in the dermis is increased [15]. The epidermis can show acanthosis, elongation or obliteration of the rete ridges, hyperkeratosis, and basilar hyperpigmentation [16]. On electron microscopy, young adipocytes appear to be of perivascular pericyte origin [17].
The etiology of NLS remains unclear. Originally, Hoffmann and Zurhelle theorized that connective tissue degeneration incited dermal fat deposition [1]. In 1955, Holtz postulated that pericapillary lipoblasts differentiate into mononuclear cells that grow into preadipose tissue. Others argue that focal heterotopic growths of aberrant adipose tissue occur during embryonic development [18]. Thus far, no studies have described cytogenetic alterations in patients with classical or solitary NLS, though Cardot-Leccia et al. [19] report a case of NLS with a 2p24 deletion. As in our case, NLS is typically asymptomatic. Ulceration has been reported secondary to external trauma or ischemia [11], and café-au-lait macules, hypopigmented spots, leukodermic macules, and comedo-like changes may be present [15]. As such, the differential diagnosis for NLS includes condyloma acuminata, neurofibroma, lymphangioma, skin tag, Fordyce spots, granuloma annulare, and nevus sebaceous. Treatment for NLS is typically cosmetic, as malignant transformation is extremely rare [20]. Surgical excision, as in our patient, is the mainstay of treatment, and recurrence rates are low. Cryotherapy, ultrapulse CO2 laser ablation [21], topical fludroxycortide corticosteroids [7], and intralesional injections of phosphatidylcholine and sodium deoxycholate [22] have been reported as non-surgical options with positive clinical response. For optimal cosmetic results, patients with multiple, separated masses should be managed serially, with adequate time for healing and scar formation after each excision.
Conclusion
NLS is a rare, benign skin tumor characterized by aberrant adipocyte deposition. While NLS is typically asymptomatic, lesions can cause discomfort and require surgical management. We report the case of a 22-year-old with multiple large lesions requiring serial surgery for optimal wound healing and cosmetic outcomes.
Lupine Publishers | Where are the Real Trend Setters?
Lupine Publishers | Journal of  Fashion and Textile Engineering
Editorial
"Trend" is a word with a seemingly fleeting meaning, but is it? When a trend rises to the surface of society, it is more than what people need or desire. It is an expression of their culture, their social needs, and their creativity.
A Trend Is Indeed the Unfolding of a Hidden Myth, Ritual or Culture
When the trend of the corset arrived around 1910, it brought a new culture into society. It took the idea that pain is beauty to a whole new level. A whole new industry was built around it; not just that, but a culture, a lifestyle, and a cosmology were created. But what happens when a trend dies? It has a particular poignancy. I wonder at times: why don't all trends stay in society? Is their loss not a loss of diversity, of heritage, of a way of thinking?
When Does a Trend Become Heritage, and What Do We Do with That Trend as a Society?
Some 880 trends died this century, I read somewhere. There is a fatalism behind these numbers. Isn't each one a way of life lost? The danger is to the small industries that are set up to sustain a trend. Why isn't this seed taken forward? When we do trend analysis, what do we calculate? We search for the reasons behind a trend's evolution, its growth, and its fading out. But do we also take into consideration the loss of culture, myth, and storytelling when a trend dies? As a storyteller I have often wondered whether we would have had the story of the Ras leela of Lord Krishna if the trend of community bathing had not existed in ancient Indian culture. When that trend disappeared, so did the stories of the Ras leela (Figure 1). And we have to be aware that trends of fashion and print evolve from community or social trends, as we call them. For example, during the Jazz Age the cloche hat became increasingly popular because women loved the look of it paired with their Eton-cropped hair.
But how do we keep hold of those trends which are heritage and iconic? Their loss is like the death of a language: a culture and a context gone forever. Is keeping them in museums and encyclopaedias enough, or should we not use them as platforms for innovation, growth, and experimentation? Each and every trend is a sign that society is expressing itself and that its social fabric is democratic enough to allow individuality to flourish and expand. Beyond this, we need to create kaleidoscopic places where different trends are juxtaposed and one switches between them as one shifts contexts. Translating a trend onto a new interface is an important act of holding the myth of that trend together and giving it a life, so it can organically evolve into a new user experience while keeping the natural flow and essence of the trend alive.
We Need to Invent Cultural Citizenships as Part of Our Creative Imagination
There is a tribe in the hills of Kumaon, in India, which has its own tantric prints for bridal dresses. I clearly remember these prints being part of everyday ritual when I was a young design research student, but now people are turning them into heritage property, and those iconic prints are on the verge of extinction (Figure 2). A world that fails to be inventive about its own culture and stories will eventually be reduced to rudimentary, technical, structured living where fashion, beauty, and myth have no place. Another thing that has guaranteed the disappearance of the organic growth of trends is the structured curricula of the design institutes. It is very interesting to note that a group of design students might be seen wearing the same trends inspired by Vogue and Bollywood, whereas an illiterate worker, even with his meagre means, will often be dressed more distinctively and culturally, in many different ways. Why is it so?
Figure 2: Three Kumaon Aipans.
India and many other countries with more than 3,000 years of cultural history need to create a new social and creative consolidation to keep the pluralism of trends and self-expression alive. Otherwise, the trends that come and go will no longer be multidimensional and multicultural but centralised, belonging to a single context and lifestyle, and the world will lose its creative spectrum. We need to move toward a more polyglot world to sustain diversity of expression, lifestyle, and experience. I have also noted that the death of a trend and of its social culture often go together. In India, many textile design trends are dying out as their cultures and villages are engulfed by cities, television, and the internet. The intrusion of the corporate world is pushing these communities inward, and in that process each community is losing its expression, stopping its trends from evolving and coming to the surface of society. No corporation or government thinks that the loss of self-expression (of trend) by a group of people is something to be worried about.
People need to feel free to create their own trends and be able to nurture them. They need to feel that self-expression, in its many different forms, is relevant, possesses dignity, defines competence, and is an important means of survival and growth. A trend disappears as a new generation enters modernity, abandoning memory and older ways of life and living. So we, in our own ways of development and growth, need to keep this diversity alive and form a social and cultural fabric in which edgy people, the trend creators, can become more expressive and prolific. Come, let us become a trendy world in the real sense of culture, diversity, and history: a world where trends evolve from the real self-expression of people, cultures, and experiences.
Lupine Publishers | Mineral Supplements of Soils
Lupine Publishers | Earth and Environment Journals
Abstract
Improving soil quality results in increased yields and benefits for farmers. Research has been carried out on the use of selected mineral waste, collected in heaps, to improve the quality of poor soils. The natural soil supplements obtained in this way will not only improve soil quality and increase crop yields but will also allow the liquidation of some of the heaps. This will contribute to the reclamation of the landscape and the environment.
Introduction
Soil and agricultural crops have been objects of multidirectional interest since ancient times, as is documented in the scientific literature. Among the publications, a large part deals with soil as a source of nutrients for plants [1-5]. Another important field of research concerns the role of soil fauna in improving soil quality [6-18]. A further research complex is geochemical work on the relationship between chemical fertilizers and soil performance, with particular regard to the toxicity of additives, especially those containing radioactive elements [19-24]. Many publications address the relations between the fauna and flora of the soil and its mineral composition [25-28], as well as the influence of mineral factors on the development of individual plant elements [4,5,29]. There are also publications on the use of organic waste for soil fertilization [30-33], on the role of soil sun exposure in stabilizing its beneficial properties [34,35], and on the use of so-called antagonistic plants [36,37].
A separate section concerns the relations between soil and geology [30,38-40] and between mineralogy and the regulation of physical soil parameters, including water retention capacity [41]. Quite extensive literature concerns the use of natural minerals and rocks to improve soil quality and increase crop yields [18,38-50]. The present publication belongs to this line of research, bearing in mind the impact of the proposed procedure on possible environmental modifications [41,51-54].
Material and Methods of Research
The material for the research was taken from the profile of sediments found in the Devonian dolomite deposit exploited in the "Józefka" mine near Kielce (Figure 1). In addition, material from local heaps, and granulates made from materials stored in the heaps, were collected for testing. In order to determine the mineralogical and petrographic characteristics of the collected rocks, the following tests were carried out:
A. Microscopic examination in transmitted and reflected light, using a Chinese-made Menij microscope. The observed phenomena were documented with micrographs.
B. The analysis of the mineral composition of natural samples was made using the XRD method
C. In order to determine the content of the silty fraction and of the clay minerals present in it, the samples were subjected to sedimentation separation (elutriation).
D. The analysis of the mineral composition of the separated clay fraction was made using the XRD method.
E. Analysis of the degree of quartz grain coating and of the SEM morphology of siliceous minerals, together with non-standard qualitative analysis of the EDS spectrum (Figures 2a & 2b).
Microscopic Examination in Polarized Light
Sandy Silt

Structure: aleuritic-psammitic (silt-sand); texture: compact, random. Composition: quartz, clay minerals, and heavy minerals, among which tourmaline was recognized (Figure 3). The quartz grains are moderately sorted.
Figure 3: Microscopic photos of the sandy silt. A: image of the rock with one polarizer; B: image in polarized light. Visible quartz grains and a brown tourmaline grain. The minerals are located in the intergranular spaces.
Kaolinite Claystone

Structure: silty claystone; texture in places parallel, in places disturbed. The rock groundmass is a very weakly birefringent kaolinite mass (Figure 4). The groundmass consists almost exclusively of clay minerals, accompanied by single quartz grains up to 20 μm in size and by columnar aggregates of kaolinite, in places with a worm-like habit.
Figure 4: Microscopic photographs of the kaolinite claystone. A: image of the rock with one polarizer; B: image in polarized light. Visible differing orientations of kaolinite microaggregates, manifested in their variable extinction under crossed polarizers.
Kaolinite-Muscovite Claystone

The claystone has a parallel texture and is made of kaolinite, smectite, and muscovite (Figure 5), accompanied by single quartz grains.
Figure 5: Microscopic photos of the kaolinite-muscovite claystone. A: image in non-polarized light; B: image in polarized light.
Figure 5.a: A: Water columns in which the granulometric composition was determined. B: Granulometric curve of the tested samples.
Kaolinite-Illite Claystone

Pelitic structure; texture random, disturbed (flow processes). The mineral composition is a mixture of smectite and kaolinite (Figure 6). The microscopic examination shows that the overburden consists mainly of siltstones, sandy mudstones, and claystones. In the sediments, besides quartz, clay minerals such as kaolinite and smectite occur, as well as mica. Microscopic observations also allow several generations of quartz to be distinguished, along with the iron oxides and hydroxides responsible for the red color of the sediments.
Figure 6: Microscopic images of the kaolinite-illite claystone. A: Image in natural light. B: Image in polarized light.
Analysis of the Mineral Composition of Natural Samples by the XRD Method
Because the clay components could not be identified with certainty by polarizing microscopy, the X-ray diffraction method was used. Analyses were performed on natural samples and on the clay fraction as raw, roasted, and glycolated preparations (Graphs 1, 1a, 2, 2a, 3, 3a, 4, 4a, 5). They showed that, besides the detrital components, the overburden and heap samples contain clay minerals represented by kaolinite and illite. The X-ray examinations indicate that all the overburden sediments occurring in the heap contain minerals beneficial for agriculture and do not contain toxic minerals (Figure 5.a).
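As background to why raw, roasted, and glycolated preparations are compared: each clay mineral has a characteristic basal spacing d that fixes its diffraction angle through Bragg's law, nλ = 2d sin θ. Kaolinite's ~7.1 Å reflection disappears after roasting (its structure collapses near 550 °C), smectite's ~14-15 Å spacing expands to ~17 Å on glycolation, and illite's ~10 Å spacing stays put. The sketch below computes the expected peak positions; the Cu Kα wavelength and the d-spacings are textbook assumptions, not measurements from this study:

```python
import math

CU_KALPHA = 1.5406  # X-ray wavelength in angstroms (assumes a Cu K-alpha source)

def two_theta(d_spacing, wavelength=CU_KALPHA, n=1):
    """Return the 2-theta angle (degrees) for a basal spacing d via Bragg's law n*lambda = 2*d*sin(theta)."""
    return 2 * math.degrees(math.asin(n * wavelength / (2 * d_spacing)))

# Approximate basal (001) spacings, in angstroms, for the clay minerals named in the text
basal = {
    "kaolinite": 7.15,
    "illite": 10.0,
    "smectite (air-dry)": 14.5,
    "smectite (glycolated)": 17.0,
}

for mineral, d in basal.items():
    print(f"{mineral:22s} d = {d:5.2f} A  ->  2-theta = {two_theta(d):5.2f} deg")
```

Comparing the three preparations of the same clay fraction then amounts to checking which of these low-angle peaks vanish, shift, or persist.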
Graph 1: X-ray diffraction pattern of sandy silt.
Graph 1a: X-ray diffraction pattern for the fine-grained fraction of the sandy silt. From the top: raw, roasted and glycolated preparation.
Graph 2: A diffractogram of kaolinite claystone.
Graph 2.a: X-ray diffractogram of clay fraction of kaolinite claystone. From the top: raw, roasted and glycolated preparation.
Graph 3: X-ray diffractogram of kaolinite-muscovite claystone.
Graph 3.a: X-ray diffractogram of the clay fraction of the kaolinite-muscovite claystone. From the top: raw, roasted and glycolated preparation.
Graph 4: X-ray diffractogram of the kaolinite-illite claystone.
Graph 4.a: X-ray diffractogram of the clay fraction of the kaolinite-illite claystone. From the top: raw, glycolated and roasted preparation.
Graph 5: X-ray diffraction pattern of dolomite-clay pellets containing quartz crystals.
Results of the Determination of the Clay Fraction Content in a Standing Water Column
For two samples, the content of clay minerals (fraction <2 μm) was determined, choosing samples with extremely low and extremely high clay mineral content. The results showed that the amount varies, both in the overburden material and in the material from the heap, from 3 to 64% by weight (Table 1).
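The separation of the <2 μm fraction in a standing water column relies on Stokes' law, v = g·D²·(ρp − ρf)/(18η): after time t = h/v, all particles coarser than D have settled below the sampling depth h. A minimal sketch of this calculation; the particle density, water viscosity, and sampling depth are generic textbook assumptions, not figures from the study:

```python
G = 9.81                # gravity, m/s^2
RHO_PARTICLE = 2650.0   # typical clay/quartz particle density, kg/m^3 (assumed)
RHO_WATER = 1000.0      # water density, kg/m^3
ETA = 1.0e-3            # water viscosity at ~20 C, Pa*s (assumed)

def settling_velocity(diameter_m):
    """Stokes settling velocity (m/s) of a sphere of the given diameter."""
    return G * diameter_m**2 * (RHO_PARTICLE - RHO_WATER) / (18 * ETA)

def withdrawal_time(depth_m, diameter_m):
    """Time (s) after which all particles coarser than diameter_m have settled below depth_m."""
    return depth_m / settling_velocity(diameter_m)

# Example: pipette the <2 um fraction from 10 cm below the surface
t = withdrawal_time(0.10, 2e-6)
print(f"wait {t / 3600:.1f} h before sampling at 10 cm depth")
```

Weighing the dried pipetted aliquot against the total sample mass then gives the weight-percent of the clay fraction reported in Table 1.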
Table 1.
Results of the SEM Analysis of the Degree of Quartz Grain Coating and of the Distribution of Clay Minerals in the Granulates
SEM observations were made on both the overburden deposits and the granulate. The distribution of clay minerals in the pellets is of great importance, as they favor the disintegration of the granules and thus the dispersion of the granulated material in the enriched soil. In the overburden shales, clay minerals are present as aggregates of kaolinite or illite microplates (Figure 7). In the heap material and in the granulate, on the other hand, the clay aggregates are broken down to the micrometer scale (Figure 8), which means that during stripping, storage on heaps, and granulation, the clay aggregates disintegrate. This is a favorable phenomenon conducive to granulation of the material. Thanks to this structure, after being spread on the soil the granules undergo a faster process of slaking, so that their components combine with the soil more quickly.
Figure 7: Kaolinite from the dolomite overburden. A: characteristic aggregates composed of kaolinite plates. B: one of the aggregates at higher magnification.
Figure 8: Granules with visible quartz grains (blue arrows) and dolomite (orange arrow) bound together by clay minerals. SEM image.
SEM Images (Figures 9-11)
Figure 9: The heap of the Józefka quarry, containing overburden rocks stripped from the Devonian dolomites.
Figure 10: Soil-supplement granules of various grain sizes, made from sediments accumulated in the heaps.
Figure 11: Granular structure observed in a cross-section of the granules, with dolomite grains visible in the clay matrix. A: magnification 10x, B: magnification 20x.
Summary
The performed research indicates that the dolomite deposit at Józefka contains rocks bearing clay minerals of the kaolinite and illite groups as well as quartz, dolomite and calcite. The origin of these rocks has not been established, but from a raw-material standpoint they are suitable for enriching class IV-VI soils with clay minerals, which can supply many ions and promote water retention in the soil. Natural granules made from overburden rocks deposited on heaps and from small dolomite fractions (waste) can be used as soil supplements, enriching the soil with many elements, including magnesium, a beneficial stimulator of biological processes in plants. The use of the described granulates as soil supplements will increase yields and biologically strengthen growing plants, thereby increasing the profits of agricultural producers. Magnesium and the other elements delivered to the soil with the granules will naturally become building blocks of plant tissues and, through the production and processing cycle, will be transferred to the organisms consuming the cultivated plants. These soil supplements are completely natural and contain no synthetic toxic substances; they are therefore friendly to both the environment and humans. The use of granules in the described form will not only strengthen the soil but also help to eliminate mineral heaps, favoring the reclamation of mining areas and restoring them to natural conditions.
For more Lupine Publishers Open Access Journals Please visit our website: http://lupinepublishers.us/ For more open access environment journals articles Please Click Here: https://lupinepublishers.com/environmental-soil-science-journal/
Lupine Publishers | A Clinical Study to Assess the Effects of Pneumoperitoneum on Respiratory Mechanics and Hemodynamics in Laparoscopic Cholecystectomy in Obese Patients
Lupine Publishers | Journal of Anesthesia
Abstract
Aim: To assess the effects of pneumoperitoneum on respiratory mechanics and haemodynamics during laparoscopic cholecystectomy in obese patients.
Background: Laparoscopic surgeries are now performed very frequently. The pneumoperitoneum created during these procedures produces various effects on the haemodynamic and respiratory parameters of patients.
Methods: 61 patients of ASA grade I or II undergoing laparoscopic cholecystectomy were enrolled in this study. After induction of general anaesthesia, patients were ventilated mechanically with a tidal volume of 8ml/kg and a respiratory rate of 18/min. Haemodynamic parameters (HR, SBP, DBP and MAP), respiratory mechanics parameters (peak inspiratory pressure, peak plateau pressure, EtCO2, SpO2 and expiratory tidal volume) and arterial blood samples (pH, PaCO2 and base deficit/excess) were recorded and analyzed at 30-min intervals after creation of pneumoperitoneum with CO2. Changes in haemodynamics, respiratory mechanics and ABG at the different time intervals were compared with baseline values.
Results: a) Pneumoperitoneum insufflation produced a significant increase in heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP) and mean arterial pressure (MAP).
b) It produced a significant increase in peak inspiratory pressure (PIP), peak plateau pressure (PPP) and end tidal CO2 throughout the period of pneumoperitoneum, and even after its desufflation.
c) Arterial blood gas analysis showed that PaCO2 remained elevated even 30min after desufflation in the recovery room, and that pH was lowest at 90min after insufflation and remained decreased 30min after desufflation in the recovery room.
Keywords: Pneumoperitoneum; Haemodynamic; Arterial blood gas; Respiratory mechanics; Laparoscopy
Introduction
Obesity is an important respiratory risk factor after surgery. Morbid obesity has deleterious effects on respiratory mechanics, decreasing lung volumes, functional residual capacity, compliance and arterial oxygenation [1,2]. A study by Casati showed that general anesthesia with reverse Trendelenburg position and pneumoperitoneum (PPM) improved lung compliance as well as gas exchange in morbidly obese patients undergoing surgery [3]. Laparoscopic surgical techniques seem beneficial in obese patients in terms of respiratory morbidity, with a faster return to normal respiratory function. Pneumoperitoneum plays a very important role in laparoscopic procedures; it provides working space for surgeons to introduce instruments intra-abdominally. CO2 is the preferred gas for creating pneumoperitoneum, but its systemic absorption may have detrimental cardiorespiratory effects. There is not much information about intraoperative respiratory mechanics related to pneumoperitoneum in the morbidly obese. As a result, there has been renewed interest in the anesthetic management of this group of patients. Respiratory mechanical properties before or after laparoscopic surgery have been studied [4]. However, there is little information in the literature about intraoperative changes in respiratory mechanics and patient tolerance of pneumoperitoneum (PPM). The aim of this study was to evaluate the effects of PPM on respiratory mechanics and blood gases in obese patients undergoing laparoscopic cholecystectomy.
Methods
After obtaining approval from the Institutional Ethics Committee, the study was carried out on 61 obese patients aged 20-50 years, of either sex, belonging to ASA grade I or II and scheduled for elective laparoscopic cholecystectomy under general anesthesia. All patients underwent a proper preanesthetic assessment, and informed written consent was taken from each. Patients were premedicated with Inj. fentanyl 1-2μg/kg BW, followed by preoxygenation with 100% oxygen for 3 minutes by facemask. General anesthesia was induced with Thiopentone Sodium 5mg/kg BW. Endotracheal intubation was facilitated with intravenous Succinylcholine 1.5mg/kg BW and ventilation with 100% oxygen for 120 seconds. General anesthesia was maintained with nitrous oxide and oxygen (66:33), loading and maintenance doses of Inj. Atracurium (0.1mg/kg BW) and halothane 0.5-1.0%. The lungs were mechanically ventilated with a Mindray Wato EX-20, a volume-cycled ventilator, using a closed circle system incorporating a carbon dioxide absorber. The ventilator was leak-tested before each case. Tidal volume was set at 8ml/kg, with a rate of 18 breaths/min during surgery. The following variables were recorded: peak inspiratory pressure, mean inspiratory pressure and expiratory tidal volume. Expired tidal PCO2, oxygen saturation (SpO2), NIBP, HR and RR were monitored with a Mindray BeneView T5 monitor. During laparoscopy, intra-abdominal pressure was maintained automatically at 10mmHg by a CO2 insufflator. Arterial blood samples were drawn from the radial artery with a 26G needle into a 2.5ml heparinized syringe and analyzed with an Eschweiler Combi Line blood gas analyzer; PaCO2 and pH were recorded. Respiratory and haemodynamic parameters were recorded 15min after induction, then 0, 30, 60 and 90min after creation of pneumoperitoneum and after desufflation (Ti, T0, T30, T60, T90, Td respectively).
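As a quick sanity check on the ventilation recipe above, the per-patient tidal volume and resulting minute ventilation follow directly from the weight-based settings. This is our own illustrative sketch (the function name and the 56 kg example, drawn from the cohort's mean weight reported later, are ours, not the study's):

```python
def ventilator_settings(weight_kg, tv_ml_per_kg=8, rate_per_min=18):
    """Weight-based ventilation recipe used in the study:
    tidal volume 8 ml/kg, respiratory rate 18/min.
    Returns (tidal volume in ml, minute ventilation in L/min)."""
    tv_ml = tv_ml_per_kg * weight_kg
    minute_ventilation_l = tv_ml * rate_per_min / 1000.0
    return tv_ml, minute_ventilation_l

# For a patient at the cohort's mean weight of ~56 kg:
tv, mv = ventilator_settings(56)  # 448 ml per breath, ~8.1 L/min
```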
Arterial blood was sampled 10min after induction and before pneumoperitoneum, then 0, 30, 60 and 90min after creation of pneumoperitoneum and after desufflation (Ti, T0, T30, T60, T90, Td respectively), and in the recovery room 30min after tracheal extubation (Tr). Carbon dioxide was used for peritoneal insufflation, and abdominal pressure was maintained at 8-10mmHg. Throughout the procedure, any rise in MAP of more than 20% above the basal MAP was treated with halothane, Inj. Dexmedetomidine, or both to restore the basal MAP, and the case was excluded from the study group. For a fall in MAP of more than 20% below the basal MAP, halothane was decreased or stopped. A heart rate below 50bpm was treated with Atropine 0.6mg intravenously. After surgery, patients were reversed with Inj. Glycopyrrolate 0.005mg/kg and Neostigmine 0.08mg/kg intravenously. After extubation, patients were observed for recovery time, defined as the time to vocalize after extubation. The observations were recorded and tabulated. Statistical analysis was carried out using SPSS 17, with the paired Student's t-test for intra-group comparisons. P values >0.05 and <0.05 were considered statistically insignificant and significant respectively, and p<0.01 was taken to be statistically highly significant.
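The intra-group comparison used here is a standard paired t-test. A minimal pure-Python sketch of the computation follows; the PIP values are made-up illustrative numbers, not the study's raw data:

```python
import statistics
from math import sqrt

def paired_t(before, after):
    """Paired Student's t statistic for an intra-group comparison:
    t = mean(diff) / (sd(diff) / sqrt(n))."""
    diffs = [b - a for a, b in zip(before, after)]
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / sqrt(len(diffs)))

# Illustrative PIP values (cm H2O), NOT the study's data:
baseline_pip =    [14.2, 15.1, 13.8, 14.9, 14.4, 15.0, 13.9, 14.6]
insufflated_pip = [19.5, 20.2, 18.9, 20.8, 19.7, 20.1, 19.2, 19.9]

t = paired_t(baseline_pip, insufflated_pip)
# For df = 7, |t| > 3.50 corresponds to two-tailed p < 0.01,
# i.e. "highly significant" under the study's thresholds.
```

In practice SPSS (or `scipy.stats.ttest_rel`) reports the p-value directly; the sketch only shows where the statistic comes from.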
Results
Table 1: Effect of Pneumoperitoneum on Haemodynamic Variables.
There was an even distribution of age in the study group, with the majority of patients in the age group of 20-50 years; most patients were female (M:F 3:58), with a mean weight of 56.38±4.71 kg and a mean height of 148.2±2.01 cm. The mean±SD duration of anaesthesia was 90.8±31.58 minutes, and the duration of pneumoperitoneum was 56.6±23.3 minutes. As shown in (Table 1), the mean (±SD) HR decreased insignificantly just after the creation of pneumoperitoneum, then decreased progressively and significantly up to 60 minutes after pneumoperitoneum. The mean HR increased significantly 90 minutes after pneumoperitoneum but decreased significantly (p<0.05) after release of pneumoperitoneum. On comparing the baseline values of mean (±SD) systolic blood pressure, diastolic blood pressure and mean arterial pressure with values after induction of general anesthesia, a significant (p<0.05) increase in SBP, DBP and MAP was observed immediately, 30min, 60min and 90min after insufflation, and even after desufflation.
The mean±SD peak inspiratory pressure (PIP) and peak plateau pressure (PPP) after induction were 14.5±2.4cm H2O and 6.61±1.0cm H2O respectively. Compared with baseline values, a significant (p<0.01) increase in PIP was seen at all time points, including after desufflation. The mean±SD end tidal CO2 and expiratory tidal volume after induction were 29.0±4.1mmHg and 397.16±40.08ml respectively. Compared with baseline values, a significant (p<0.01) increase in end tidal CO2 was observed immediately after insufflation, at 30min, 60min and 90min after insufflation, and after desufflation, but only an insignificant increase was observed in expiratory tidal volume. A significant rising trend in PaCO2 was observed throughout the period of pneumoperitoneum, and PaCO2 remained elevated even after desufflation. A significant decreasing trend in pH was observed immediately after insufflation, at 30min, 60min and 90min after insufflation, after desufflation and 30min after extubation in the recovery room; pH was lowest at 90min after insufflation and remained decreased after desufflation. The mean±SD base excess after induction was -2.32±4.84mEq/l. Compared with baseline values, base deficit decreased insignificantly just after insufflation; thereafter base excess increased significantly at 30min and 60min after insufflation, after desufflation and 30min after extubation in the recovery room, but decreased insignificantly at 90min after insufflation.
Discussion
This study shows that HR decreased initially and continued to decrease significantly up to 60 minutes after pneumoperitoneum, then increased significantly at 90 minutes, and decreased again after release of pneumoperitoneum. The initial decrease may be due to vagal stimulation from peritoneal stretching during pneumoperitoneum insufflation; the later increase may be due to central nervous system activation by CO2 accumulation, which evokes sympathoadrenal activation causing tachycardia, while after release of pneumoperitoneum HR decreased significantly [5,6]. Significant increases in mean MAP values were observed immediately after insufflation, at 30min, 60min and 90min after insufflation, and after desufflation. Throughout the pneumoperitoneum, significant changes in MAP were observed. These are multifactorial: first, centrally mediated increases in myocardial contractility, tachycardia and hypertension; second, sympathoadrenal activation increasing systemic vascular resistance through potential mediators, i.e., catecholamines, prostaglandins, the renin-angiotensin system and vasopressin [7-9]. A significant increase in peak inspiratory pressure and peak plateau pressure was observed immediately after insufflation, at 30min and 90min after insufflation, and after desufflation [10,11]. The reason for these changes during laparoscopic procedures may be respiratory embarrassment due to the mechanical effect of the increased intra-abdominal pressure from the pneumoperitoneum. Abdominal distension markedly alters respiratory system mechanics, primarily through abdominal expansion [12,13]. End tidal CO2 showed a significant increase immediately after insufflation, at 30min, 60min and 90min after insufflation, and after desufflation.
The increase in end tidal CO2 may be attributed to increased absorption of CO2 into the systemic circulation through the peritoneal surface during pneumoperitoneum, and to hypoventilation due to the elevated diaphragm; if ventilatory parameters are not adjusted accordingly, this may lead to hypercarbia and acidosis [14]. A significant rising trend in PaCO2 was observed immediately after insufflation, at 30min, 60min and 90min after insufflation, just after desufflation and 30min after extubation in the recovery room. The pathophysiology of these effects is mainly the consistent increase in PaCO2 caused by CO2 absorption from the peritoneal cavity and by hypoventilation due to the elevated diaphragm. A significant decreasing trend in pH was observed immediately after insufflation, at 30min, 60min and 90min after insufflation, after desufflation and 30min after extubation in the recovery room. This consistent decrease in pH was due to significant absorption of CO2 from the peritoneal cavity and increased metabolic CO2 production after the release of pneumoperitoneum [15]. Base deficit decreased insignificantly just after insufflation; thereafter base excess increased significantly at 30min and 60min after insufflation, after desufflation and 30min after extubation in the recovery room, but decreased insignificantly at 90min (0.27±2.82mEq/l) after insufflation [16,17].
Conclusion
Pneumoperitoneum insufflation with CO2 during laparoscopic cholecystectomy in obese patients produces significant consequences, including increases in heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, peak inspiratory pressure, peak plateau pressure and end tidal CO2. PaCO2 increased and pH decreased, while base excess increased significantly. These changes result from absorption of CO2 via the pneumoperitoneum, vagal stimulation, and the mechanical pressure of the pneumoperitoneum. Anesthesiologists should carefully monitor and document respiratory mechanics and arterial blood gases in these patients.
Lupine Publishers | Agricultural Value Creation through Effective Supply Chain Management
Lupine Publishers | Agriculture Open Access Journal
Abstract
Agriculture is the most important industry for humanity. Unfortunately, however, it is also one of the least effectively managed industries. It is true that over the last several decades, enormous scientific advancements have increased agricultural productivity. However, the question is whether the world has been able to reap the benefits of such advancements to the fullest extent. The agricultural supply chain is characterized by an extremely long and fragmented system with many gatekeepers throughout the value chain. As a result, it is vulnerable to serious systemic malfunctioning such as the bullwhip effect. When a supply chain is afflicted by the bullwhip effect, it suffers huge inefficiencies, including increasing costs, hampered innovation, and weakened problem-solving capability. Unless it overcomes these inefficiencies, the industry as a whole will lose its competitiveness and eventually perish. Therefore, for the agricultural industry to sustain itself and thrive, it is vital to implement supply chain strategy effectively through coordination among all participants in the agricultural value chain.
Introduction
There is no doubt that the agricultural industry is the most essential one for humanity [1]. It also employs a great number of people, providing them with economic means. But it is not easy to say whether the agricultural industry is an effective one. Quite the contrary: the industry has perhaps been the least effectively managed one for the last several thousand years. As people around the world enjoy greater longevity, the world consumes more and more food. Can the world's agricultural industry feed all the people on earth? It is a vital question. If the earth's capacity is limited and not enough crops are produced, the only possible solution is to increase the productivity of the agricultural industry. To find ways to increase that productivity, we first have to understand why the productivity of the agricultural industry is so low. Then we can suggest how the agricultural industry can change itself to be more productive and effective. In this paper, we endeavor to answer this question from a value chain perspective.
The Agricultural Value Chain
Every industry has a value chain, and the agricultural industry typically has a long and quite fragmented one. For the sake of simplicity, we define the agricultural supply chain as consisting of four primary functions, i.e., suppliers, farms, distributors, and consumers (Figure 1). Each of the supply chain participants is defined as follows [2]:
Figure 1:  Agricultural Supply Chain.
a) Suppliers are those companies or individuals, who provide raw materials or necessary supplies to the farms and farmers. These include seeds, fertilizers, pesticides, machinery, equipment, farming tools, and the like.
b) Farms are those companies or individuals, who are actually growing crops like rice, potato, corn, and beans. In order to produce those crops, farms need the farming land, water, and the supplies from the suppliers.
c) Distributors perform two functions, processing and distributing. Once the farms harvest their crops, these crops must be transported to wholesalers, who then sell the crops to retailers. Individual consumers buy their crops from the retail stores. Distributors perform this transportation function. Sometimes certain crops need to be processed, e.g., sliced, refrigerated, or canned, before being transported to wholesalers or retailers. Some of the distributors carry out this processing function.
d) Consumers are those companies or individuals, who use the crops for their businesses or their own use. Companies as consumers include restaurants or food manufacturers, who produce processed foods such as snacks, soft drinks, frozen packaged foods, and so on. Also individuals are important consumers in the agricultural supply chain.
Although the basic structure of the agricultural supply chain is similar to that of other industries, it has unique attributes that are quite unusual compared with other supply chains.
Some of the conspicuous characteristics are as follows:
a) The distance, physical or even psychological, between suppliers and consumers, i.e., the length of the agricultural supply chain, is in general much longer than in other industries. This may be simply due to geographic conditions, e.g., the rural areas where most farms are located are usually far from the urban areas where many consumers reside. Considering the social and cultural differences between rural and urban areas, we suggest that the psychological distance is also quite extensive.
b) As a supply chain's length increases, more intermediaries enter the chain. That is, the longer the supply chain, the more gatekeepers are involved in transactions at its various stages.
c) As a result, the agricultural industry has become more and more fragmented, filled with small players that take a myopic perspective, optimizing their own interest without considering the supply chain as a whole. But this is not a criticism; we just want to highlight the current state of the agricultural industry. Although individuals in the agricultural supply chain behave myopically, i.e., in a way that is suboptimal from the entire supply chain's perspective, it is not because these individuals are ill-intentioned, but because there might be a systemic failure in the supply chain itself. In fact, this is the quintessential proposition of this paper, and in the next sections we delve into explaining why it might happen.
Figure 2:  The Bullwhip Effect in the Agricultural Supply Chain.
The Bullwhip Effect
In order to create maximum value in an industry, its supply chain must function effectively. But this is not an easy feat to accomplish: it requires seamless coordination among the participants in the supply chain. As such, we often observe across-the-board breakdowns of supply chains in many different industries. One of the most conspicuous breakdowns is the bullwhip effect, which arises when a long and often complex supply chain makes it difficult for information to flow efficiently through the chain. Let us first explain what the bullwhip effect is and how it affects the agricultural supply chain.
The bullwhip effect is the phenomenon, where the amount of order or production at an upstream function fluctuates more than that at its downstream function. As Figure 2 shows, the order or production at the distributor fluctuates more than that at the end market. The order or production at the farm (i.e., manufacturing function in a general supply chain) fluctuates more than that at the distributor, and so forth. More fluctuation means more uncertainty faced by the function in point. For instance, the uncertainty faced by the farm is greater than that faced by the distributor. Why does the bullwhip effect hurt the industry? The bullwhip effect causes the uncertainty to magnify more as the decision point moves from a downstream to an upstream function. There are two aspects of the uncertainty. First, there is the magnitude of fluctuation, i.e., the extent or size of the fluctuation, which becomes larger as one moves backward in the supply chain (Figure 3). The other is the time-to-stability, i.e., how long it takes to return to the stable state, which becomes longer as one moves toward an upstream function. As the uncertainty increases, other things being equal, the firm has to keep larger inventory in order to buffer against the uncertainty.
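The amplification described above is easy to reproduce in a toy simulation. The sketch below is our own illustrative model, not from the paper: each stage reacts to its incoming orders with a simple trend-chasing rule, and the standard deviation of orders grows at every step upstream, exactly as Figure 2 depicts.

```python
import random
import statistics

def stage_orders(incoming, k=0.5):
    """One supply-chain stage reacting to its incoming orders with a
    trend-chasing rule: order_t = d_t + k*(d_t - d_{t-1}).
    Any k > 0 amplifies variability, producing the bullwhip effect."""
    orders, prev = [], incoming[0]
    for d in incoming:
        orders.append(d + k * (d - prev))
        prev = d
    return orders

def simulate(n_stages=3, n_periods=500, seed=1):
    """Pass consumer demand up through retailer -> distributor -> farm;
    return the standard deviation of orders at each level."""
    rng = random.Random(seed)
    demand = [rng.gauss(100, 10) for _ in range(n_periods)]
    series, stdevs = demand, [statistics.stdev(demand)]
    for _ in range(n_stages):
        series = stage_orders(series)
        stdevs.append(statistics.stdev(series))
    return stdevs

stdevs = simulate()
# Variability grows at every step away from the end market.
assert all(b > a for a, b in zip(stdevs, stdevs[1:]))
```

With k = 0.5 the order stream at each stage is a linear filter of the one below it, and the variance multiplier compounds stage by stage, which is why the farm and supplier face far more uncertainty than the retailer.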
Figure 3:  Consequences of the Bullwhip Effect.
A larger inventory has three negative consequences for the firm. First, more inventory means higher inventory management cost; this is the most widely mentioned consequence of keeping a large inventory. However, there are consequences more damaging than the cost itself. When a firm holds more inventory than is effectively optimal, it becomes more difficult for it to innovate. For instance, when the firm's warehouse is full of old, outdated inventory, it cannot introduce new products into the market unless it is willing to write off that inventory. Finally, keeping an unnecessarily large inventory might prevent the firm from identifying and solving problems, because the inventory can cover up serious flaws so that managerial problems cannot be identified appropriately. If you cannot identify a problem, you cannot solve it. We expect these negative consequences of holding unnecessary inventory to affect the agricultural industry in the same way they impact other industries: due to the chronic uncertainty embedded in its supply chain, the agricultural industry is afflicted with three deficiencies, i.e., increased inventory management cost, decreased innovation, and reduced problem-solving capability.
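The link between uncertainty and inventory can be made concrete with the textbook safety-stock rule. The numbers below are our own illustrations; the z value and lead time are assumptions, not figures from the paper:

```python
from math import sqrt

def safety_stock(z, sigma_demand, lead_time_periods):
    """Textbook buffer-stock rule: SS = z * sigma * sqrt(L).
    The inventory needed to hold a given service level grows
    linearly with demand uncertainty (sigma)."""
    return z * sigma_demand * sqrt(lead_time_periods)

# Same ~95% service level (z = 1.65) and 4-week lead time, but the
# upstream stage faces bullwhip-amplified order variability:
downstream = safety_stock(1.65, 10, 4)  # sees calm end-market demand
upstream = safety_stock(1.65, 27, 4)    # sees amplified orders
assert upstream > 2.5 * downstream      # far more buffer inventory
```

Because sigma enters linearly, a stage whose incoming orders fluctuate 2.7 times more must carry 2.7 times the safety stock for the same service level, which is precisely the cost burden the bullwhip effect imposes upstream.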
Discussion
We have discussed why and how the bullwhip effect in the agricultural industry can cause serious inefficiencies and deficiencies. Considering the unique structure of the agricultural supply chain, we believe the negative impacts of the bullwhip effect affect the agricultural industry more severely than other industries. Is there any way to overcome these problems? To answer the question, we should consider the fundamental cause of the bullwhip effect. As Figure 1 hints, the main cause of the bullwhip effect is poor information quality, which in turn is determined by two factors: the physical or psychological distance, and the number of gatekeepers between the information source (the consumer market) and the decision point (each supply chain participant). The longer the distance, the poorer the information quality; the more gatekeepers, the poorer the information quality. As a result, in order to reduce and eventually eliminate the negative consequences of the bullwhip effect in the agricultural supply chain, we should improve information quality. One of the most effective ways to do so is to enhance communication and information sharing among the participants in the agricultural supply chain. Then another relevant question arises: why hasn't such communication taken place effectively in the agricultural supply chain? Like human beings, firms do not communicate with each other unless there is trust between them. This line of reasoning leads us to the logical conclusion that, in order to solve the damaging problems prevalent in the agricultural industry, it is essential to dramatically enhance trust among the partners and participants in the supply chain.
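The claim that information quality degrades with each gatekeeper can also be sketched quantitatively. In the toy model below (our own assumption, for illustration only), each gatekeeper relays the market signal with its own independent error, so the correlation between what an upstream decision-maker sees and real consumer demand falls as the chain lengthens:

```python
import random
import statistics

def relay(signal, n_gatekeepers, noise_sd=5.0, seed=0):
    """Each gatekeeper passes the market signal along with its own
    independent error, so distortion accumulates per hop."""
    rng = random.Random(seed)
    out = list(signal)
    for _ in range(n_gatekeepers):
        out = [x + rng.gauss(0, noise_sd) for x in out]
    return out

def correlation(a, b):
    """Pearson correlation between two equal-length series."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

rng = random.Random(7)
demand = [rng.gauss(100, 10) for _ in range(2000)]
short_chain = correlation(demand, relay(demand, 2))
long_chain = correlation(demand, relay(demand, 10))
# More gatekeepers -> weaker link to real consumer demand.
assert long_chain < short_chain
```

The model is deliberately simple, but it captures the mechanism: with independent noise of variance s² per hop, the signal-to-noise ratio after n gatekeepers falls like 1/sqrt(1 + n·s²/σ²), so shortening the chain or making gatekeepers share information directly both improve information quality.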
It is, however, neither easy nor straightforward to make supply chain participants suddenly trust each other unless a trust-based relationship has already existed among them. Fortunately, a new technology is emerging that can help managers and their companies trust other players in the supply chain without necessarily having known each other for long: blockchain technology [3]. In fact, there are already several cases where blockchain enables a supply chain to function effectively by restoring trust among the players sharing the same value chain [4]. Although agriculture has been regarded as a traditional industry not readily compatible with cutting-edge technological innovation, now is the time for the industry to transform itself completely. The key to this change is technology. In particular, the agricultural industry must invest heavily in innovative technologies that help partners and participants trust each other in doing business.
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000132.pdf
Lupine Publishers | Environmental Impacts of Hydropower and Alternative Mitigation Measures
Lupine Publishers |  Agriculture Open Access Journal
Abstract
Hydropower is an important renewable energy resource worldwide. However, its development is accompanied by environmental drawbacks. This paper reviews the environmental drawbacks associated with hydropower projects. Changes in hydrological flow regimes, deteriorating water quality, barriers to migration corridors, sedimentation, greenhouse gas emissions and biodiversity loss are environmental impacts of hydropower projects. Appropriate mitigation measures are required to generate hydropower sustainably; these are also discussed in the paper.
Keywords: Environmental impacts; Hydropower; Mitigation measures
Introduction
Hydropower is one of the most efficient power generation technologies; it is carbon free and uses an inexhaustible resource to produce energy. The prime driver is the force of gravity, and the water used to drive this power is not consumed [1]. According to Yüksel [2], hydropower does not pollute the air we breathe, in the sense that this energy source does not produce any air pollutants. Unlike thermal power plants, for example, there are no gaseous or fly-ash emissions during production. Since hydropower often replaces fossil-fired generation, it can also be said to reduce the problems of acid rain and smog [1,2]. Despite all these advantages, hydropower plants may also have negative impacts, and lately their ecological impact has received attention. The report of the World Commission on Dams [3] states that dams have effects on, e.g., the terrestrial ecosystem and biodiversity, the flow regime, and the migration of aquatic organisms, and can cause emissions of greenhouse gases. Bratrich [4] states that hydropower affects the flow regime, the migration of organisms and the transport of nutrients and sediments.
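For reference, the gravitational driving force mentioned above translates into available power via the standard hydropower equation P = ρ g Q H η. The sketch below uses illustrative flow and head values of our own choosing, not figures from any cited project:

```python
def hydro_power_kw(flow_m3_per_s, head_m, efficiency=0.9,
                   rho=1000.0, g=9.81):
    """Standard hydropower equation P = rho * g * Q * H * eta,
    returned in kilowatts: density of water (kg/m3) times gravity
    (m/s2) times flow (m3/s) times head (m) times plant efficiency."""
    return rho * g * flow_m3_per_s * head_m * efficiency / 1000.0

# Illustrative run-of-river scheme: 50 m3/s through a 20 m head
p = hydro_power_kw(50, 20)  # ~8,829 kW, i.e. roughly 8.8 MW
```

The equation makes clear why flow regime changes matter so much: output scales linearly with the discharge Q, so any alteration of the hydrological regime directly trades off against downstream flow.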
Abbasi [1] claims that hydropower plants cause major ecological impacts in all four of the habitats associated with such projects: the estuary into which the river flows, the downstream reaches of the dammed river, the reservoir catchment and the artificially created lake. Research from all corners of the world has reported the negative effects of hydropower on the environment and called for the adoption of appropriate mitigation measures [5,6]. Therefore, to ensure sustainable development, various mitigation and enhancement measures have to be integrated at the early stages of project planning. Furthermore, appropriate mitigation measures need to be devised not only for hydropower development that is newly planned and implemented in the future, but also for the refurbishment and upgrading of hydropower plants which are currently in operation. The purpose of this paper is to review the impacts of hydropower on the environment and alternative mitigation measures.
Impacts of Hydropower
Hydropower projects have several impacts on the environment. Water quality decline is one of them: water quality may be affected around a hydropower plant [7]. Water discharged from a reservoir can be of a different composition to the water that flows into the reservoir [8]. Another impact of hydropower plants is migration barriers. Hydropower dams impede the flow of rivers and thereby affect the habitat of various aquatic life [9]. Migratory animals require different environments for the different phases of their life cycle: reproduction, production of juveniles, growth and sexual maturation. These different stages take place in different environments for different fishes [6]. Even though hydropower is a renewable energy source, it is not an energy source without greenhouse gas (GHG) emissions [5]. Greenhouse gas emissions can occur at three different phases of a hydropower plant: the construction, the operation and maintenance, and the decommissioning of the plant. During the construction phase the emissions come from the production and transport of the materials needed for the construction and from the work equipment.
In the second phase, operation and maintenance, the emissions can come e.g. from heating or cooling systems and transportation for maintenance work [5]. Hydropower projects also affect sedimentation [10]: the natural sediment transport of the river's usual flow will be negatively affected [5,10]. Bergengren [6] state in their report that when constructing a dam or hydropower plant, changes in the hydrological regime will follow. The IPCC suggests the same thing, claiming in their report that changes in the hydrological regime are a significant impact of hydropower plants. They argue that due to a hydropower plant there will be changes in water level, timing and temperature, which will affect the surrounding terrestrial and aquatic ecosystems [8]. Among the biggest threats from hydropower projects are the loss of biodiversity and of ecosystems that provide services we cannot live without. The loss of valuable vegetative community types and the loss of wildlife and habitats result from land clearance and removal of natural vegetation [11].
Mitigation Actions
The purpose of environmental mitigation requirements at hydroelectric projects is to avoid or minimize the adverse effects of development and operation. Among the mitigation actions, dam removal is becoming a more frequently used management option all around the world. Dam removal is taken into consideration for old dams in need of renovation or for small dams that are no longer used or have lost most of their reservoir capacity. From another point of view, dams and their reservoirs have many ecological effects, and the disruption of the movement of different organisms is probably the most important reason for dam removal. Dam removal makes fish migration possible and allows fish species to shift from lentic to lotic forms, which in turn have the ability to migrate and reproduce in free-flowing water [12]. Flow regulation is also a promising action for mitigating the negative impacts of hydropower projects. In order to make the use of hydropower plants more compatible with the natural life of rivers, a minimum flow must be released so as to assure the preservation of the hydrological continuity of the river and the consequent conservation of natural habitat and ecological life [13]. Minimum flow release means allowing some flow below a hydropower plant with the dual aim of maintaining current water ecological conditions and partly also serving aesthetic or recreational purposes on a stretch of watercourse which would otherwise have remained mostly drained [13].
The problem with sedimentation in the surroundings of a hydropower plant can be mitigated by sedimentation measures such as the construction of small-scale weirs to trap the sands and particles, which can later be manually removed [14]. A direct approach to reduce the accumulation of sediment is to mechanically remove the sediments by periodic dredging [7]. Biotope adjustment is another mitigation action: by adjusting the biotope through the creation of greater environmental heterogeneity, using habitat-adjusting measures, the biological diversity of the benthic fauna will be strengthened [15]. Constructing migratory corridors is another mitigation action for migrating animals; to facilitate the migration of species in the streams it is desirable to construct such corridors [11]. Fish-friendly turbine technology is an emerging technology that provides a safe approach for fish passing through the turbines by minimizing the risk of injury or even death [8]. Stocking of aquatic animals is another mitigation measure to reduce the negative effects of hydropower projects on aquatic fauna: the fertilization, hatching, growth and release of aquatic animals, especially salmonid fish, are artificially managed [16]. Introducing stocked fish to protect endangered species, or reintroducing species that have disappeared, may be of great benefit to the environment and to biodiversity [16].
Discussion
From the above review, it is clear that hydropower projects have impacts on the environment. Changes in hydrological flow regimes, deteriorating water quality, migration barriers, sedimentation, greenhouse gas emissions and biodiversity loss are environmental impacts of hydropower projects. Dam removal, flow regulation, biotope adjustment, fish stocking, sedimentation measures, constructing migration corridors and fish-friendly turbines are alternative actions for mitigating the negative impacts of hydropower projects on the environment [17]. Therefore, to meet the increasing demand for energy and ensure sustainable development, various mitigation and enhancement measures have to be integrated at the early stages of project planning. Furthermore, appropriate mitigation measures need to be devised not only for hydropower development that is newly planned and implemented in the future, but also for the refurbishment and upgrading of hydropower plants which are currently in operation.
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000133.pdf
For more Agriculture Open Access Journal articles Please Click Here: https://www.lupinepublishers.com/agriculture-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
Follow on Linkedin : https://www.linkedin.com/company/lupinepublishers Follow on Twitter   : https://twitter.com/lupine_online
Lupine Publishers| Hypertrophic Cardiomyopathy in Children: The Need of Heart Transplantation
Lupine Publishers | Cardiovascular Research
Abstract
Hypertrophic cardiomyopathy (HCM) is the most common cardiac disease affecting the cardiac muscle. It can manifest in different forms, with or without left ventricular outflow obstruction and with or without right ventricular involvement. Forms with biventricular hypertrophy seem to have a poor prognosis. We describe a young patient with sarcomeric biventricular hypertrophic cardiomyopathy (MYH7 mutation), the poor prognosis of this form, and the strategy options adopted after failure of medical treatment. The management of hypertrophic cardiomyopathy after medical treatment failure is not always easy, especially in children. In some cases, heart transplantation is the only therapeutic option.
Keywords: Hypertrophic Cardiomyopathy; Right Ventricular Hypertrophy; Heart Transplantation
Introduction
Hypertrophic cardiomyopathy (HCM) is the most common cardiac disease affecting the cardiac muscle and is characterized by heterogeneous genetic, morphological, functional, and clinical features. It is also one of the main causes of sudden cardiac death (SCD) in the young. Left ventricular hypertrophy with left ventricular outflow tract obstruction (LVOTO) is the most characteristic feature of HCM. There are also variants of HCM without LVOTO, with apical hypertrophy, with mid-ventricular obstruction, and with right ventricular hypertrophy. The treatment and prognosis of HCM vary on the basis of the different forms, the age at presentation, and sarcomeric gene mutations or rare phenocopies. Heart transplantation (HT) is the only therapeutic option for selected patients with HCM and refractory heart failure. Indeed, ESC guidelines recommend heart transplantation in eligible patients who have an LVEF < 50% and NYHA functional Class III-IV symptoms despite optimal medical therapy or intractable ventricular arrhythmia (Class IIa), and in eligible patients with normal LVEF (≥50%) and severe drug-refractory symptoms (NYHA functional Class III-IV) caused by diastolic dysfunction (Class IIb) [1].
Severe right ventricular hypertrophy (SRVH) is a relatively rare subtype of HCM. The anatomic, genetic, clinical, and prognostic characteristics of patients with SRVH, and the clinical relevance of these characteristics, have not been widely described in the literature [2,3]. MYBPC3 gene mutations have previously been described in two patients with RV hypertrophy. In a recent study, 90% of HCM patients with SRVH were found to possess relevant sarcomere protein mutations and variations in the MYH7 (myosin heavy chain 7) and TTN genes, followed by variations in MYBPC3. In the same study, 73% of HCM patients with SRVH and multiple sarcomere gene mutations had a poor prognosis. In addition, MYH7 mutations can cause hypertrophic cardiomyopathy or skeletal myopathies with or without cardiac involvement, depending on the site of the mutation. Here we describe the poor prognosis and treatment strategies of a young patient with biventricular hypertrophic cardiomyopathy and an MYH7 mutation.
Case Report
A 12-year-old young woman with a family history of hypertrophic cardiomyopathy (mother and mother's twin with biventricular hypertrophic cardiomyopathy and MYH7 mutation) was hospitalized in our hospital for dyspnea after mild-moderate effort and reduced functional capacity (NYHA Class II). The mother and aunt of the patient were asymptomatic with good functional capacity. The patient had the same genetic mutation as her mother and aunt (p.Asn696Ser heterozygous MYH7) but with a greater and poorer phenotypic expression [4]. Echocardiography and cardiac magnetic resonance were performed, showing a hypertrophic cardiomyopathy with right ventricular involvement. Precisely, cardiovascular imaging showed asymmetric left ventricular hypertrophy, especially at the level of the anterior and inferior walls (basal and mid anterior wall = 14 mm, z score = 3.5; antero-lateral basal wall = 12 mm, z score = 2.78; mid inferior wall = 14 mm and apical inferior wall = 12 mm), with normal ejection fraction (EF = 62% at CMR) and moderate diastolic dysfunction (Panels B and D). In addition, the wall thickness of the right ventricular outflow tract and of the basal-mid free wall was increased (13 mm), with apical obstruction and development of a maximum gradient of 10 mmHg (Panels A and C) [5,6] (Figure 1).
Figure 1.  
Right ventricular function was at the lower limits of normal (EF = 51% at CMR, TAPSE = 16 mm at echocardiography). Thus the patient had significant right ventricular involvement and moderate diastolic dysfunction of the left ventricle. She had no arrhythmia on Holter ECG, but she had reduced functional capacity, also demonstrated at stress testing. The stress test was suspended at 6 min (Bruce protocol) after pre-syncopal symptoms: a lack of adaptation of the blood pressure to the effort was observed. In addition, for several months she had had pre-syncopal episodes at the peak of effort. ECG showed left ventricular hypertrophy and biatrial enlargement. Pro-BNP was increased (5841 pg/ml). Considering the clinical situation, we decided to start medical treatment with beta-blockers (bisoprolol), but the patient did not tolerate it. We then decided to start a low dose of captopril, without improvement of the symptomatology. Treatment with diuretics was also not tolerated by the patient [7,8]. Therefore, considering the symptoms refractory to medical therapy, the poor prognosis and the impossibility of optimizing medical treatment, we decided to plan cardiac transplantation, the only option possible at this moment.
Right catheterization was then performed and the patient was placed on the heart transplantation list. ICD implantation was not considered, in the absence of ventricular arrhythmia and other risk factors.
Discussion
Hypertrophic cardiomyopathy associated with MYH7 mutation and right ventricular involvement seems to have a poor prognosis, especially if right ventricular hypertrophy is severe [9]. Indeed, the young patient had greater right ventricular hypertrophy than her mother and aunt. In these cases, after medical treatment failure, heart transplantation seems to be the only strategy to improve the symptomatology and quality of life of the patient. Especially in the pediatric population, the management of hypertrophic cardiomyopathy after medical treatment failure is not always easy, and heart transplantation seems to be the only therapeutic option. Further studies are needed on variants of HCM with right ventricular hypertrophy, their treatment and prognosis.
https://lupinepublishers.com/cardiology-journal/pdf/ACR.MS.ID.000130.pdf
For more Cardiovascular Research go through the below link
https://lupinepublishers.com/cardiology-journal/index.php
Lupine Publishers | Pre-cochlear Implantation Aural/Oral Rehabilitation, Is it Mandatory?
Lupine Publishers | Journal of Otolaryngology
Abstract
Background: The use of cochlear implantation (CI) can fully restore hearing. Consequently, speech production can improve over time and enter the normal range. When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant is a widely used treatment option for children with sensorineural hearing loss (SNHL).
Purpose: The aim of this study is to compare the functional benefit in communicative skills of children with CI without pre-implantation aural/oral rehabilitation relative to those with CI after 6 months of pre-implantation aural/oral rehabilitation, in order to assess the role of pre-implantation aural/oral rehabilitation on the communicative abilities of children with severe-to-profound and profound sensorineural hearing impairment.
Method: This study has a prospective design. It started after the final diagnosis and the decision that all children were candidates for CI; half of them were fitted with behind-the-ear hearing aids and the other half were immediately implanted, provided that a primary language assessment before rehabilitation was present in the medical files of these children. A second language assessment was done after 12 months of language therapy to detect the progress of language development. These sixty patients were divided into two groups:
a) Group A: Thirty children, who used behind-the-ear hearing aids for one year before CI and attended regular language therapy.
b) Group B: Thirty children, who shifted immediately to cochlear implantation and were enrolled in auditory training and language therapy for one year.
Results: Total language age of children using a cochlear implant without pre-implantation aural/oral rehabilitation was significantly higher than that of the children who used hearing aids for one year before CI. Also, there was a highly significant difference between frontal and back speech sounds in the children after immediate implantation, with a positive correlation.
Conclusion: Cochlear implantation is a safe and reliable technique. Many profoundly hearing-impaired children using an immediate cochlear implant without pre-implantation aural/oral rehabilitation can develop functional levels of speech perception and production, and can even develop competency in a language other than their primary language. Continuation of language therapy, together with proper mapping, is a must to enroll these children in mainstream education.
Keywords: Hearing Aids; Cochlear Implant; Language; Speech Intelligibility; Pre-implantation Rehabilitation
Abbreviations: SNHL: Sensorineural Hearing Loss; HA: Hearing Aids; CI: Cochlear Implantation
Introduction
Language in children begins to develop from birth and is nearly complete by the age of 6 years. Language skills, speech quality, and expressive and receptive vocabulary are enhanced by exposure to aural language from as early an age as possible [1]. Children spend many hours in acoustic environments where target speech signals are embedded in competing sounds from multiple sources. In these environments, perception of target speech is assisted by a listener's ability to segregate the multitude of sounds into separate auditory streams, one cue to which is the angle of incidence of different sounds [2]. Children with profound sensorineural hearing loss (SNHL) experience delays in learning to understand the speech of others and to produce intelligible speech. There is solid evidence that moderate (or more severe) hearing impairment exerts a negative impact on speech, language, and cognitive development, and that early identification and management may be of great benefit to these children, through improved language, communication, mental health, and employment prospects [3]. The use of hearing aids (HA) or cochlear implantation (CI) can partially or fully restore hearing. Consequently, speech production can improve over time and enter the normal range. After hearing is restored, hearing-impaired individuals use auditory feedback to adjust voice features such as voice intensity, intonation and vowel duration [4]. When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant (CI) is a widely used treatment option for children with SNHL [5]. Cochlear implants, also called bionic ears, are effective in transmitting salient features of speech, especially in quiet [6].
Because the goal of restored hearing in a deaf child is to enable useful hearing, a key measure of outcome should reflect how a deaf child’s experience with a CI develops into the effective use of spoken language. Parental surveys indicate that the outcome of their greatest concern after surgical intervention in children with SNHL is the level of spoken language achieved [7]. Cochlear implants have become a popular option for children with profound hearing loss. Evidence supporting the benefits of early implantation is found in experimental [1], developmental [2], and clinical cochlear implant studies [3]. The consensus is that children have the best opportunity to learn language during their first 5 years of life. According to [2], this critical period for language learning is particularly important in deaf and hearing-impaired children. Providing cochlear implants to deaf children at a young age may enable them to take advantage of this critical period for learning language and is likely to increase their chances for developing speech and language skills like those of normal-hearing children. Early implantation would also result in a decrease in the duration of auditory deprivation, a decrease considered to positively influence performance with a cochlear implant [4].
Objectives
The aim of this study is to compare the functional benefit of the communicative skills of children with immediate CI without preimplantation aural/oral rehabilitation in relation to those using hearing aid with pre-implantation aural/oral rehabilitation in order to compare the role of each amplification device and the effect of pre-implantation aural/oral rehabilitation on the communicative abilities of severe to profound and profound sensorineural hearing impaired children.
Subjects & Methods
This research was conducted during the period between the years 2017 and 2018. The study protocol was approved by the Otolaryngology Department Council of Beni-Suef University and the Otolaryngology Department Council of King Abd Elaziz Specialized Hospital, Jouf, Saudi Arabia. Consent to participate in this research was obtained from the subjects' parents before commencement of the study. This study employed a comprehensive design to examine outcomes in multiple domains of communication in children who used either bilateral behind-the-ear hearing aids with pre-implantation aural/oral rehabilitation or an immediate unilateral cochlear implant without pre-implantation rehabilitation for a period of one year. These were selected from children seeking language rehabilitation in the Phoniatrics Unit, Beni-Suef University Hospital, and children seeking language rehabilitation in the Phoniatrics Clinic, King Abd Elaziz Specialized Hospital, Jouf, Saudi Arabia. Shortly after confirmation of bilateral permanent hearing loss, thirty children were fitted with bilateral behind-the-ear hearing aids using the desired sensation level (DSL) prescription method and regularly attended aural/oral rehabilitation sessions. Thirty children underwent a comprehensive team evaluation for cochlear implant candidacy and received immediate unilateral cochlear implants without pre-implant aural/oral rehabilitation. All children received audiologic management and preschool rehabilitation, and all were enrolled in rehabilitation programs with a focus on the development of receptive and expressive language. Children attended the Phoniatrics clinic regularly and were asked to follow an auditory rehabilitation and language therapy program twice per week. Children with cochlear implants were followed every month for mapping of their speech processor and speech recognition testing.
The study received ethical approval from the Hospital of Beni-Suef University and from King Abd Elaziz Specialized Hospital, and written informed consent was obtained from all the parents. Collaboration between the ENT clinic, Audiology clinic and Phoniatrics clinic took the form of ENT examination, audiological assessment, and language assessment and rehabilitation for all children. This study has a prospective design. It started after fitting half of the children with bilateral behind-the-ear hearing aids with aural/oral rehabilitation sessions, while the other half of the children were immediately implanted, mapped, and regularly attended aural rehabilitation sessions, provided that the primary language assessment before rehabilitation was present in the medical files of all children. A second language assessment was done after 12 months of language therapy to detect the progress of language and the efficacy of pre-implantation aural/oral rehabilitation. These sixty patients were divided into two groups:
a) Group A: Thirty children, who used behind-the-ear hearing aids for one year and attended regular language therapy, although these children were candidates for cochlear implants.
b) Group B: Thirty children, who shifted immediately to cochlear implantation, had regular language therapy and were enrolled in auditory training.
Half of the patients were fitted with bilateral powerful digital-signal-processing BEHAs and used them for at least a 12-month period before CI. Hearing aid use was determined by parental and therapist reports. After surgical implantation of the device and an adequate healing period for the other half of the patients, the implants were activated (usually 4 weeks after surgery). The children were fitted with one of two brands of speech processors using a behind-the-ear controller. Speech processors used in this study were the OPUS 2 with standard Sonata electrode and the Cochlear Freedom processor with Nucleus 24k straight electrode.
In this study, using the Modified Preschool Language Scale and a Subjective Speech Intelligibility Test gave us a summary of the improvement of these children. This is consistent with other studies which emphasized that both comprehension and expression of spoken language are important markers of parent-perceived success of a CI (Figure 1).
Language Improvement Quotient: The language improvement quotient [8] was used to compare the rates of progress in language, in order to overcome the bias of age matching between the individuals in the study.
Language improvement = (2nd language age - 1st language age) / duration of language rehabilitation.
a) A1 refers to the language development of group (A) after using bilateral behind-the-ear hearing aids for 12 months, calculated as: A1 = (2nd language age - 1st language age) / 12 (months of rehabilitation).
b) B1 refers to the language development of group (B) after using a unilateral CI for 12 months, calculated as: B1 = (2nd language age - 1st language age) / 12 (months of rehabilitation).
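As a worked illustration, the quotient above is a simple rate calculation. The sketch below, in Python, uses invented language ages (in months), not patient data from this study:

```python
def language_improvement(first_age, second_age, duration_months):
    """Language improvement quotient: gain in language age (months)
    per month of rehabilitation. A value of 1.0 means language age
    advanced at the same pace as chronological time."""
    return (second_age - first_age) / duration_months

# Hypothetical child: language age rises from 6 to 24 months over a
# 12-month programme, so the quotient is (24 - 6) / 12 = 1.5.
a1 = language_improvement(6, 24, 12)
print(a1)  # 1.5
```

Because the quotient normalizes by therapy duration, groups evaluated at different chronological ages can be compared on rate of progress rather than raw language age, which is the bias the authors cite.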
Speech analysis was performed using the Ain Shams assessment protocol, which includes analysis of supra-segmental phonology (rate, stress and tonality), segmental phonology (consonants and vowels), nasal resonance and general intelligibility of speech. Assessment of auditory perception skills was performed by evaluating a hierarchy of listening skills ranging from detection to discrimination, identification, recognition and comprehension. Assessment of speech reading abilities was done and expressed as percent change over time.
Statistical Studies
Data were analyzed using SPSS, the Statistical Package for the Social Sciences, version 17 (SPSS Inc., Chicago, IL). Numerical data were expressed as mean, standard deviation, and range. For quantitative data, comparison was done using the Mann-Whitney test (non-parametric t-test). A p-value < 0.05 was considered significant. The Spearman-rho method was used to test correlation between numerical variables (r < 0.3 = no correlation, r = 0.3-0.5 = fair correlation, r = 0.5-1.0 = good correlation).
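To make the correlation bands concrete, here is a minimal from-scratch sketch of Spearman's rho (rank correlation) together with the interpretation thresholds stated above. The data are invented for illustration; in practice one would use a statistics package such as SPSS, as the authors did:

```python
def rank(values):
    """Return 1-based ranks; ties receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal values (a tie group)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def interpret(r):
    """Bands used in this paper: <0.3 none, 0.3-0.5 fair, above good."""
    r = abs(r)
    if r < 0.3:
        return "no correlation"
    if r <= 0.5:
        return "fair correlation"
    return "good correlation"

rho = spearman_rho([1, 2, 3, 4, 5], [2, 3, 1, 5, 4])
print(round(rho, 2), interpret(rho))  # 0.6 good correlation
```

Working on ranks rather than raw values is what makes the test non-parametric: it measures monotone association without assuming normally distributed scores, which suits ordinal measures like language-age scales.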
Results
Group (A) were hearing aid users for one year; group (B) were CI users for one year. Demographic data of the two groups:
a) Age
b) Gender.
c) Incidence of hearing loss.
d) Psychometric evaluation.
e) Pure tone Audiometry.
f) First language age.
g) Radiology.
a) Age Distribution: Both groups were matched according to age. In group (A) the age of the children ranged between 3 years and 7 years. In group (B) the age of the children also ranged between 3 years and 7 years, provided that all children were implanted before the age of 6 years.
b) Gender: No significant difference was noted in the gender of both groups.
c) Incidence of Hearing Loss: In group (A) there were 24 children (80%) with congenital hearing impairment and 6 children (20%) with acquired hearing loss, while in group (B) there were 18 children (60%) with congenital hearing impairment and 12 children (40%) with acquired hearing loss.
d) Psychometric Evaluation: All children in group (A) had a normal psychometric evaluation, with a mean value of 87.5 ± 4.6; in group (B) all children also had a normal psychometric evaluation, with a mean value of 86.4 ± 5.1.
e) Pure Tone Results: Pure tone results of group (A) maintained a mean value of 27.9 dB HL, while in group (B) mean values decreased from 65.7 ± 8.2 dB HL. There was a highly significant difference (P = 0.001) between group (A) and group (B), in favor of group (B).
f) First Language Age: Before the start of therapy, both groups had no passive vocabulary and were nonverbal. They used either babbling or vocal play.
g) Radiology: All children in both groups (A) and (B) had normal CT and MRI of the petrous bone.
Tables 1-3 demonstrate the progress of the language abilities, the auditory abilities and the speech reading abilities of both groups, respectively, from the time just prior to the rehabilitation (either oral/aural in group A or aural in group B) compared to the evaluation done one year after the rehabilitation (Table 4).
Discussion
The primary purpose of this study was to obtain comprehensive data on the development of language and speech skills in a group of children with permanent hearing impairment. This group shared a common degree of bilateral hearing impairment (severe-to-profound or profound), they all sought amplification, and they all sought language rehabilitation after receiving amplification, using primarily auditory-based cues. The study also aimed to investigate the difference between language and speech development under two amplification conditions: bilateral behind-the-ear hearing aids and unilateral cochlear implants. The choice of language age deficit to compare language skills development among the studied groups is justified by the fact that three variables usually co-vary when language results are analyzed in children: age at use of the amplification device, whether hearing aid (HA) or cochlear implant (CI); the language age before the start of rehabilitation; and the language age of children after the period of rehabilitation. The difference in ages at evaluation places the younger children at a maturational and developmental disadvantage in comparison with their older peers. Thus, analyzing the results in terms of language age scores might put the younger group at a disadvantage. At the same time, analyzing the results in terms of language age deficits, although more reasonable, still in theory puts the older group at a disadvantage, because the impact of their ages gives higher values for the deficit from the scored language age. That is why the hypothesis of using the language improvement quotient [8], after determining the exact language age, may be more realistic and less biased by the chronological age differences at the time of evaluation.
A perfect model for comparing the results of both devices may be practically impossible, given the current indications for cochlear implant use. In this study, which was applied to two groups of comparable ages, a comparison was made between the outcomes of the two devices over a (re)habilitation period of one year in their course of therapy. If the CI group were doing better than the HA group, it would indicate that the selection criteria were too conservative and that some of the HA users might be better off with a CI. This supports the suspicion that HA users plateau after a period of little progress, or at least that their progress continues at a slower pace. Cochlear implants may have a superior effect on the acoustic environment of children compared with hearing aids. During the 1990s, using the early models of speech processors, studies showed that CI users gained better results than HA users in language and perception skills [9-14]. The minimum age for implantation has progressively reduced [15]. Advantages of cochlear implants over hearing aids extend also to the adult population [15]. In a study by [16], children with CIs and children with HAs, aged 4 to 5 years, were found to differ significantly in language abilities, and there were differences in articulation skills in favor of the CI users. Advances in sound processors and related software have enhanced the fidelity with which complex sounds are processed into physiologically meaningful codes [17]. This study pointed to the importance of conducting comprehensive assessments when evaluating whether a child with severe-to-profound sensorineural hearing loss would likely derive greater benefit from a cochlear implant compared to a hearing aid. To date, only a few systematic studies have involved large numbers of children who received implants at various ages and have investigated both the effects of age at implantation and the amount of experience with an implant.
Most such studies were concerned with speech perception skills after cochlear implantation, with clear evidence of the effect of early implantation on the rate of acquisition of these skills when children are implanted at 2-4 years of age [18]. Concerning auditory abilities, the cochlear implantation group progressed to better abilities than the hearing aids group. This may be explained by the hearing aids group being more rigidly attached to their habit of relying on visual cues, making these children less efficient in acquiring the training provided during therapy sessions. In explanation of this, [19] described recruitment of the auditory cortex by the visual and somatosensory systems in congenitally deaf humans. They reported that the extent of cross-modal recruitment of the auditory cortex increases as the duration of deafness increases, deterring the restoration of auditory processing in the auditory cortex of long-term deafened individuals after cochlear implantation. They also suggested that the age beyond which the effects of cross-modal plasticity in the auditory cortex are more difficult to reverse is about 6.5 years. It has also been documented that the cochlear place code changes during development [20]. This change may be necessary for the formation of normal and effective connections between auditory centers and for the proper development of elements within the central auditory pathways. Early cochlear implantation may contribute to the maintenance of these important developmental milestones.
Conclusion
CI children showed a better rate of language skill acquisition over one year of implant use than HA users did over a similar period. The implanted group demonstrated significantly better auditory abilities, better speech production skills, and better speech intelligibility one year after implantation with aural rehabilitation than the aided group with oral/aural rehabilitation. The implanted group also ended with significantly less reliance on speech reading than the aided group one year after implantation. These results indicate the favorable effect of implantation on the above parameters, and language skills showed a significant difference between the two groups. Consequently, oral/aural rehabilitation with hearing aids, even for a few months, is not mandatory.
For more Lupine Publishers Open Access Journals Please visit our website: http://lupinepublishers.us/ For more Journal of Otolaryngology-ENT Research articles Please Click Here: https://lupinepublishers.com/otolaryngology-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
Follow on Linkedin : https://www.linkedin.com/company/lupinepublishers Follow on Twitter   :  https://twitter.com/lupine_online
Lupine Publishers | Journal of Pediatric Dentistry
Abstract
The present study showed the presence of Candida tropicalis as the major fungus isolated from dental caries in deciduous teeth.
Keywords: Dental caries; C. tropicalis
Introduction
Candida species are the most frequent fungi found in the oral cavity [1]. These microorganisms provoke a pathology known as candidiasis in its many forms [2]; the yeast can also be found in dental decay lesions and in gingival and periodontal disease [3]. Candida albicans is the species most frequently found in all these lesions [4], while other Candida species such as C. tropicalis, C. glabrata, C. krusei, and C. guilliermondii are less often present in the oral cavity [5]. Dental caries is the most frequent oral lesion worldwide, and its etiology is eminently microbial, with Streptococcus mutans producing the demineralization and destruction of teeth [6]. The aim of this study was to isolate and characterize Candida species from dental caries in deciduous teeth.
Materials and Methods
Fifty children, female and male, between 4 and 6 years old with dental caries were selected from the pediatric dentistry clinic of Universidad Andina del Cusco. Before the carious lesions were removed and restored, a caries sample was collected with a dental spoon excavator and stored in 0.9% NaCl [7]. The samples were then sonicated, and a 100 µL aliquot was plated on CHROMagar Candida medium (CHROMagar, Paris, France) and incubated for 48 hours at 37°C [7]. The CHROMagar Candida manual instructions were followed to determine the presence of Candida species.
Results
The Candida species most frequently present in dental caries in deciduous teeth was C. tropicalis. Other Candida species were found at lower percentages (Table 1).
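Since Table 1 is not reproduced in this excerpt, a small sketch illustrates how isolation percentages of this kind are tabulated. The species counts below are invented for illustration, not the study's data:

```python
from collections import Counter

# Hypothetical isolate counts per species (illustrative only).
isolates = Counter({"C. tropicalis": 21, "C. albicans": 9,
                    "C. glabrata": 4, "C. krusei": 2})

total = sum(isolates.values())
percentages = {sp: round(100 * n / total, 1) for sp, n in isolates.items()}

# In this sketch C. tropicalis dominates, mirroring the reported result.
print(percentages)
```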
Discussion
Candida species are the most frequent fungi found in the oral cavity, with C. albicans being the most pathogenic yeast of the genus [4]. This microorganism has been found in many oral lesions, including candidiasis, dental caries, and gingival and periodontal disease [2,3]. Other non-albicans Candida species are found less frequently; however, C. tropicalis and C. glabrata have been described as emerging pathogens in recent years [8]. In the present study, C. tropicalis was present in most cases of dental caries in deciduous teeth, a finding corroborated by other studies in which the main pathogen was C. tropicalis. Most studies, in fact, have found C. albicans to be the major pathogen isolated from dental caries [6,9,10]. This difference can be explained by the geographical location of the patients, since Candida species can be found in amounts that depend on the geographical area. In this study, other Candida species were found in smaller amounts. Despite its limitations, the present study demonstrated a high rate of C. tropicalis in dental caries in deciduous teeth; however, it has not been determined which factors are involved in the pathogenesis of dental caries produced by C. tropicalis. It is also important to study the oral microbiome in dental caries to elucidate the role of Candida species, mainly C. tropicalis, in the development of dental caries in deciduous teeth.
Conclusion
Candida tropicalis was the most frequent fungus found in dental caries lesions in deciduous teeth in children between 4 and 6 years old.
For more Lupine Publishers Open Access Journals Please visit our website: http://lupinepublishers.us/ For more Open Access Journal on Pediatric Dentistry articles Please Click Here: https://lupinepublishers.com/pediatric-dentistry-journal/
Lupine Publishers | Recognizing, Detecting and Understanding the Development of Lipid Oxidation Off-Flavors in Foods and Beverages
Lupine Publishers | Agriculture Open Access Journal
Lipid oxidation is a major cause of off-flavor problems in the food industry, and lipid oxidation products can be difficult to detect and identify. New sample preparation techniques to extract lipid oxidation products prior to GC-MS analysis, and innovative, more sensitive GC-TOFMS instrumentation that performs peak deconvolution on coeluting chromatographic peaks, are critical to detecting these chemicals and diagnosing possible mechanisms of lipid oxidation. The goal of this article is to show how these new analytical technologies have been applied to solving lipid oxidation problems in milk, ice cream, onion dip, sour cream, powdered nondairy creamer, bottled water, and omega-3 fortified foods.
Keywords: Lipid oxidation products; Aldehydes; Ketones; Dimethyl disulfide; Odor synergy effects
Abbreviations: GC-TOFMS: Gas Chromatography Time-of-Flight Mass Spectrometry; DTD: Direct Thermal Desorption; PDMS: Polydimethylsiloxane; SBSE: Stir Bar Sorptive Extraction; HSSE: Headspace Sorptive Extraction; TDU: Thermal Desorption Unit; CIS: Cooled Injector System; MPS: Multi-Purpose Sampler; TIC: Total Ion Chromatogram; DHA: Docosahexaenoic Acid; EPA: Eicosapentaenoic Acid; DHS: Dynamic Headspace
Introduction
The majority of customer off-flavor complaints about food and beverage products are probably related to lipid oxidation products (aldehydes, ketones, etc.). Since lipid oxidation problems can cause product recalls and expensive lawsuits, they must be dealt with quickly. This article discusses examples of lipid oxidation problems in which, in many cases, the off-flavor contaminants could not be readily characterized as originating from lipid oxidation. It discusses how new technology in sample preparation and extraction, coupled with gas chromatography time-of-flight mass spectrometry (GC-TOFMS) instrumentation and peak deconvolution software, can provide insights into the nature of lipid oxidation off-flavors. As the following examples demonstrate, it is remarkable how many types of foods and beverages are impacted by oxidation, how many unusual ways lipid oxidation contaminants can form, how difficult it is to detect and measure the causative chemicals, and how new analytical techniques are assisting in the elucidation of the oxidation mechanisms involved.
Results and Procedure
Unprecedented levels of oxidation off-flavor complaints in milk
Dairy food companies have long been aware of off-flavors caused by butterfat oxidation when milk is exposed to direct sunlight or fluorescent lights in the supermarket dairy case, and companies have gone to great lengths to minimize light exposure from artificial light in supermarkets. A few years ago, the largest domestic fluid milk producer was encountering unprecedented off-flavor complaints in the late winter and early spring. While the problem has been a common industry occurrence, it had been getting worse in recent years. Based on initial sensory testing, the complaints seemed to be related to lipid oxidation, even though the company employed packaging to mitigate the problem. Light exposure causes chemical reactions in milk that can modify the proteins and fats to produce many negative flavors, ranging from burnt (burnt feathers or hair) to cardboard or metallic. The resulting off-flavors depend on various factors such as exposure time, the intensity and wavelength of the light exposure, and the composition of the milk [1]. Gas chromatography-mass spectrometry (GC-MS) testing revealed that higher levels of hexanal, heptanal, and other lipid oxidation aldehydes and ketones had formed in the milk, in addition to elevated levels of dimethyl disulfide, an off-flavor chemical that forms from the degradation of the sulfur-containing amino acid methionine when it is exposed to UV radiation. The aldehydes, ketones, and dimethyl disulfide were detected at ppb levels by a technique called headspace solid-phase microextraction (SPME) GC-MS.
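Quantifying an off-flavor marker such as hexanal at ppb levels typically relies on an external-standard calibration curve. A minimal sketch, with invented peak areas (the study's actual calibration data are not given here):

```python
import numpy as np

# Hypothetical external-standard calibration for hexanal in milk (ppb).
std_conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])       # ppb spiked
std_area = np.array([50, 2050, 5050, 10050, 20050.0])  # detector counts

# Fit a straight line: area = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_ppb(area):
    """Invert the linear calibration to estimate concentration."""
    return (area - intercept) / slope

# A sample peak area maps back onto the curve.
print(round(area_to_ppb(8050.0), 2))  # 8.0 ppb for this synthetic data
```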
Elevated levels of dimethyl disulfide in all complaints confirmed that light abuse was the cause of the lipid oxidation off-flavors. The critical questions facing dairy technologists were: Why were unusually high concentrations of lipid oxidation off-flavors forming in the milk, and how could the problem be eliminated? Chemists used different GC-MS techniques to analyze butterfat profiles. To their surprise, the composition of the butterfat triglycerides was atypical in complaint samples, containing significantly higher levels of linoleic acid than normal and thus making the fat more susceptible to lipid oxidation. Further investigation revealed the reason for the change in the fatty acid profile of the milk. Over the previous few years, dairy farmers had been feeding their cows increasing levels of soybeans, a cheap feed source, in their winter rations. This change in diet modified the fatty acid profile of the milk, increasing the levels of linoleic and linolenic acids in the butterfat and elevating the levels of malodorous lipid oxidation byproducts. Dairy farmers who supplied milk to the company were told to decrease levels of soybean meal in their winter rations and to add higher levels of vitamin E (antioxidant) supplements to reduce lipid oxidation. Headspace SPME GC-MS was used to monitor milk from individual dairy farms to ensure compliance.
Vanilla ice cream with “burnt-feathers” off-flavor
Another interesting problem related to photo degradation of dairy components occurred with vanilla ice cream samples that were stored in a new freezer warehouse facility in Milwaukee, WI. The dairy company began receiving off-flavor complaints (putrid, burnt-feathers malodor) immediately after ice cream samples were shipped to supermarkets from their new freezer warehouse. Food technologists suspected samples were contaminated with a foreign chemical present in the warehouse. However, when they inspected the warehouse, no source could be identified. The complaints continued. GC-MS analysis using a technique called purge-and-trap GC-MS (with Tenax TA as the trapping sorbent followed by GC-MS analysis) revealed the offending chemical was dimethyl disulfide. Protein degradation reactions by microbial enzymes could be a source of the dimethyl disulfide. But high concentrations of hexanal from lipid oxidation reactions in the complaints compared to normal tasting samples from the same lot indicated that the samples may have experienced photo degradation reactions. Other clues supported this theory. For example, the ice cream packaging had a clear plastic lid that allowed light exposure. Not all the samples placed in the warehouse developed the off-flavor.
Food technologists returned to inspect the warehouse a second time, now looking for potential light sources. The warehouse was equipped with high-intensity lights, which were suspected as the cause; it was noted that products stored on the top shelves of the warehouse were in close proximity to the lights. A new production lot of ice cream was brought into the warehouse, and half of the ice cream on the top shelf was covered with a tarpaulin. Uncovered samples stored on the top shelves developed high levels of dimethyl disulfide (from photodegradation of methionine), while none of the covered samples developed the off-odor. Figure 1 shows chromatograms of a control (a normal-tasting sample covered by the tarpaulin) and a complaint (uncovered) sample with excessive formation of dimethyl disulfide. Technically, lipid oxidation was not the cause of the off-flavor. However, the excessive concentration of hexanal indicated that photooxidation could be a potential problem arising from the degradation of methionine by the high-intensity lights. This is a good example of how chromatographic testing can supply important indirect information regarding the origins of off-flavor development. In the milk samples discussed in section A, dimethyl disulfide was a confirmatory marker pointing to lipid oxidation products as the cause of off-flavor formation; in this ice cream example, elevated hexanal levels in complaints were a good marker that the dimethyl disulfide was formed by photodegradation of the sulfur-containing amino acid.
Figure 1:  Light-induced putrid off-flavor in vanilla ice cream by purge-and-trap (Tenax TA) GC-MS.
French onion dip exposed to lights in the supermarket
Sometimes the problem with resolving lipid oxidation off-flavors is that they are not always easy to identify, or are not even suspected as a possibility. For example, samples of French onion dip were submitted for analysis because of an unidentified off-flavor. After extensive sensory paneling of the ingredients used in the manufacture of the dip, no clear cause was evident. The dip manufacturer considered that supermarket lighting could be the source of the off-flavor but tended to rule it out because the product was sold in light-blocking packaging and the off-flavor was not similar to the typical off-flavors that taint oxidized milk. The company resorted to SPME GC-MS testing but could not detect any chemical markers typical of lipid oxidation. As a last resort, the food chemists looked to more sensitive sample preparation techniques prior to GC-MS. The technique that provided the first clues to the problem was headspace stir bar sorptive extraction (HSSE) GC-TOFMS. Figure 2 shows a diagram explaining the experimental steps. To further improve the GC-MS analysis, the Leco Pegasus HT time-of-flight mass spectrometer was used to increase sensitivity and provide peak deconvolution advantages. A study was set up in which normal-tasting control onion dip samples were exposed to UV lights under refrigerated conditions for up to a week of storage. The more sensitive extraction technique detected the tell-tale aldehyde and ketone lipid oxidation products in onion dip samples exposed to UV light for 7 days. HSSE, using a PDMS Twister stir bar with 50 times more sorbent phase than a SPME fiber, can be significantly more sensitive than SPME and, unlike SPME, does not suffer from analyte competition effects at active sorbent sites. As in the SPME testing, samples were heated to 50°C for an hour during extraction while a mixture of the onion dip and water was stirred with a Teflon-coated stir bar.
Using a substantially more sensitive extraction technique generated complicated chromatograms, with approximately 120 analytes detected. Overlapping analyte peaks can be a problem, so peak deconvolution capabilities become imperative for accurate quantitation. The complaint had higher concentrations of all 15 oxidation products compared to control samples. The lipid oxidation products contributing most to the off-flavor were 2-hexenal; heptanal; 2,4-hexadienal; 2-pentylfuran; (E,E)-2,4-decadienal; t-2-nonenal; and nonanal.
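The deconvolution idea can be illustrated in miniature: when two analytes coelute, the observed mass spectrum is a linear combination of their reference spectra, and least squares recovers each analyte's contribution. The spectra below are invented for illustration, not real library entries:

```python
import numpy as np

# Reference mass spectra (relative ion abundances at shared m/z channels)
# for two hypothetical coeluting analytes. Values are illustrative.
hexanal = np.array([100.0, 60.0, 35.0, 10.0, 0.0])
heptanal = np.array([20.0, 80.0, 50.0, 90.0, 40.0])

# The observed spectrum at the overlap point is a mixture of both.
observed = 0.7 * hexanal + 0.3 * heptanal

# Deconvolution: solve for each analyte's contribution by least squares.
A = np.column_stack([hexanal, heptanal])
coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
print([round(c, 3) for c in coeffs])  # recovered contributions
```

In practice the TOFMS software does this across thousands of spectra per second, which is why the fast acquisition rate matters.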
Figure 2:  HSSE set up for PDMS Twister extraction using thermal desorption tube, thermal desorption unit (TDU), and cooled injection system (CIS) packed with glass wool.
These chemicals generated the stale taste of the product. Packaging-related chemicals detected were styrene (plastic odor) and benzophenone (a photoinitiator used in inks on packaging); however, levels of these two chemicals were too low to be of concern. Approximately 25 different sulfur-containing onion flavor compounds were detected in the samples and were not significantly different between control and complaint samples. Six different lactones were detected, with higher levels present in complaint and light-exposed samples vs. controls; these included caprolactone, heptalactone, octalactone, nonalactone, decalactone, and dodecalactone. While low levels of lactones can impart creamy, desirable flavors to dairy products, higher levels can impart burnt, fruity, stale, and coconut off-flavors. Lipid oxidation was not initially suspected as the cause of the problem because of the packaging used and because the off-flavor was not recognized as a lipid oxidation off-flavor by the sensory panelists, even though they were trained to identify lipid oxidation off-flavors in milk. Background flavor chemicals from the onion flavoring were obscuring the lipid oxidation off-flavors, making them unrecognizable. In addition, a more sensitive extraction GC-MS technique had to be used to detect the oxidation malodorants; the chemicals were difficult to extract by other sample preparation techniques because of the high fat content of the product. The Leco Pegasus GC-TOFMS was also required because of its higher sensitivity and peak deconvolution capabilities. The fast sampling rates of time-of-flight mass spectrometers generate higher data density than scanning quadrupole mass spectrometers, thus providing superior peak deconvolution capabilities [2]. Of course, the HSSE GC-TOFMS technique is useful for studying other types of off-flavors in French onion dip and other dairy products.
One dip manufacturer was concerned about styrene contamination migrating from packaging and wanted a test that could reliably detect less than 5 ppb styrene in the dip. While SPME GC-MS could not detect styrene at that low a level, HSSE GC-TOFMS was shown to detect as little as 0.1 ppb styrene.
Sour cream oxidation in pilot plant samples
A food company was experiencing off-flavor problems with a processing line used for sour cream production in its pilot plant research facility. It replaced all of the ingredients multiple times, and still the problem persisted. Sensory paneling was unable to determine the cause of the off-flavor, even though processing temperatures and heat exposure times were carefully monitored. Researchers speculated that the problem was related to a microbial culture issue. SPME GC-MS was used to analyze samples; in this case, lipid oxidation aldehydes and ketones were detected but no dimethyl disulfide. While sensory paneling was again incapable of recognizing oxidation off-flavors because of the background flavors associated with sour cream, GC-MS analysis clearly showed that the off-flavors were caused by lipid oxidation of the butterfat. GC-MS analysis provided even more insight into the cause of the formation of the lipid oxidation byproducts. Photodegradation was not likely the cause, since dimethyl disulfide was not detected in the off-flavor samples; contamination by contact with prooxidant metals was suspected instead. Inspection of chromatographic peaks identified by mass spectrometry ruled out bad fermentation metabolites. After consultation with the maintenance department, which had recently performed “upgrades” to the processing line, the food technologists learned that a new copper valve had been installed. After replacing the copper valve with a stainless steel valve, the off-flavor problem disappeared. This is a good example of how profiling volatile chemicals with GC-MS can provide useful clues for investigating the source of off-flavor contamination. Wet chemistry tests like the TBARS assay and peroxide value test would not have provided the key clue that dimethyl disulfide was absent and that light oxidation was therefore unlikely.
Detection of oxidation products in nondairy creamer
A large manufacturer of nondairy creamer (NDC) was experiencing off-flavor complaints with its NDC powder. Sensory analysis pointed to lipid oxidation, a common problem for this product. However, SPME GC-MS could only detect slightly elevated concentrations of hexanal in the complaints compared to the controls and could not detect the other unsaturated oxidation chemicals that contribute more potent off-flavors. Again, chemists turned to a different, more sensitive analytical method: direct thermal desorption (DTD) using a slitted microvial followed by GC-TOFMS [3]. A slitted microvial with 30 mg of a vegetable oil is shown in Figure 3; for NDC testing, 30 mg of powder was placed in the slitted vial. Compared to the normal non-slitted vial typically used, the slitted vial extracted higher levels of volatiles from samples, since the carrier gas flow generates more turbulence in the vial to flush out the volatile analytes, resulting in better sensitivity and lower detection limits. Figures 4 & 5 show an NDC sample in the slitted vial and the vial placed in the thermal desorption tube tray of the autosampler in preparation for thermal desorption in the Gerstel Thermal Desorption Unit (TDU). During the thermal desorption process, volatiles are cryogenically trapped with liquid nitrogen cooling in the Cooled Injector System (CIS). The system is the same as shown in Figure 2, only substituting the thermal desorption tube containing a PDMS Twister stir bar with a tube containing the NDC sample in the microvial. The SPME GC-MS test used one-gram sample weights of NDC added to 4 milliliters of deionized water and 1 gram of sodium chloride (Figure 6).
Figure 3:  Microvial with slit 1 cm from bottom of vial with 30 mg edible oil sample for direct thermal desorption (DTD).
Figure 4:  Analysis of edible oil by DTD GC-TOFMS with slitted microvial showing carrier gas flow with increased turbulence.
Figure 5:  Analysis nondairy creamer by DTD with slitted vial and tube tray holder for automated analysis.
Figure 6:  30 mg nondairy creamer analyzed by DTD GC-TOFMS showing lipid oxidation products and added flavorants.
The sample, water, and salt were added to a 20 mL GC vial, heated to 50°C for 20 min, and extracted with a SPME fiber. Hexanal was the only lipid oxidation product detected; the top trace in Figure 7 shows these results. With the new DTD GC-TOFMS technique, using a slitted microvial and robotic thermal desorption instrumentation from Gerstel Inc., 30 mg of NDC was added to a Gerstel thermal desorption tube. The sample tube was placed in the Gerstel Multi-Purpose Sampler (MPS) autosampler. The sample was heated to 100°C for 15 min while the eluting volatiles were cold-trapped in a PTV injector (Gerstel CIS) cooled to -80°C. Once all the volatiles were trapped, the PTV was rapidly heated to 260°C, releasing the volatiles into the GC capillary column, where they were separated and sent to the TOFMS detector for detection and identification. Even though the DTD sample preparation technique used only 30 mg of sample, it detected eleven different lipid oxidation products that the SPME GC-MS method could not detect. The improved sensitivity of DTD vs. SPME is attributed to the fact that DTD is an exhaustive volatile extraction technique, whereas SPME is, by nature, an equilibrium extraction technique. Therefore, greater amounts of analytes are extracted by DTD than by SPME; in effect, much higher analyte recoveries are afforded by DTD.
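The sensitivity gap between equilibrium and exhaustive extraction follows directly from SPME partitioning theory: at equilibrium, the fiber holds only the fraction K·Vf / (K·Vf + Vs) of the analyte (two-phase model, headspace ignored). A sketch with an assumed partition coefficient, for illustration only:

```python
def spme_extracted_fraction(K, v_fiber_uL, v_sample_uL):
    """Equilibrium SPME: fraction of analyte on the fiber at equilibrium
    (two-phase fiber/sample model, headspace ignored for simplicity)."""
    return K * v_fiber_uL / (K * v_fiber_uL + v_sample_uL)

# Illustrative numbers: an aldehyde with an assumed K = 500,
# ~0.5 uL of PDMS on a SPME fiber vs. a 5 mL sample.
spme = spme_extracted_fraction(500, 0.5, 5000)
# An exhaustive technique such as DTD recovers essentially all of the analyte.
dtd = 1.0

print(f"SPME recovery ~{spme:.1%}, DTD recovery ~{dtd:.0%}")
```

With these assumed numbers SPME recovers only a few percent of the analyte, which is consistent with DTD revealing oxidation products that SPME missed.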
The chemists worried that the higher heating temperature used for DTD compared to SPME was generating more lipid oxidation products. They rationalized that increased oxidation was not likely occurring, because during thermal desorption of volatiles from the thermal desorption tube the sample is continually flushed with helium, so no air (oxygen) reaches the NDC sample. Additional experimentation, illustrated by the chromatograms in Figure 7, showed that this was indeed the case. A one-gram sample of NDC was placed in the 20 mL GC vial and heated to 100°C for 15 min. The sample was allowed to cool, the water and salt were added, and the sample was then subjected to the normal SPME test. No additional lipid oxidation chemicals were detected in the sample heated to 100°C for 15 min. However, some new chemicals were generated by the heating: furfural and furan methanol from thermal decomposition of the sugars in the NDC. These chemicals were not detected when the DTD method was used, as illustrated in the chromatograms in Figure 6.
Figure 7:  Same nondairy creamer sample as Figure 6 by SPME GC-MS with Agilent 5973 MS.
DTD testing of control and complaint samples with DTD GC-TOFMS (Leco Pegasus HT GC-TOFMS) clearly showed that lipid oxidation products were a source of the off-flavors in the complaint samples. It also provided important clues as to the possible origin of these malodorous aldehydes and ketones. In addition to elevated levels of lipid oxidation products, DTD chromatograms revealed elevated levels of free fatty acids and other carboxylic acids (especially hexanoic acid) in complaint samples. These acids likely originated from the casein powder used in the product formulation. Casein powder has been shown to develop a characteristic “wet-dog” malodor from microbiological spoilage, generating hexanoic acid and other volatile acids. Sensory studies with hexanoic acid added to caseinate show a positive correlation with a barny/animal-like sensory attribute [4]. Follow-up studies showed that the lipid oxidation products, as well as hexanoic and other acids in the retained caseinate powders, were unusually high and were the primary contributors to the malodor of the complaint NDCs. So again, GC-MS testing was able to reveal not only that lipid oxidation products were contributing off-flavors to the product, but also that they were originating from the caseinate ingredient, not the more likely suspect: the vegetable oil used in the product. This is another example of the potential of GC-MS techniques to reveal the source of off-flavors in food products.
Complaint with bottled water samples
Complaint and control bottled drinking water samples were submitted for analysis; customers complained that the water had a malodor reminiscent of motor oil. SPME GC-MS, a technique now commonly employed in food industry laboratories for flavor studies, was used for testing volatiles. In this case, an olfactometry detector was added to the analytical instrumentation. With this accessory, the chemist injects a sample extract into the GC injector; the mixture of volatiles passes through the column, separates into component peaks, and exits the column. Half of the effluent is directed to the MS detector for peak identification and peak intensity measurement, while the other half goes to the olfactometry detector. Chemists or sensory scientists can then sniff each peak as it elutes from the column and determine its odor characteristics. SPME GC-MS analysis of control and complaint samples did not reveal any significant chemical differences between the samples or any significant odors by olfactometry. The samples were reanalyzed using stir bar sorptive extraction (SBSE) GC-MS, a technique employing polydimethylsiloxane (PDMS) sorbent as an extractant that can be 1000 times more sensitive than SPME GC-MS; it is further explained in Figure 8. SBSE is based on the Gerstel Twister (Baltimore, MD). For these samples, the 2 cm PDMS Twister with 0.5 mm film thickness (63 μL volume) was used to increase analyte sensitivity, and thirty milliliters of sample was stirred for 2 hr. As shown in Figure 8, hexanal, heptanal, octanal, nonanal, and decanal, all potential lipid oxidation products, were detected in all complaint samples but not in any of the control samples (or were at levels at least 10 times higher in complaints vs. controls). Only hexanal was detected by SPME GC-MS, even though the same type of GC column and chromatographic conditions were used.
Flavor chemists know that this group of aliphatic aldehydes are potential lipid oxidation products, but they certainly do not smell like motor oil. Furthermore, when the samples were analyzed by SBSE GC-MS with olfactometry, no odors were detected except for octanal, which had a slightly sweet/green odor. No other chemical clues were present that could explain the cause of the malodor in the complaints, so more testing was conducted.
Figure 8:  Odor syngergy effects with aldehydes in bottled water by PDMS SBSE GC-TOFMS.
In every case, complaint waters had significantly higher levels of aliphatic aldehydes than controls, and the aldehydes were the only significant peaks detected. On a hunch, the chemists considered possible odor synergy effects. Combinations of single chemicals can produce enhancing or masking sensory effects; in addition, odor synergy effects have been known to create odors unrelated to the individual contributing chemicals, as explained in the next example for fishy malodors. Researchers conducted experiments to determine whether various combinations of aldehydes smell like motor oil even though the individual aldehyde components did not [5]. When odor synergy effects are observed, combinations of two or more chemicals can generate odors that are totally different from, and unrelated to, the individual compounds when smelled individually. To test whether this was the case, the chemists conducted a model-systems study, subjecting a series of standards containing various combinations and concentrations of the aldehydes in DI water to sensory analysis. They determined that a mixture of octanal and nonanal (>4.0 ppb and >6.0 ppb, respectively) generated the identical motor oil malodor. This was astonishing to the chemists and sensory scientists involved in the study, since these well-known chemicals had never been associated with a motor oil-type malodor. Figure 8 shows the SBSE GC-TOFMS total ion chromatogram (TIC) results for the sensitive SBSE technique. One baffling question remained: What was the source of the lipid oxidation products in bottled water, a product with no added unsaturated fat? Further investigation revealed the source of the aldehydes to be photodegradation of erucamide, the amide of the C22 mono-unsaturated erucic acid used as a slip agent in the closures of the bottled water samples.
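The model-system finding reduces to a simple conjunctive rule: both aldehydes must exceed their thresholds simultaneously for the motor oil odor to appear. A sketch using the thresholds reported above (the function name is ours, for illustration):

```python
def motor_oil_synergy(octanal_ppb, nonanal_ppb):
    """Flag the motor oil odor synergy from the model-system study:
    both aldehydes must exceed their thresholds at the same time."""
    return octanal_ppb > 4.0 and nonanal_ppb > 6.0

# Neither aldehyde alone reproduces the malodor...
print(motor_oil_synergy(9.0, 1.0))  # False
print(motor_oil_synergy(1.0, 9.0))  # False
# ...but together, above threshold, they do.
print(motor_oil_synergy(5.0, 7.0))  # True
```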
Omega-3 fortification causing fishy malodors
While oxidation reactions of docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) are responsible for fishy off-flavors in marine oils, gas chromatography-olfactometry (GC-O) and other analytical studies had failed to reveal which specific oxidation products are involved until recently. GC-MS-O studies of marine oils with fishy malodors revealed approximately 32 lipid oxidation products irrespective of the extraction technique used, including SPME, dynamic headspace (DHS), or HSSE, but none was characterized as fishy. However, when all sample volatiles were captured together and then desorbed simultaneously in GC-O experiments, the fishy malodor was evident, indicating that odorant synergy effects were responsible. By trapping various combinations of lipid oxidation products and then eluting them in GC-O experiments, researchers identified the combination of two lipid oxidation products, heptanal and (E,Z)-3,5-octadien-2-one, as responsible for the fishy malodor that develops in omega-3 products; excluding either compound eliminated the fishy malodor [6]. To prove that heptanal and (E,Z)-3,5-octadien-2-one were synergistically involved in creating the fishy malodor, the researchers trapped only these two chemicals from a sample and released them simultaneously to the olfactometry port: a strong fishy malodor was observed. Next, they tried omission experiments. They injected a sample with fishy malodor and trapped all chemicals except heptanal; when this mixture was released to the olfactometry detector, no fishy odor was detected. They repeated the experiment, this time eliminating only the (E,Z)-3,5-octadien-2-one; again, no fishy malodor was detected.
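The omission experiments above boil down to a screening rule: a compound is essential to the odor if removing it alone abolishes the odor that the full mixture shows. The following is a hedged Python sketch of that screening idea; only the compound names come from the text, while the predicate and helper function are illustrative assumptions.

```python
# Omission-experiment logic: the fishy note requires BOTH members of
# the synergistic pair, so omitting either one abolishes it.

FISHY_PAIR = {"heptanal", "(E,Z)-3,5-octadien-2-one"}

def smells_fishy(volatiles):
    """Odor predicate: fishy only when the full pair is present."""
    return FISHY_PAIR <= set(volatiles)

def essential_components(volatiles, odor_predicate):
    """Omission screen: a compound is essential if removing it alone
    abolishes an odor that the full mixture exhibits."""
    if not odor_predicate(volatiles):
        return set()
    return {c for c in volatiles
            if not odor_predicate(set(volatiles) - {c})}

mixture = {"heptanal", "(E,Z)-3,5-octadien-2-one", "hexanal", "octanal"}
print(sorted(essential_components(mixture, smells_fishy)))
# ['(E,Z)-3,5-octadien-2-one', 'heptanal']
```

The screen flags exactly the two compounds whose omission eliminated the fishy malodor in the GC-O experiments, while the other oxidation products drop out as non-essential.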
Conclusion
The examples discussed in this report show that detecting lipid oxidation products in complex samples, and determining their mechanism of formation, can be challenging. However, with the aid of modern innovative extraction techniques like SBSE, HSSE and DTD, in combination with highly sensitive TOFMS detectors (e.g., the Leco Pegasus HT TOFMS or the Leco BT TOFMS), it is possible to decipher the critical causes of lipid oxidation off-flavors in foods and beverages, so that adjustments in ingredients, processing conditions or sample storage can be made quickly to avoid product recalls and damage to a product's reputation.
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000131.pdf
For more Agriculture Open Access Journal articles Please Click Here: https://www.lupinepublishers.com/agriculture-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
Follow on Linkedin : https://www.linkedin.com/company/lupinepublishers Follow on Twitter   : https://twitter.com/lupine_online
Lupine Publishers | Plant Based Protein Sources and Extraction
Lupine Publishers | Agriculture Open Access Journal
Abstract
In the context of accelerating interest in novel protein sources, plants gain importance day by day. Vegans, vegetarians, and people with gluten-restricted dietary needs are looking for protein from novel plant products. However, protein extraction from plants involves many influential factors to consider. This mini-review presents the current situation in protein extraction and proposes solutions to some drawbacks of plant protein extraction. Hopefully, these solutions will be a guide for the industrial and technological aspects of plant protein extraction.
Introduction
Consuming an adequate amount of high-quality protein in the daily diet is one of the key features of a healthy life, and a shortage of dietary protein may lead to metabolic diseases [1]. A climbing trend in the demand for plant-based protein sources can easily be seen, driven by medical needs such as lactose intolerance or by lifestyle choices such as being vegan or vegetarian. The populations of vegetarians and vegans, and the number of people who face problems with animal-based proteins, are increasing. A vegetarian is a person who avoids eating meat for various reasons, such as health, religion, or not wanting to be cruel to animals; vegans do not consume or use meat or any other animal products such as fish, eggs, cheese or leather [2]. Vegans and vegetarians make up about 4, 8, 3 and 40% of the populations of the USA, Brazil, Austria, and India, respectively [3]. To secure sufficient protein resources, a number of studies have been under way for years. Additionally, animal-based proteins may cost much more than plant-based proteins; plant-based protein sources therefore offer an affordable alternative in countries with very high prices for milk and dairy products [4].
Plant proteins have a wide range of uses. Plant-derived pharmaceuticals (such as the experimental drug ZMapp for treating the disease caused by Ebola virus [5]) are one reason to seek new plant protein sources, and plant-protein-based delivery systems for encapsulating bioactive food ingredients are another emerging application [6]. Given the growing demand for plant protein sources for these various reasons, it is clear that new research is needed to find novel plant materials that can be extracted efficiently.
Soybeans are the main commercial source of plant-based proteins. Additionally, new alternative sources continue to be reported in the literature, and new consumer goods have already appeared in supermarkets. The search for novel resources has led researchers to consider various plants such as hemp [7], quinoa [8], potatoes [9], rice [10], maize [11], chickpeas [12], lentils [Joshi, 2012], peas [13], sesame, peanuts, walnuts, hazelnuts, wheat and so on; researchers have even considered aquatic flora [14] as candidate sources. However, even though such plants may have high protein content, digestibility is another issue to consider, since bioavailability may be low. Another issue with some of these sources is allergenic molecules: so far, 15 major antigens that bind IgE antibodies have been found in the soybean plant and products made from it [15]. Gluten allergy is closely related to celiac disease, and a diet including gluten worsens the symptoms of the disease; some researchers also indicate that gluten-containing diets may exacerbate the severity of schizophrenia and cerebellar ataxia [16]. Additionally, the growing trend of avoiding GMO products forces companies to find new alternative resources.
To overcome the shortage of protein sources, extraction methods for plants have been studied for many years. Classification and purification of the proteins in an extract or product is possible using analytical methods such as two-dimensional electrophoresis (2-DE), but this process presents challenges in many ways [17]. Secondary metabolites and proteases are present in excessive amounts, and these components can interfere with protein extraction, separation, and identification. For proteomic studies, a clear extract is necessary to prevent unwanted metabolites from migrating while 2-DE is performed, since such contaminants may affect the migration of the proteins themselves; they therefore need to be removed from the extract as much as possible [18].
Developing an extraction procedure for a protein involves many factors. Depending on the part of the plant, extraction methods and solvents vary. Since breaking the tissues down as finely as possible is the key to obtaining a larger amount of bulk protein, some challenging features (fiber-rich content, adhesive structure, parts that are hard to disrupt) need to be considered. Plant cell walls consist of complex polysaccharides, which adds further problems when crushing tissues. Using liquid nitrogen with a mortar and pestle is a useful way to reduce protein degradation during tissue disruption; degradation can also be lowered by cooling and by adding buffer solutions to the homogenization environment. The addition of quartz sand facilitates obtaining a finer powder [19].
New applications of extraction processes open up innovative ideas. One of the most important issues in this respect is choosing the most convenient and appropriate extraction protocol for the aim of the extraction. For example, the use of toxic chemicals in extraction or precipitation procedures may increase cleaning costs, and some chemicals may be forbidden by regulatory rules. In such cases, the assisting techniques described below can be taken advantage of.
Microwave-assisted extraction (MAE) systems have great potential according to recent research. Decreased solvent consumption, thermally safe application, and increased yields owing to a reduced number of extraction steps are the benefits of this system. Microwaves are non-ionizing electromagnetic waves between 300 MHz and 300 GHz, lying between the infrared and radio-frequency regions of the spectrum. Absorbed electromagnetic energy is converted into heat. The theory behind MAE rests on preventing the loss of energy to the environment, which is the drawback of conventional heating [20]. Because MAE provides targeted and selective heating (through ionic conduction and dipole rotation), it is more favorable than conventional heating methods.
Enzyme-assisted protein extraction (EAE) relies on disrupting the cell wall and increasing yield under mild conditions; it is therefore a more environmentally friendly method than using harsh chemicals. Specific enzymes have been developed for extracting proteins from tea leaves, Leguminosae gums, oilseed meals and so on, and research has shown a significant increase in yields compared with chemical-based methods [21].
Ultrasound treatment before the extraction process reduces the particle size of the target, which increases its surface area; this speeds up the extraction process and increases yield. Ultrasound stimulates the hydration process through the power of cavitation bubbles at hydrophobic surfaces.
Secondary metabolite removal is another way to obtain purer protein extracts. Secondary metabolites are low-molecular-weight compounds, and around 50,000 structures have been elucidated [23]. By definition, a secondary metabolite is a compound whose biosynthesis is restricted to selected plant groups. They can provide floral scent through volatile compounds, petal color through pigments, or protection against pathogens and herbivores through toxic chemicals [24]. Examples of secondary metabolites include phenols, flavonoids, tannins, lignins, and stilbenes [25]. Most of the time, including such compounds in a food product or protein isolate is unfavorable for reasons of taste, odor, appearance and health [26]. Additionally, 2-DE requires samples free of secondary metabolites. The removal procedure can be affected by species, tissue specificity, age, and developmental stage. These compounds can be removed before or after protein extraction. With TCA/acetone precipitation, organic solvents are used to clean the extract, giving a white or lightly colored pellet [19,27]; the air-dried acetone powder is then extracted in aqueous buffers. However, this removal procedure has drawbacks, such as a lower extraction yield. Another strategy is removing secondary metabolites during extraction using aqueous buffers containing EDTA, DTT or 2-ME, a protease inhibitor cocktail, and polyvinylpolypyrrolidone (PVPP); the PVP(P) compounds can then be cleaned up by centrifugation. Unfortunately, co-precipitating contaminants are hard to remove with organic solvents, so this strategy suits tissues that are easily extracted, such as young tissues [19].
After removing secondary metabolites, another limitation comes up with the bulk protein: solubilization. Solubilized protein is needed to proceed with the rest of the experiment, including 2-DE separation and protein quantification. Re-solubilization can be done with SDS; the anionic nature of SDS does not directly affect isoelectric focusing of the gel [28]. Long incubations with lysis buffers (including urea, thiourea, DTT, CHAPS, and carrier ampholytes) combined with shaking are another way to re-dissolve bulk proteins [19].
Conclusion
Plants as protein sources bring new opportunities to people who prefer or need to eliminate or reduce animal-based protein consumption. Here we have tried to outline some problems and key factors for raising yield and overcoming difficulties in the extraction process. Hopefully, future studies will bring additional novel species and sources to meet the requirement for wide-ranging plant proteins for food and many other industries.
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000130.pdf
Lupine Publishers | Pre-cochlear Implantation Aural/Oral Rehabilitation, Is it Mandatory?
Lupine Publishers | Journal of Otolaryngology
Abstract
Background: The use of cochlear implantation (CI) can fully restore hearing; consequently, speech production can improve over time and enter the normal range. When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant is a widely used treatment option for children with sensorineural hearing loss (SNHL).
Purpose: The aim of this study is to compare the functional benefit, in terms of communicative skills, of CI without pre-implantation aural/oral rehabilitation against CI preceded by 6 months of aural/oral rehabilitation, in order to evaluate the role of pre-implantation aural/oral rehabilitation in the communicative abilities of children with severe to profound and profound sensorineural hearing impairment.
Method: This study has a prospective design. It started after the final diagnosis and the decision that all children were candidates for CI; half of them were fitted with behind-the-ear hearing aids and the other half were immediately implanted, provided that a primary language assessment before rehabilitation was present in the children's medical files. A second language assessment was done after 12 months of language therapy to detect the progress of language development. The sixty patients were divided into two groups:
a) Group A: Thirty children, who used behind-the-ear hearing aids for one year before CI and attended regular language therapy.
b) Group B: Thirty children, who proceeded immediately to cochlear implantation and were enrolled in auditory training and language therapy for one year.
Results: Total language age in children using a cochlear implant without pre-implantation aural/oral rehabilitation was significantly higher than in children who used hearing aids for one year before CI. There was also a highly significant difference between frontal and back speech sounds in the immediately implanted children, with a positive correlation.
Conclusion: Cochlear implantation is a safe and reliable technique. Many profoundly hearing-impaired children who receive an immediate cochlear implant without pre-implantation aural/oral rehabilitation can develop functional levels of speech perception and production, and can develop competency in a language other than their primary language. Continuation of language therapy, together with proper mapping, is a must if these children are to be enrolled in mainstream education.
Keywords: Hearing Aids; Cochlear Implant; Language; Speech Intelligibility; Pre-implantation Rehabilitation
Abbreviations: SNHL: Sensorineural Hearing Loss; HA: Hearing Aids; CI: Cochlear Implantation
Introduction
Language in children begins to develop from birth and is nearly complete by the age of 6 years. Language skills, speech quality, and expressive and receptive vocabulary are enhanced by exposure to aural language from as early an age as possible [1]. Children spend many hours in acoustic environments where target speech signals are embedded in competing sounds from multiple sources. In these environments, perception of target speech is assisted by a listener's ability to segregate the multitude of sounds into separate auditory streams, one cue to which is the angle of incidence of the different sounds [2]. Children with profound sensorineural hearing loss (SNHL) experience delays in learning to understand the speech of others and to produce intelligible speech. There is solid evidence that moderate (or more severe) hearing impairment exerts a negative impact on speech, language, and cognitive development, and early identification and management may greatly benefit these children through improved language, communication, mental health, and employment prospects [3]. The use of hearing aids (HA) or cochlear implantation (CI) can partially or fully restore hearing; consequently, speech production can improve over time and enter the normal range. After hearing is restored, hearing-impaired individuals use auditory feedback to adjust voice features such as intensity, intonation and vowel duration [4]. When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant is a widely used treatment option for children with SNHL [5]. Cochlear implants, sometimes called bionic ears, are effective in transmitting the salient features of speech, especially in quiet [6].
Because the goal of restored hearing in a deaf child is to enable useful hearing, a key measure of outcome should reflect how a deaf child’s experience with a CI develops into the effective use of spoken language. Parental surveys indicate that the outcome of their greatest concern after surgical intervention in children with SNHL is the level of spoken language achieved [7]. Cochlear implants have become a popular option for children with profound hearing loss. Evidence supporting the benefits of early implantation is found in experimental [1], developmental [2], and clinical cochlear implant studies [3]. The consensus is that children have the best opportunity to learn language during their first 5 years of life. According to [2], this critical period for language learning is particularly important in deaf and hearing-impaired children. Providing cochlear implants to deaf children at a young age may enable them to take advantage of this critical period for learning language and is likely to increase their chances for developing speech and language skills like those of normal-hearing children. Early implantation would also result in a decrease in the duration of auditory deprivation, a decrease considered to positively influence performance with a cochlear implant [4].
Objectives
The aim of this study is to compare the functional benefit, in terms of communicative skills, of immediate CI without pre-implantation aural/oral rehabilitation against hearing aid use with pre-implantation aural/oral rehabilitation, in order to assess the role of each amplification device and the effect of pre-implantation aural/oral rehabilitation on the communicative abilities of children with severe to profound and profound sensorineural hearing impairment.
Subjects & Methods
This research was conducted between 2017 and 2018. The study protocol was approved by the Otolaryngology Department Council of Beni-Suef University and the Otolaryngology Department Council of King Abd Elaziz specialized hospital, Jouf, Saudi Arabia. Consent to participate was obtained from the subjects' parents before commencement of the study. The study employed a comprehensive design to examine outcomes in multiple domains of communication in children who used, for a period of one year, either bilateral behind-the-ear hearing aids with pre-implantation aural/oral rehabilitation or an immediate unilateral cochlear implant without pre-implantation rehabilitation. Subjects were selected from children seeking language rehabilitation in the Phoniatrics Unit, Beni-Suef University Hospital, and in the Phoniatrics Clinic, King Abd Elaziz specialized hospital, Jouf, Saudi Arabia. Shortly after confirmation of bilateral permanent hearing loss, thirty children were fitted with bilateral behind-the-ear hearing aids using the desired sensation level (DSL) prescription method and regularly attended aural/oral rehabilitation sessions. Thirty children underwent a comprehensive team evaluation for cochlear implant candidacy and received immediate unilateral cochlear implants without pre-implant aural/oral rehabilitation. All children received audiologic management and preschool rehabilitation and were enrolled in rehabilitation programs focused on the development of receptive and expressive language. Children attended the Phoniatrics clinic regularly and were asked to follow the auditory rehabilitation and language therapy program twice per week. Children with cochlear implants were followed every month for mapping of their speech processor and speech recognition testing.
The study received ethical approval from the Hospital of Beni-Suef University and from King Abd Elaziz specialized hospital, and written informed consent was obtained from all parents. Collaboration between the ENT, Audiology and Phoniatrics clinics took the form of ENT examination, audiological assessment, and language assessment and rehabilitation for all children. This study has a prospective design. It started after half of the children were fitted with bilateral behind-the-ear hearing aids and given aural/oral rehabilitation sessions, while the other half were immediately implanted, mapped, and regularly attended aural rehabilitation sessions, provided that a primary language assessment before rehabilitation was present in the medical files of all children. A second language assessment was done after 12 months of language therapy to track the progress of language and the efficacy of pre-implantation aural/oral rehabilitation. These sixty patients were divided into two groups:
a) Group A: Thirty children, who used behind-the-ear hearing aids for one year and attended regular language therapy even though they were candidates for cochlear implants.
b) Group B: Thirty children, who proceeded immediately to cochlear implantation, had regular language therapy, and were enrolled in auditory training.
Half of the patients were fitted with bilateral powerful digital-signal-processing behind-the-ear hearing aids (BEHAs) and used them for at least 12 months before CI; hearing aid use was determined from parental and therapist reports. For the other half, after surgical implantation of the device and an adequate healing period, the implants were activated (usually 4 weeks after surgery). The children were fitted with one of two brands of speech processor using a behind-the-ear controller: the OPUS 2 with a standard Sonata electrode, or the Cochlear Freedom processor with a Nucleus 24k straight electrode.
In this study, using the Modified Preschool Language Scale and a subjective speech intelligibility test gave us a summary of the children's improvement. This matches other studies emphasizing that both comprehension and expression of spoken language are important markers of parent-perceived success of a CI (Figure 1).
Language Improvement Quotient: The language improvement quotient [8] was used to compare the rates of language progress, in order to overcome the bias of age matching between the individuals in the study.
Language Improvement = (2nd language age - 1st language age) / duration of language rehabilitation.
a) A1 refers to the language development of group (A) after using bilateral behind-the-ear hearing aids for 12 months, calculated as: A1 = (2nd language age - 1st language age) / 12 (the duration of rehabilitation).
b) B1 refers to the language development of group (B) after using a unilateral CI for 12 months, calculated as: B1 = (2nd language age - 1st language age) / 12 (the duration of rehabilitation).
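The quotient defined above is simple enough to express as a one-line helper. A minimal sketch in Python, assuming language ages are expressed in months; the function and variable names are illustrative, not from the paper:

```python
# Language improvement quotient [8]:
# LIQ = (second language age - first language age) / months of rehabilitation
# i.e., the rate of language-age gain per month of therapy.

def language_improvement_quotient(first_age_months, second_age_months,
                                  rehab_months=12):
    """Rate of language-age gain per month of rehabilitation."""
    return (second_age_months - first_age_months) / rehab_months

# e.g., a child whose language age rose from 6 to 24 months over one year
print(language_improvement_quotient(6, 24))  # 1.5
```

Because the quotient normalizes by the rehabilitation period, children evaluated at different chronological ages can be compared on the same scale, which is exactly the bias the authors set out to avoid.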
Speech analysis was performed using the Ain Shams assessment protocol, which includes analysis of supra-segmental phonology (rate, stress and tonality), segmental phonology (consonants and vowels), nasal resonance and general intelligibility of speech. Assessment of auditory perception skills evaluated a hierarchy of listening skills ranging from detection to discrimination, identification, recognition and comprehension. Assessment of speech-reading abilities was done and expressed as percent change over time.
Statistical Studies
Data were analyzed using SPSS, Statistical Package for the Social Sciences, version 17 (SPSS Inc., Chicago, IL). Numerical data were expressed as mean, standard deviation, and range. For quantitative data, comparisons used the Mann-Whitney test (non-parametric t-test), with p < 0.05 considered significant. The Spearman-rho method was used to test correlations between numerical variables (r < 0.3 = no correlation, r = 0.3-0.5 = fair correlation, r = 0.5-1.0 = good correlation).
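The study ran these analyses in SPSS. Purely as a hedged illustration of the two procedures named above, the sketch below implements the Mann-Whitney U statistic for two independent samples and the paper's Spearman-rho interpretation bands in plain Python; the helper names and example data are our assumptions, not the paper's.

```python
# Mann-Whitney U (rank-sum form) and the correlation-strength bands
# used in the study, implemented without any statistics package.

def average_ranks(values):
    """Rank all values (1-based), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Smaller of the two U statistics for independent samples x and y."""
    ranks = average_ranks(list(x) + list(y))
    r1 = sum(ranks[:len(x)])                    # rank sum of sample x
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

def correlation_strength(r):
    """Interpretation bands from the study (|r| on 0..1)."""
    r = abs(r)
    if r < 0.3:
        return "no correlation"
    return "fair correlation" if r <= 0.5 else "good correlation"

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0 (complete separation)
print(correlation_strength(0.62))            # good correlation
```

A U of zero means every observation in one group outranks every observation in the other, the most extreme separation the test can report; the p-value would then be read from the U distribution, which SPSS handles internally.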
Results
Group (A) were hearing aid users for one year; group (B) were CI users for one year. Demographic data of the two groups:
a) Age.
b) Gender.
c) Incidence of hearing loss.
d) Psychometric evaluation.
e) Pure tone audiometry.
f) First language age.
g) Radiology.
a) Age Distribution: Both groups were matched according to age. In group (A) the age of the children ranged between 3 and 7 years; in group (B) the age also ranged between 3 and 7 years, provided that all children were implanted before the age of 6 years.
b) Gender: No significant difference was noted in the gender distribution of the two groups.
c) Incidence of Hearing Loss: In group (A) there were 24 children (80%) with congenital hearing impairment and 6 children (20%) with acquired hearing loss, while in group (B) there were 18 children (60%) with congenital hearing impairment and 12 children (40%) with acquired hearing loss.
d) Psychometric Evaluation: All children in group (A) had normal psychometric evaluations, with a mean value of 87.5 ± 4.6; all children in group (B) likewise had normal psychometric evaluations, with a mean value of 86.4 ± 5.1.
e) Pure Tone Results: Pure tone results in group (A) maintained a mean value of 27.9 dB HL, while group (B) decreased in mean values from 65.7 ± 8.2 dB HL. There was a highly significant difference (p = 0.001) between group (A) and group (B), in favor of group (B).
f) First Language Age: Before the start of therapy, both groups had no passive vocabulary and were nonverbal; they used either babbling or vocal play.
g) Radiology: All children in both groups (A) and (B) had normal CT and MRI of the petrous bone.
Tables 1-3 demonstrate the progress of the language abilities, the auditory abilities, and the speech-reading abilities of both groups, respectively, from the time just prior to rehabilitation (oral/aural in group A, aural in group B) to the evaluation done one year after rehabilitation (Table 4).
Discussion
The primary purpose of this study was to obtain comprehensive data on the development of language and speech skills in a group of permanently hearing-impaired children. The group shared a common degree of bilateral hearing impairment (severe to profound, or profound); all sought amplification, and all sought language rehabilitation after receiving amplification, using primarily auditory-based cues. The study also aimed to investigate the difference in language and speech development under two amplification conditions: bilateral behind-the-ear hearing aids and unilateral cochlear implants. The choice of language age deficit to compare language skill development among the studied groups is justified by the fact that three variables usually co-vary when language results are analyzed in children: the age at which the amplification device (hearing aid or cochlear implant) is first used, the language age before the start of rehabilitation, and the language age after rehabilitation. Differences in age at evaluation place the younger children at a maturational and developmental disadvantage compared with their older peers; thus, analyzing results in terms of language age scores might put the younger group at a disadvantage. At the same time, analyzing results in terms of language age deficits, although more reasonable, in theory puts the older group at a disadvantage, because their higher ages give higher values for the deficit from the scored language age. That is why the hypothesis of using the language improvement quotient [8], after determining the exact language age, may be more realistic and less biased by chronological age differences at the time of evaluation. In this study, using the Modified Preschool Language Scale and a subjective speech intelligibility test gave us a summary of these children's improvement.
A perfect model for comparing the results of both devices may be practically impossible, given the current indications for cochlear implant use. In this study, applied to two groups of comparable age, the outcomes of the two devices were compared over a one-year period of (re)habilitation. If the CI group were doing better than the HA group, it would indicate that the selection criteria were too conservative and that some HA users might be better off with a CI. This raises the suspicion that HA users plateau after a period of little progress, or that at least their progress continues at a slower pace. Cochlear implants may improve the acoustic environment of children more than hearing aids do. During the 1990s, using early models of speech processors, studies showed that CI users gained better results than HA users in language and perception skills [9-14]. The minimum age for implantation has progressively fallen [15], and the advantages of cochlear implants over hearing aids extend to the adult population as well [15]. In a study by [16], children with CIs and children with HAs, aged 4 to 5 years, differed significantly in language abilities, and there were differences in articulation skills in favor of the CI users. Advances in sound processors and related software have enhanced the fidelity with which complex sounds are processed into physiologically meaningful codes [17]. This study points to the importance of conducting comprehensive assessments when evaluating whether a child with severe to profound sensorineural hearing loss would likely derive greater benefit from a cochlear implant than from a hearing aid. To date, only a few systematic studies have involved large numbers of children implanted at various ages and have investigated both the effect of age at implantation and the amount of experience with an implant.
Most such studies were concerned with speech perception skills after cochlear implantation, with clear evidence that early implantation, at 2-4 years of age, accelerates the acquisition of those perception skills [18]. Concerning auditory abilities, the cochlear implantation group developed better abilities than the hearing aid group. This may be explained by the hearing aid group being more rigid in their habit of relying on visual cues, making these children less efficient at acquiring the training provided during therapy sessions. In explanation of this, [19] described recruitment of the auditory cortex by the visual and somatosensory systems in congenitally deaf humans. They reported that the extent of cross-modal recruitment of the auditory cortex increases as the duration of deafness increases, deterring the restoration of auditory processing in the auditory cortex of long-term deafened individuals after cochlear implantation. They also suggested that the age beyond which the effects of cross-modal plasticity in the auditory cortex are more difficult to reverse is about 6.5 years. It has also been documented that the cochlear place code changes during development [20]. This change may be necessary for the formation of normal and effective connections between auditory centers and for the proper development of elements within the central auditory pathways. Early cochlear implantation may contribute to the maintenance of these important developmental milestones.
Conclusion
CI children showed a better rate of language skill acquisition over one year of implant use than the HA group showed over a similar period. One year after implantation, and with aural rehabilitation, the implanted group demonstrated significantly better auditory abilities, better speech production skills, and better speech intelligibility than the aided group receiving oral/aural rehabilitation. The implanted group also ended with significantly less (or no) reliance on speech reading compared with the aided group one year after implantation. These results indicate a favorable effect of implantation on all of the preceding parameters. Language skills showed a significant difference between the two groups. Consequently, oral/aural rehabilitation with hearing aids - even for a few months - is not mandatory.
For more Lupine Publishers Open Access Journals please visit our website: http://lupinepublishers.us/
For more Journal of Otolaryngology-ENT Research articles please click here: https://lupinepublishers.com/otolaryngology-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
Follow on Linkedin : https://www.linkedin.com/company/lupinepublishers Follow on Twitter   :  https://twitter.com/lupine_online
Lupine Publishers | Effect of Grafting Technique on Productivity and Quality of Cantaloupe under Saline Irrigation Water
Lupine Publishers | Agriculture Open Access Journal
Abstract
The high salinity of irrigation water is the biggest challenge facing the horizontal expansion of vegetable cultivation, especially in newly reclaimed land, and has a deleterious effect on cantaloupe production. This experiment was therefore carried out under greenhouse conditions during the 2015 and 2016 autumn seasons in Moshtohor, Kalyobiya Governorate, Egypt, to investigate the possibility of using grafting to ameliorate the negative effects of high-salinity irrigation water on cantaloupe productivity and quality. Two commercial cultivars, Ideal and Veleta, were used as scions, while Cobalt and Strong-Tosa were used as rootstocks. A modified tongue approach grafting method was used, and the seedlings were then exposed to four salinity levels [0.8 (non-saline control), 3.9, 7.1 and 10 dS/m]. The results showed that all investigated factors (salinity level, cultivar and rootstock) significantly affected cantaloupe productivity and quality. The medium salinity level (3.9 dS/m) gave the highest early yield, fruit number and total yield of all salinity levels, while total yield decreased by 39.7% as salinity increased to 10 dS/m. The graft combination Ideal/Strong-Tosa increased total yield by 53.1, 85.5, 43.8 and 1.4% at the control, 3.9, 7 and 10 dS/m salinity levels, respectively, compared with non-grafted Ideal plants at the non-saline control; the corresponding figures for Veleta/Cobalt were increases of 38.4, 41.5 and 19.3% and a decline of 13.1%, compared with non-grafted Veleta plants at the non-saline control. In general, when cantaloupe plants must be irrigated with high-salinity water, it is recommended to cultivate grafted seedlings of Veleta/Cobalt and Ideal/Strong-Tosa, as these plants gave the highest benefit and income compared with plants on their own roots (non-grafted) under saline conditions.
Keywords: Cucumis melo; Cantaloupe; Grafting; Rootstock; Scion; Irrigation water; Salinity
Introduction
Cantaloupe (Cucumis melo L.) is a vegetable crop of high economic value in many countries, including Egypt, and is grown in practically every country in the world, in open fields or greenhouses. The cultivated area of cantaloupe in Egypt is 66,434 feddan (1 feddan = 4,200 m²), with a total production of 846,936 tons and an average of 12.749 ton/feddan (Ministry of Agriculture, Egypt, 2015). The most important problems facing the horizontal expansion of cantaloupe in greenhouses or open fields are the high salinity of the irrigation water or soil, especially in newly reclaimed lands. In addition, repeated cultivation in greenhouses increases soil salinity and thus reduces the vertical production of cantaloupe. Although cantaloupe is moderately salt tolerant, salinity has been shown to cause several kinds of damage, such as growth inhibition [1-3] and yield and quality losses [4-6]. This has led to the use of new approaches to mitigate these negative impacts. Grafting is one of the most modern techniques used to improve the productivity of vegetable plants, especially under adverse environmental conditions. Grafting vegetables onto resistant rootstocks offers numerous advantages for growth and yield, i.e., tolerance to salinity stress [7-10] and increased yield and fruit quality in many crops such as melon [11] and watermelon [12,13]. Accordingly, the present study was conducted to investigate the possibility of using grafting as a new, promising technique to ameliorate the negative effects of high-salinity irrigation water on cantaloupe productivity and quality.
Materials and Methods
This investigation was carried out on a private farm in Moshtohor village, Kalyobiya Governorate, Egypt, during the 2015 and 2016 autumn seasons to study the response of yield and fruit quality of grafted and non-grafted cantaloupe plants to different irrigation water salinity levels. The soil was clay with a pH of 8.0 and an EC of 1.3 dS/m. Two commercial cultivars, Veleta RZ and Ideal (MG739), were grafted onto the Cobalt RZ and Strong-Tosa rootstocks using a modified tongue approach grafting method. The grafted and non-grafted seedlings were transplanted under net house conditions on the 21st of July in both seasons. The plants were transplanted on one side of ridges 1.5 m wide, at 50 cm apart. Four irrigation water salinity levels [0.8, 3.9, 7.1 and 10 dS/m] were applied by adding NaCl to the underground water used. A split-split-plot design was adopted, with three replicates: salinity levels were placed in main plots, cultivars (scions) in subplots, and rootstocks in sub-subplots.
The yield of the first two pickings was considered the early yield, and the number of fruits per plant and total yield per plant (g) were calculated at the end of the growing season. Fruit length and diameter were measured to calculate the fruit shape index (fruit length/fruit diameter). Finally, average fruit weight (g), flesh thickness (cm) and seed cavity diameter (cm), as well as total soluble solids percentage (AOAC, 1990), were measured. Data were subjected to statistical analysis using Duncan's multiple range test as reported by Gomez [14]. Statistical analysis was performed with SAS software. Based on the average of the two seasons, the total yield results were used to calculate the costs, benefits and savings of using grafted versus non-grafted cantaloupe plants grown under saline irrigation water.
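The split-split-plot layout described above (salinity randomized in main plots, cultivar in subplots, rootstock in sub-subplots, three replicates) can be sketched as a randomization plan. This is an illustrative, stdlib-only sketch, not the authors' actual field plan; it assumes the rootstock factor has three levels (Cobalt, Strong-Tosa, and a non-grafted control, consistent with the comparisons reported later):

```python
import random

SALINITY = [0.8, 3.9, 7.1, 10.0]                        # dS/m, main plots
CULTIVARS = ["Veleta", "Ideal"]                          # subplots (scions)
ROOTSTOCKS = ["Cobalt", "Strong-Tosa", "non-grafted"]    # sub-subplots

def split_split_plot(reps: int = 3, seed: int = 42):
    """Return a list of (replicate, salinity, cultivar, rootstock) plots,
    randomized separately at each level of the split-split-plot design."""
    rng = random.Random(seed)
    layout = []
    for rep in range(1, reps + 1):
        mains = SALINITY[:]
        rng.shuffle(mains)               # randomize main plots within each replicate
        for s in mains:
            subs = CULTIVARS[:]
            rng.shuffle(subs)            # randomize subplots within each main plot
            for cv in subs:
                subsubs = ROOTSTOCKS[:]
                rng.shuffle(subsubs)     # randomize sub-subplots within each subplot
                for rs in subsubs:
                    layout.append((rep, s, cv, rs))
    return layout

plots = split_split_plot()
print(len(plots))  # 3 reps x 4 salinities x 2 cultivars x 3 rootstocks = 72
```

Each of the 24 treatment combinations appears once per replicate, which is the structure the ANOVA (Duncan's test over split-split-plot error terms) relies on.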
Table 1: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on fruit shape index of cantaloupe plants during 2015 and 2016 seasons.
Results and Discussion
Effect of grafting technique (cultivars “scions” and rootstocks) under salinity levels of irrigation water on quality of cantaloupe fruits
Data presented in Tables 1-5 indicate the effect of irrigation water salinity levels, cultivars, rootstocks and their interactions on fruit shape index, average fruit weight, flesh thickness, seed cavity diameter and T.S.S., respectively. Fruit quality, expressed as average fruit weight, flesh thickness and T.S.S., was affected by the salinity of the irrigation water, but fruit shape index was not affected in either season. Average fruit weight and flesh thickness decreased with increasing salinity [15,16], while the opposite trend was observed for T.S.S., which increased with increasing salinity [16,17]. Concerning the effect of cultivar on these fruit quality traits, all traits except T.S.S. were significantly affected by the cultivar used (Veleta or Ideal). In general, fruits of cv. Ideal were bigger and heavier than those of cv. Veleta, while Veleta fruits were slightly longer than those of Ideal, with the opposite trend in fruit diameter throughout the growing season; this was reflected in the fruit shape index, where cv. Veleta recorded the highest value compared with cv. Ideal. Moreover, the results indicate that average fruit weight, flesh thickness and seed cavity diameter were positively affected by the Cobalt rootstock, while no significant effect could be detected on fruit shape index or T.S.S. The obtained results agree with those of Colla [9] working on watermelon and of [18] and Colla [8] working on cucumber, who noticed that grafted plants produced fruits with a higher average weight than those of non-grafted plants.
Table 2:  Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on average fruit weight (g) of cantaloupe plants during 2015 and 2016 seasons.
Table 3: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on flesh thickness (cm) of cantaloupe fruit during 2015 and 2016 seasons.
Table 4: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on diameter of seed cavity (cm) of cantaloupe fruit during 2015 and 2016 seasons.
Table 5: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on fruit T.S.S. of cantaloupe plants during 2015 and 2016 seasons.
Most fruit quality parameters, i.e., fruit shape index, average fruit weight and T.S.S., were not significantly affected by the interaction between cultivars and salinity levels, while the opposite was found for flesh thickness and seed cavity diameter in the 2015 and 2016 seasons. However, average fruit weight, flesh thickness, seed cavity diameter and T.S.S. were significantly affected by the interaction between rootstocks and salinity levels: the highest values were recorded for the Cobalt rootstock when irrigated at salinity levels of 3.9 and 7 dS/m, while increasing salinity up to 10 dS/m improved T.S.S. Fruit shape index was not affected by this interaction. Regarding the interaction between cultivars (scions) and rootstocks, average fruit weight, flesh thickness and seed cavity diameter were positively and significantly affected by the graft combinations Veleta/Cobalt, Ideal/Cobalt and Ideal/Strong-Tosa compared with non-grafted plants (control), but T.S.S. and fruit shape index were not affected by any of the grafting treatments in either season. Regarding the interaction among salinity levels, cultivars and rootstocks, there were significant interaction effects on all fruit quality parameters except fruit shape index. Across all studied factors, the best interaction effects were found for cvs. Veleta or Ideal grafted on the Cobalt rootstock at all salinity levels except the highest (10 dS/m): grafted plants of Ideal/Strong-Tosa, Veleta/Cobalt and Ideal/Cobalt produced the biggest and heaviest fruits, with the greatest flesh thickness and a suitable T.S.S. content, when irrigated at the 3.9 dS/m salinity level in both seasons.
Table 6: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on fruit number/plant of cantaloupe plants during 2015 and 2016 seasons.
Table 7:  Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on early yield (g /plant) of cantaloupe plants during 2015 and 2016 seasons.
Effect of grafting technique (cultivars “scions” and rootstocks) under salinity levels of irrigation water on fruit yield and its components of cantaloupe plants.
Data presented in Tables 6-8 show the effect of irrigation water salinity levels, cultivars, rootstocks and their interactions on fruit number, early yield and total yield (g/plant), respectively. Yield, in terms of fruit number, early yield and total yield per plant, was significantly affected by salinity level in both seasons. The medium salinity level (3.9 dS/m) gave significantly higher early yield, fruit number and total yield than all other salinity levels, while total yield decreased by 39.7% (averaged over the two seasons) as salinity increased to 10 dS/m. These results are in line with those reported by [3,5,17,19-21], who showed that increasing salinity levels badly affected total melon yield. Early yield, fruit number and total yield per plant were also affected by cultivar (Veleta vs. Ideal), with cv. Ideal giving higher values than cv. Veleta. The effect of rootstock was very clear: the Cobalt rootstock produced significantly higher yield components than the other rootstock and the non-grafted plants, increasing total yield by 37.4% compared with non-grafted plants, as an average of the two seasons. In the same context, [8] on melon and [22-26] on watermelon noticed that grafted plants gave the highest fruit yields compared with non-grafted plants.
Table 8: Effect of salinity levels (dS/m), cultivars and rootstocks as well as their interaction on total fruit yield (g / plant) of cantaloupe plants during 2015 and 2016 seasons.
Yield components, except fruit number, were significantly affected by the interaction between cultivars and salinity levels: the highest values were recorded when cv. Ideal was irrigated at the 3.9 dS/m salinity level, while cv. Veleta recorded the lowest values at the 10 dS/m level. Early yield and total yield per plant were significantly affected by the interaction between rootstocks and salinity levels, while fruit number was not. For early yield, the highest values were recorded by the Cobalt rootstock when irrigated at the 7 dS/m salinity level, followed by 3.9 dS/m; the same rootstock gave the highest fruit number and total yield when irrigated at 3.9 dS/m, followed by the control level. Yield components were positively affected by the graft combinations Ideal/Strong-Tosa and Veleta/Cobalt, which increased total yield by 60.4 and 52.6%, respectively, compared with non-grafted plants of the same cultivar. Non-grafted plants, especially of cv. Veleta (control), recorded the lowest yield components.
The best interaction effects among the three studied factors were found for the combinations Ideal/Strong-Tosa and Veleta/Cobalt at the lowest saline level (3.9 dS/m) and the non-saline control, while non-grafted plants of both cultivars (control) irrigated at the highest salinity level (10 dS/m) recorded the lowest values. In general, all interaction treatments among the three studied factors were significantly higher than the control treatment. The interaction of cv. Ideal grafted on the Strong-Tosa rootstock, and of Veleta on Cobalt, with the non-saline control and the lowest salinity level (3.9 dS/m) resulted in the best cantaloupe growth (El-S Zaki, 2018) and yield. The graft combination Ideal/Strong-Tosa increased total yield by 53.1, 85.5, 43.8 and 1.4% at the control, 3.9, 7 and 10 dS/m salinity levels, respectively, compared with the general control (non-grafted Ideal plants with control water); the corresponding figures for Veleta/Cobalt were increases of 38.4, 41.5 and 19.3% and a decline of 13.1%, compared with the general control (non-grafted Veleta plants with control water). In general, when cantaloupe plants must be irrigated with high-salinity water, it is recommended to cultivate grafted seedlings of Veleta/Cobalt and Ideal/Strong-Tosa, as these plants gave the highest benefit and income compared with plants on their own roots (non-grafted) at each salinity level. Economically, under similar circumstances, it can be recommended to use water of moderate salinity (3.9 and 7 dS/m) combined with grafted seedlings (Veleta/Cobalt and Ideal/Strong-Tosa) in order to obtain an optimum yield while using somewhat saline water.
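The percentage changes quoted above follow the usual relative-change formula, so the 39.7% yield decline at 10 dS/m means the yield at the highest salinity was about 60.3% of the yield at the best level. A small sketch of this arithmetic (the absolute per-plant yield used below is illustrative, not taken from the tables):

```python
def pct_change(reference: float, value: float) -> float:
    """Relative change of `value` versus `reference`, in percent."""
    return (value - reference) / reference * 100.0

# Illustrative: if the 3.9 dS/m level gave about 4.12 kg/plant, a 39.7%
# decline at 10 dS/m corresponds to roughly:
best = 4.12
worst = best * (1 - 0.397)
print(round(worst, 2))                    # ~2.48 kg/plant
print(round(pct_change(best, worst), 1))  # -39.7
```

The same formula, with the non-grafted control as the reference, produces the 53.1/85.5/43.8/1.4% figures reported for the graft combinations.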
Calculation of Costs and Benefits of Applied Treatments
This analysis is illustrative and is intended to differentiate the costs of using grafted and non-grafted plants under saline irrigation water, assuming that the remaining costs, such as rent, labor and fertilizers, are the same across all factors of the study. Costs and benefits of grafted and non-grafted plants grown under different irrigation water salinity levels were calculated as the average of both seasons. The price of non-grafted seedlings is close to that of grafted seedlings, because non-grafted plants require the additional cost of controlling soil diseases (1.25 L.E.) according to Hasan (2015). The results presented in Table 9 show that the cost of irrigation water was invariable across all salinity levels, averaging 2.22 L.E./plant. The highest benefits (10.30 and 9.56 L.E./plant) and incomes (8.08 and 7.34 L.E./plant) were obtained with irrigation at 3.9 dS/m and at the non-saline level, respectively, while the lowest benefit (5.82 L.E./plant) and income (3.60 L.E./plant) were obtained from plants irrigated at the highest salinity level (10 dS/m). This is due to the higher yield of plants irrigated at 3.9 dS/m (4.121 kg/plant). Cv. Ideal gave the highest benefit (10.10 L.E./plant) and income (7.83 L.E./plant) compared with Veleta, which recorded the lowest benefit (7.06 L.E./plant) and income (4.89 L.E./plant), owing to the higher yield of cv. Ideal (4.042 kg/plant) compared with Veleta (2.824 kg/plant). Likewise, the highest benefit (10.41 L.E./plant) and income (8.11 L.E./plant) were obtained when both cantaloupe cultivars were grafted on the Cobalt rootstock, although grafted plants had higher costs (2.30 L.E./plant) than non-grafted plants (2.05 L.E./plant); this is due to the higher yield of plants grafted on the Cobalt rootstock (4.164 kg/plant) compared with non-grafted plants (3.058 kg/plant).
Table 9: Effect of cultivars and rootstocks under salinity levels on costs and benefits of cantaloupe plants as average between 2015 and 2016 seasons.
When the plants were irrigated at different salinity levels, Ideal plants recorded the highest benefit (12.25 L.E./plant) and income (9.98 L.E./plant) at the 3.9 dS/m level, while the lowest benefit (4.69 L.E./plant) and income (2.52 L.E./plant) were obtained by cv. Veleta at the highest salinity level (10 dS/m). Across salinity levels, the highest benefits (11.20, 11.89 and 11.09 L.E./plant) and incomes (8.90, 9.59 and 8.79 L.E./plant) were obtained with the Cobalt rootstock irrigated at the non-saline level, 3.9 dS/m and 7 dS/m, respectively, compared with non-grafted plants, which recorded the lowest benefits (8.97, 9.43 and 7.41 L.E./plant) and incomes (6.92, 7.38 and 5.36 L.E./plant) at the same salinity levels. This is due to the higher yield of plants grafted on the Cobalt rootstock (4.481, 4.755 and 4.437 kg/plant) and the lower yield of non-grafted plants (3.590, 3.773 and 2.965 kg/plant). The grafting combinations Ideal/Strong-Tosa, followed by Veleta/Cobalt and then Ideal/Cobalt, obtained the highest benefits (12.73, 11.17 and 9.65 L.E./plant) and incomes (10.38, 8.92 and 7.30 L.E./plant), while the lowest benefits (2.66, 7.35 and 7.94 L.E./plant) and incomes (0.44, 5.35 and 5.84 L.E./plant) were obtained by Veleta/Strong-Tosa, followed by Veleta and Ideal plants on their own roots, respectively. Across all studied factors, the grafting combination Ideal/Strong-Tosa gave the highest benefits (13.35, 16.18, 12.54 and 8.84 L.E./plant) and incomes (11.00, 13.83, 10.19 and 6.49 L.E./plant) at the non-saline level, 3.9 dS/m, 7 dS/m and 10 dS/m, respectively, compared with the same cultivar on its own roots; the graft combination Veleta/Cobalt likewise showed the highest benefits (12.67, 13.03, 10.99 and 8.00 L.E./plant) and incomes (10.42, 10.78, 8.74 and 5.75 L.E./plant) at the same salinity levels, compared with the same cultivar on its own roots.
This is due to the higher yield of these grafted plants compared with non-grafted plants of the same cultivar.
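The per-plant figures above are internally consistent with a simple model: income = benefit − cost, with benefit proportional to yield at a uniform farm-gate price of about 2.5 L.E./kg (e.g. 4.121 kg × 2.5 ≈ 10.30 L.E.). That price is inferred from the reported numbers, not stated by the authors. A minimal sketch under that assumption:

```python
def economics(yield_kg: float, cost: float, price_per_kg: float = 2.5):
    """Benefit = yield x price; income = benefit - cost (L.E./plant).
    The 2.5 L.E./kg price is inferred from the reported figures, not
    stated explicitly in the paper."""
    benefit = yield_kg * price_per_kg
    return round(benefit, 2), round(benefit - cost, 2)

# Reproduces the Cobalt-rootstock averages: 4.164 kg/plant, 2.30 L.E. cost
print(economics(4.164, 2.30))  # (10.41, 8.11)
```

Checking a few rows this way (yield, cost, benefit, income) is a quick sanity test on the table arithmetic.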
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000128.pdf
Lupine Publishers | Retooling Agricultural Policies and Programmes for Sustainable Development in Nigeria
Lupine Publishers | Agriculture Open Access Journal
Abstract
Nigeria is one of the developing countries of the world blessed with numerous natural resources and fertile arable land for sustainable agriculture to support its increasing population. As a result of the global economic downturn, there is a rising need for the Nigerian government to raise the standard of living of her citizens through intensive commitment to agricultural programmes and policies. Over the years, as the Nigerian government has initiated most of its agricultural policies and programmes, there has tended to be a wide gap between the expected success of these programmes and policies and their actual impact on citizens. These challenges have continued over time without solution. It is on this note that this researcher deems it fit to investigate the challenges facing agricultural policies and programmes in the pursuit of sustainable development in Nigeria. The paper explains the core concepts of agriculture, agricultural policies and agricultural programmes; the roles of agricultural policies and programmes; and the challenges facing them. The paper also maps out ways of retooling agricultural policies and programmes so that they achieve their objectives and goals. It was recommended, among other things, that the government of Nigeria should ensure adequate financial backing of agricultural programmes and policies, so that these programmes are not hampered in the actualization of their targeted objectives.
Keywords: Agricultural policies; Agricultural programmes; Sustainable development; Challenges; Retooling
Introduction
Agriculture is the cultivation of the land and the rearing of animals for the benefit of man. The farmer's cultivation of the land and rearing of animals contribute far-reachingly to the development of the society in which he lives. Agriculture creates outputs for the satisfaction of man: the products, goods and services of the agricultural sector that help to improve, sustain and better life in any society, and that contribute to both GDP (Gross Domestic Product) and GNP (Gross National Product) in aggregate. Agriculture is a channel for alleviating mass unemployment, ensuring income redistribution and providing food sustenance for the anticipated population of Nigerian society. Because of the importance of agriculture to sustainable development, in terms of providing food and income to a large number of farmers, what happens in the agricultural sector is of prime concern to Nigerian society. Agricultural policies and programmes are pathways for enhancing agricultural productivity. Agricultural policies are pre-determined goals, objectives and pathways set by an individual or government for the purpose of achieving a specified outcome, for the benefit of individuals, society and the nation's economy at large. In Nigeria, agricultural policies include the Land Use Act of 1978. Agricultural policies are pathways to improving the activities involved in cropping, livestock, forestry, and the processing and marketing of agricultural products, taking into consideration the primary, secondary and tertiary processes in agricultural production. Agricultural policy is defined as a public policy that conveys the decision to pursue a specified course of action aimed at improving agricultural practices, by ensuring output growth and the overall development of the agricultural sector [1].
An agricultural programme is an innovative undertaking by individuals or governments to improve the processes of cultivating the land and rearing animals in order to raise the standard of living of the individual and the nation at large. Agricultural programmes consist of a body of processes set by the government of a nation to reform the agricultural sector so as to increase the sector's productivity for sustainable development and a better life for citizens. They are long-term prospects designed to improve the way animals are reared and crops are cultivated. In Nigeria, an agricultural programme can be referred to as an undertaking embarked upon by the government in the agricultural sector that is intended to contribute meaningfully to the development of the nation and to better the life of the average Nigerian citizen. Examples of agricultural programmes in Nigeria include the Green Revolution Programme (GRP) and the Agricultural Development Projects (ADPs), among others. Agricultural policies and programmes are officially designed to sustain and better the lives of individuals in any society and also to contribute to both GDP (Gross Domestic Product) and GNP (Gross National Product) in aggregate. They are strategies and mechanisms that help government carry out set goals and objectives in order to improve the agricultural sector in the short run and, in the long run, the larger society, through the positive impacts of such programmes and policies. In contemporary society, agricultural policies and programmes have the following objectives.
Self-Sufficiency
Agricultural policies and programmes are geared to make the nation's economy self-sufficient in the production of agricultural products such as crops (cocoa, oil palm and so forth) and animals (goats and hens, among others), which helps to meet the consumption needs of citizens. In light of the above, agricultural policies and programmes are projected avenues for ensuring that a nation is not over-dependent on other countries' agricultural and agro-related products.
Social Economy
Agricultural policies and programmes enhance and better the life of every citizen by providing cash for individual sustenance. Through agricultural policies and programmes, the welfare of citizens is improved, bridging the gap between the rich and the poor through equitable distribution of income among the nationals of the country.
Foreign Exchange
Agricultural policies and programmes not only enhance the domestic economy through production that satisfies the immediate needs of individuals in the society, but also support production for export to other nations of the world. This assists cash inflow into the domestic economy through foreign exchange earnings.
Dual Collaboration
Numerous agricultural policies and programmes are designed to create a strong synergy between the private and public sectors of the economy. Government often formulates policies and programmes that open the floor for the private sector to contribute to the economy of the nation, especially the agricultural sector. This facilitates research, innovation, and the provision of facilities such as farm inputs and good roads, among others.
On-going Challenges Facing Agricultural Policies and Programmes
Over the years, agricultural policies and programmes in Nigeria have not achieved their objectives due to the following bedeviling challenges.
Lack of adequate funding
Funding, in terms of financial allocation, is an incentivizing factor in achieving a given set of goals and objectives; finance is often regarded as the powerhouse of any project or obligation of prime importance [2]. In Nigeria, most agricultural programmes lack sufficient funds to carry out their stated goals. In consonance with the foregoing, [3] noted inadequate and untimely fund release by all tiers of government, the lack of funds to procure processing machinery and equipment, and the absence of state and local government implementation committees. Over time, this has led to the non-completion of some agricultural programmes and policies, and in other cases has impeded the smooth achievement of their targeted objectives. In the long run, this hinders the growth of the sector in terms of ensuring that agricultural products are produced to cater for the nation's domestic and foreign demand.
Embezzlement and misappropriation of funds
Mismanagement is a major challenge of agriculture in Nigeria [2]. Embezzlement and misappropriation are integral parts of corruption in any society. In most agricultural policies and programmes, funds set aside for specific programmes are often diverted to other functions, and in other cases outright misuse of funds is witnessed; this has resulted in the non-implementation of stated goals and objectives. Misappropriation and embezzlement are interwoven variables that create setbacks in the processes of implementing and actualizing major agricultural policies and programmes in Nigeria. Owing to embezzlement and misappropriation of funds, most agricultural policies and programmes are often hindered, which invariably retards the development of the sector and the economy at large [4].
Lack of sufficient agricultural extension service
In any agricultural sector, extension services help to provide and disseminate proven agricultural information to farmers, including ways of improving productivity. For a successful agricultural system, extension services play the vital role of enlightening individuals involved in agricultural practices. A lack of sufficient extension services has been witnessed in most agricultural policies and programmes; an example is the Better Life Programme, which as a result neither yielded its intended results nor lasted as long as anticipated. In other cases, even when extension services are provided, there is a lack of adequate equipment and facilities to achieve the set objectives effectively. In the same vein, [5-7] found that agricultural extension services in Nigeria also suffer from inadequate facilities and input supply.
Unclear/undefined objectives
Unclear or undefined objectives of any programme or policy are an initiator of its own failure. In Nigeria, most agricultural policies and programmes are unclear in their stated objectives, and this has contributed to their pitfalls. Many agricultural programmes and policies suffer from undefined objectives on the part of the government that formulated them, which narrows their scope and subsequently leads to wasted effort. Over time, this underlying factor has made government policies on agriculture unstable [6]. Unclear or undefined objectives may be due to a lack of policy accountability [5]. Nigeria suffers from policy instability driven by a high rate of turnover of programmes and personnel, which in turn has made the application of policy instruments unstable [5].
Role conflict among various agricultural policies/programmes
For a particular agricultural policy or programme to be actualized, it must be free from role conflict with other policies; otherwise, it cannot play its role effectively because of shared functions. In other words, such a programme will have misplaced priorities. In Nigeria, role conflict among various agricultural policies and programmes causes one policy or programme to encroach on another in its area of functioning. A notable role conflict in Nigerian agricultural policy was that between the Land Use Act and Operation Feed the Nation. Such role conflict retards the implementation and actualization of both policies and programmes.
Retooling Agricultural Policies and Programmes for Sustainable Development
The following are some of the ways of retooling agricultural policies and programmes to enhance sustainable development in Nigeria.
Agricultural extension service
The role of agricultural extension services in the agricultural sector cannot be overemphasized, given their task of disseminating proven, research-based agricultural information to farmers. Such information ranges from ways of dealing with pest and disease infestation to maintaining soil fertility and dealing with deforestation, among others. Additionally, extension services help to provide farmers with improved seedling varieties, fertilizers and farm implements. Extension services help farmers to key into most agricultural policies and programmes, given that these are geared toward increasing aggregate agricultural productivity. When effective extension services are provided, farmers tend to surmount the challenges they often encounter, such as pest and disease infestation, and thus achieve increased harvest yields, which in monetary terms increases farmers' income over a given farming season. In light of the foregoing, farmers aided by extension services tend to support the success of agricultural policies and programmes, which in the long run advances the sustainable development of the nation and the actualization of those policies and programmes.
Provision of sufficient funds
For the effective and efficient execution of any project, policy or programme, sufficient funds should be made available. Through its yearly budget, the government should endeavour to benchmark sufficient funds and disburse them to the agricultural sector. For any policy or programme to yield robust benefits to citizens and the nation at large, which is key to sustainable development, sufficient funds should be provided by the government and other stakeholders in the agricultural sector to ensure implementation. This will ensure that no stone is left unturned in implementing agricultural policies and programmes so that they achieve major success. Moreover, the private sector (agencies and organisations) should always play its part in sustaining agricultural policies and programmes by funding them and supplying the needed implements, thereby ensuring the successful accomplishment of a particular policy or programme and paving the way for sustainable development.
Clear/defined objectives
For agricultural policies and programmes to enhance sustainable development, their objectives should be clear and well defined. In other words, stakeholders in the agricultural sector, especially the architects of policies and programmes, should always ensure that the pathway of a new policy for agricultural development is clear, so that it yields the needed objectives, which are more often than not pivotal for national development. Agricultural policies and programmes can be given clear, defined objectives by setting up functional boundaries within which a given policy or programme is expected to perform, so that it does not come into functional or role conflict with other policies or programmes. Through clear, defined objectives, stakeholders in the agricultural sector can articulate and predict the end results of their policies and programmes, and to a large extent their success, and ultimately their contribution to the sector and to the nation at large in terms of sustainable development.
Collaboration
Collaboration is an avenue of shared values, including information and other material and human resources that assist an organization in achieving a given goal. In the agricultural industry, collaboration is a vital tool through which agricultural policies are formulated, implemented and achieved. In other words, for agricultural programmes and policies to thrive, keen collaboration between the private and public sectors should be given priority. This will go a long way toward bridging communication gaps and allowing the private sector to contribute its part to the implementation process.
On-the-job training
On-the-job training is a means used by employers of labour to equip employees with the skills and dispositions that help them to be productive in the discharge of their jobs. In any organization, establishment or sector of the economy, on-the-job training is an important strategy for increasing aggregate worker performance. In Nigeria, the government and relevant stakeholders in the agricultural sector should ensure that agricultural personnel are given on-the-job training; this will pave the way for the effective actualization of agricultural policies and programmes. The Nigerian government should make provision for in-service training of agricultural personnel in the Ministry of Agriculture, ensuring that agricultural officials are well equipped to manage the entire process of implementing these agricultural policies and programmes.
Conclusion and Recommendations
In spite of being blessed with arable, fertile land for agriculture, Nigeria has struggled over time to harness its meaningful agricultural policies and programmes to improve the condition of the agricultural sector. In light of the findings of this paper, the researcher concludes that, if the Nigerian government adopts the insights and recommendations offered here, agricultural policies and programmes will yield their desired benefits to the agricultural sector in the short run and to Nigerian society in the long run in terms of sustainable development.
a) The government of Nigeria should ensure adequate financial backing for agricultural programmes and policies. This will assist in ensuring that agriculture-related programmes are not hampered in the process of actualizing their targeted objectives.
b) Internal auditing should be carried out regularly in the agricultural sector to check for corruption, embezzlement and misappropriation of funds in the execution of agricultural programmes and policies. This will assist in the prosecution of officials involved in such acts, thereby reducing further occurrences.
c) Public awareness and enlightenment programmes should be organized by non-governmental agencies and the government in particular, through the mass media. This will help in the actualization of agricultural policies and programmes.
https://lupinepublishers.com/agriculture-journal/pdf/CIACR.MS.ID.000129.pdf
For more Agriculture Open Access Journal articles Please Click Here: https://www.lupinepublishers.com/agriculture-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
Follow on Linkedin : https://www.linkedin.com/company/lupinepublishers Follow on Twitter   : https://twitter.com/lupine_online
Lupine Publishers | The Dental Service of the French Army in 1940
Lupine Publishers | Journal of Dentistry
Brief Historical Summary
On December 1 1892, the Brouardel law was passed, giving legal status to dental surgeons. From that point on, only those who had completed the appropriate studies at the Faculty of Medicine could practise dental surgery [1]. It was therefore a profession still in its infancy that enlisted for the war in 1939. On February 26 1916, a dental service was created in the infantry within the French army for the duration of the war; that of the Navy only started on March 1st 1916. After the Great War, only the Navy decided to dissolve its dental service, as it did not have a satisfactory organization for receiving its patients. In 1934, a reserve dental service was implemented once again. While dentists had been made lieutenants in 1918, they could become captains after the law voted on December 19 1934 [2]. A few months later, dentists in the land army were granted the same rank.
National Federation of Reserve Dental Surgeons (Fédération Nationale des Chirurgiens-Dentistes de Réserve - FNCDR)
In 1925, the first dental officers who had returned to civilian life assembled to form the Association of dentists of the land army, the navy and the air force of the Paris region, during the 3rd Congress of medicine and military pharmacy. It aimed at keeping the ties they had built during the war and at helping each other get back to normal life [3]. A two-year optional superior military training was created; students in dental surgery who passed it could accomplish the final part of their military service as 2nd class military dentists, a rank similar to that of second lieutenant. They then joined dental care garrisons, centres for edentulous patients, or centres for maxillofacial prosthodontics. In 1927, there were only eight of them. Those who had not passed the two-year optional superior military training could only join the nursing unit [4,5].
Thereafter, these associations asked the Minister of War and of National Defence for permission to open an advanced-level training school for dental surgeons called EPOR. The central administration of the French Defence Health service (Direction centrale du Service de santé des armées - DCSSA), which was in charge of diversifying the health service's branches, immediately supported the initiative, which led to the opening of the first EPOR. This school offered its first services during the monthly lectures of the Villemin military hospital in October 1926. From 1931, the initiative taken by Paris spread across the country, and each military region came to benefit from its own EPOR. Moreover, attending this school soon became a prerequisite for promotion to a higher rank [6].
In 1933, all these regional associations eventually gathered into a National Federation of Reserve Dental Surgeons, later chaired notably by Pierre Budin, one of the federation's eleven chairmen, who had received the Legion of Honour in 1920 for his bravery on the battlefield. The institution thoroughly devoted itself to the improvement of the dentists' status. The DCSSA organized the dental surgeons' training in order to ready them for all sorts of duties: anaesthetists, surgical assistants, radiologists, first-aid workers, etc. Such was the case of Pierre Henry, who was mobilised in 1940 and joined the surgical team of the Ancemont Hospital, where he practised as an anaesthetist; once demobilised, he practised in Rennes for the rest of the war [7]. It was in this way that, in the Val-de-Grâce Hospital, the general practitioner Ginestet trained dental surgeons in the specialties of maxillofacial surgery. The dentists were informed of various medical practices with respect to the damage caused by more modern weapons, which led to more complex and numerous maxillofacial injuries. All this work, organised with the help of the DCSSA, prepared the dental surgeons' corps to intervene in the event of future conflicts [4,5].
Active Dentists
The French Defence Health service, which included the dental surgery service, reshaped its organisation following the assessments carried out at the end of World War I. It decided to implement more mobile structures, designed to follow a regiment or a division or to adapt to the new schemas of so-called wars of movement. Mobile surgical groups, advanced hospitals, medical evacuation hospitals and various health services therefore appeared, and one or two dentists were incorporated into each of these structures. Through Article 39, the law of April 1 1923 provided that dental students were to accomplish their military service in the health service, as nurses if they had not finished their dental curriculum or as second class dental surgeons if they had graduated [7].
Mobilisation
World War II broke out on September 3 1939, and France mobilised. Reserve dental surgeons, most of them as lieutenants, were called up to different strategic areas. They were assigned either to the sanitary formations of the divisions, to the surgical ambulances, or to the front regiments. At the beginning of the war, they barely practised dentistry, serving mainly as assistants for specialised medical treatments. Only a few practised in the garrison dental surgeries or in those of hospitals. Indeed, with the growing mobilisation, the Health service created dental surgeries in each area settlement and in the most important garrisons. Thus, at the front, they intervened in aid stations to treat the wounded. In divisional areas, they collaborated with doctors and surgeons within mobile or advanced surgical groups. Finally, at the rear, they were employed by maxillofacial centres or radiology services. At the beginning of the war, immediate interventions and first-aid treatment had priority and demanded great effort, especially as the bombing victims were not only soldiers but also civilians. Those who had followed the teaching delivered by the EPOR were more easily assigned to surgical services [6].
Delivered Treatment
Treating the soldiers' oral pathologies, soothing their pain and quickly making them operational again was a primary objective. A man in pain does not fight well and immediately constitutes a threat to his comrades. Furthermore, a person with bad teeth cannot chew properly, which can lead to other general diseases. Seeing the soldiers and being among the troops also allowed the dental surgeons to impart notions of oral hygiene. A schematic note was written for each soldier to keep track of his treatment, and the head of the dental surgery also wrote daily reports recording all consultations and interventions carried out. Reconditioning of the oral cavity was systematically carried out before each military operation, with a compulsory preliminary tartar removal as the first hygienic step. Numerous dental extractions were carried out. Each attribution of a prosthetic device was subject to the approval of the head of the Regional Health Service, through the main stomatologist as intermediary.
Once the head of the dental surgery had given his approval, the extractions were made before the making of a device; a month and a half later, the device could be made. If, on the contrary, the making was refused, only the extractions were carried out [6]. In each dental surgery, the staff was made up of a department head, often a stomatologist or a military dental surgeon, assisted by military dental surgeons according to the needs decided by the main regional stomatologist. This department head was under the supervision of the chief doctor of the formation to which he was attached, such as the corps troops, the regimental infirmary or the military hospital, while remaining under the main stomatologist's command [7]. In 1940, continuing education was ensured thanks to magazines such as "Odontologie", which suspended publication between June/July 1944 and March/April 1945. In 1942, its issues became more spaced out for lack of advertising and because of the war, and eventually diminished until they no longer had sufficient coverage.
Stomatology Service
During World War II, after the French surrender signed at Rethondes on June 22 1940, numerous demobilised dental surgeons were called upon to treat soldiers wounded in the face. This was when maxillofacial surgery became extremely important in terms of medical care: it became a full-fledged specialty benefiting from new techniques and materials. Stomatology services existed alongside secondary organisations such as the garrison dental surgeries, which operated in each area settlement and in the most important garrisons and had been implemented at the beginning of the war.
The rear structures were organised in two different ways:
Inter-Regional Technical Services
The inter-regional centre of surgery and of maxillofacial prosthetics was independent from the regional service but was under the supervision of the chairman of the regional health service of which it was a part. It was headed by a surgeon, head of department, assisted by a stomatologist who led the maxillofacial prosthetics service. As a team, they could rely on all the specialists and dental prosthetics technicians. In these inter-regional centres, thanks to this combined surgical and stomatological training, they mainly treated damage to the maxillary bones as well as to those of the face and the neck. Simpler maxillary fractures, that is to say those without a loss of substance, were treated instead in the regional stomatological centres. Likewise, simple edentulous patients were not accepted in surgical and stomatological centres: centres for edentulous patients were implemented for the wounded who could be fitted with prostheses. Each centre was specialised, and the main objective was to orientate the wounded quickly.
Regional Technical Services
Each region had a single stomatology service; in the case of a large region, there could be more than one. A single denture centre for edentulous patients made the dentures of all the edentulous people seen in the dental surgeries of the local garrison. These local stomatology and dental prosthetic services contained from 50 to 100 beds, out of a total of around 10,000 to 20,000 beds for the entire medical sector. Odontostomatologic conditions too serious to be treated in the dental surgeries, dental illnesses, simple maxillary or prosthetic fractures, and surgical extractions were handled there, as was the simple fitting of prostheses for edentulous patients. The management of the regional centres was ensured by the main local stomatologist, who had to keep a close watch on the stomatology and dental prosthetic centre and on the dental surgeries. His staff was as follows:
i. A dental prosthetist who was able to make one set of dentures per day, that is to say 30 dentures per month;
ii. A military dentist who was able to provide the same work as that of 4 out of 5 technicians.
An officer was in charge of managing the equipment, including its maintenance and stock according to wants and needs. He also kept a log of what was fitted in the patients' mouths, with a description of each device, which made it possible to justify the price of each set of dentures. Finally, the local head could follow the activity of the centre for edentulous patients through a monthly report [8]. The military fitness instruction of December 7 1938 for metropolitan troops included a calculation giving the soldier's chewing coefficient; when dentures were needed, the fitting was authorized only if the soldier's chewing coefficient was lower than 25%. Each fully edentulous man could therefore receive a full set of dentures and be kept in the service (Figures 1-3).
In case of a maxillofacial emergency, the wounded soldier had to be taken to a maxillofacial centre within two to ten days of the injury; the quicker treatment started, the better the chances of recovery. The repair phase often required grafts, and immobilisation of the fractured bony edges was essential for success. The average period of primary consolidation of the grafts was around two months, but it was only around the fourth month that the grafts looked normal again. On November 11 1942, the German army invaded the free zone, France's southern part. A month later, Vichy's "Armistice Army" was disbanded, and the French Defence Health service ceased to exist; medical support during the rest of the conflict was entrusted to the British and, above all, to the Americans. From 1940, numerous demobilised dentists joined the resistance, their dental surgeries often serving as hubs for the exchange of information. Many of them were shot by the Gestapo [9]; many others were deported and came back from the camps weakened (René Maheu) (Figure 4). Some others enlisted in the Free French Forces (Maurice Prochasson). Figure 5 shows a dental lieutenant's uniform made between 1930 and 1935 and worn in 1940 (purple velvet on the flat-topped French military cap and on the collar tips, with the caduceus visible on each collar tip and on the buttons); the khaki military cap was made from 1930, having previously been blue.
The braids were on the turn-ups during World War I and above the turn-ups from 1934/1935; in the picture here they are on the turn-ups, so the uniform predates 1934. The dental officer's uniform was similar to that of the cavalry (executive order of 1934) (Figures 6 & 7). Among the decorations, from left to right, the following ribbons are recognisable:
i. Knight of the Legion of Honour.
ii. 14-18 Cross of War (without mention in despatches).
iii. Combatant Cross (created following the law of June 28 1930).
iv. Impossible to identify, but given the order of wearing of official French decorations, it could be the Allied Medal (or Victory Medal), created by the law of July 20 1922 and granted, subject to three months of presence (successive or not) between August 2 1914 and November 11 1918, to soldiers belonging to the units enumerated in the ministerial instruction of October 7 1922; it is worn before this distinction (Figures 8-11).
v. Médaille commémorative française de la Grande Guerre (French Commemorative Medal of the Great War).
vi. The Insignia for the Military Wounded. The trousers have a command band, which existed from 1830 [10-13].
For more Lupine Publishers Open Access Journals Please visit our website: http://lupinepublishers.us/ For more Dentistry Open Access Journal articles Please Click Here: https://lupinepublishers.com/dental-and-oral-health-journal/
Lupine Publishers | Hypertrophic Cardiomyopathy in Children: The Need for Heart Transplantation
Lupine Publishers | Cardiovascular Research
Abstract
Hypertrophic cardiomyopathy (HCM) is the most common cardiac disease affecting the cardiac muscle. It can manifest in different forms, with or without left ventricular outflow obstruction and with or without right ventricular involvement. Forms with biventricular hypertrophy seem to have a poor prognosis. We describe a young patient with sarcomeric biventricular hypertrophic cardiomyopathy (MYH7 mutation), the poor prognosis of this form, and the strategic options adopted after failure of medical treatment. The management of hypertrophic cardiomyopathy after medical treatment failure is not always easy, especially in children; in some cases, heart transplantation is the only therapeutic option.
Keywords: Hypertrophic Cardiomyopathy; Right Ventricular Hypertrophy; Heart Transplantation
Introduction
Hypertrophic cardiomyopathy (HCM) is the most common cardiac disease affecting the cardiac muscle and is characterized by heterogeneous genetic, morphological, functional and clinical features. It is also one of the main causes of sudden cardiac death (SCD) in the young. Left ventricular hypertrophy with left ventricular outflow tract obstruction (LVOTO) is the most characteristic feature of HCM, but there are also variants of HCM without LVOTO, with apical hypertrophy, with mid-ventricular obstruction, and with right ventricular hypertrophy. The treatment and prognosis of HCM appear to vary with the form of the disease, the age at presentation, and the presence of sarcomeric gene mutations or rare phenocopies. Heart transplantation (HT) is the only therapeutic option for selected patients with HCM and refractory heart failure. Indeed, ESC guidelines recommend heart transplantation in eligible patients who have an LVEF < 50% and NYHA functional Class III-IV symptoms despite optimal medical therapy, or intractable ventricular arrhythmia (Class IIa), and in eligible patients with normal LVEF (≥50%) and severe drug-refractory symptoms (NYHA functional Class III-IV) caused by diastolic dysfunction (Class IIb) [1].
HCM with severe right ventricular hypertrophy (SRVH) is a relatively rare subtype. The anatomic, genetic, clinical and prognostic characteristics of patients with SRVH, and their clinical relevance, have not been widely described in the literature [2,3]. MYBPC3 gene mutations have previously been described in two patients with RV hypertrophy. In a recent study, 90% of HCM patients with SRVH were found to carry relevant sarcomere protein mutations, with variations in the MYH7 (myosin heavy chain 7) and TTN genes, followed by variations in MYBPC3. In the same study, 73% of HCM patients with SRVH and multiple sarcomere gene mutations had a poor prognosis [7]. In addition, MYH7 mutations can cause hypertrophic cardiomyopathy or skeletal myopathies with or without cardiac involvement, depending on the site of the mutation. Here we describe the poor prognosis and treatment strategy of a young patient with biventricular hypertrophic cardiomyopathy and an MYH7 mutation.
Case Report
A 12-year-old girl with a family history of hypertrophic cardiomyopathy (her mother and her mother's twin both have biventricular hypertrophic cardiomyopathy with an MYH7 mutation) was hospitalized in our hospital for dyspnea on mild-to-moderate exertion and reduced functional capacity (NYHA Class II). The patient's mother and aunt were asymptomatic with good functional capacity. The patient carried the same genetic mutation as her mother and aunt (p.Asn696Ser in heterozygosis, MYH7) but with a greater and poorer phenotypic expression [4]. Echocardiography and cardiac magnetic resonance (CMR) were performed, showing hypertrophic cardiomyopathy with right ventricular involvement. Specifically, cardiovascular imaging showed asymmetric left ventricular hypertrophy, especially of the anterior and inferior walls (basal and mid anterior wall = 14 mm, z score = 3.5; basal antero-lateral wall = 12 mm, z score = 2.78; mid inferior wall = 14 mm; apical inferior wall = 12 mm), with normal ejection fraction (EF = 62% at CMR) and moderate diastolic dysfunction (Panels B and D). In addition, the wall thickness of the right ventricular outflow tract and of the basal-mid free wall was increased (13 mm), with apical obstruction and a maximum gradient of 10 mmHg (Panels A and C) [5,6] (Figure 1).
Figure 1.  
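For readers unfamiliar with echocardiographic z scores, the values quoted above express each wall-thickness measurement as a number of standard deviations from the mean predicted for the patient's body size. The sketch below is purely illustrative: the predicted mean and standard deviation are assumed values, not taken from this case or from any specific reference curve.

```latex
z = \frac{x_{\text{measured}} - \mu_{\text{predicted}}}{\sigma_{\text{predicted}}}
% Illustration with assumed reference values:
% x = 14\,\text{mm},\quad \mu = 8\,\text{mm},\quad \sigma = 1.7\,\text{mm}
% \Rightarrow z = (14 - 8)/1.7 \approx 3.5
```

Published pediatric reference curves give the predicted mean and standard deviation as functions of body surface area, and a z score above about 2 is generally taken to indicate hypertrophy.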
Right ventricular function was at the lower limits of normal (EF = 51% at CMR; TAPSE = 16 mm at echocardiography). The patient thus had significant right ventricular involvement and moderate left ventricular diastolic dysfunction. She had no arrhythmia on ECG-Holter monitoring, but her reduced functional capacity was also demonstrated at stress testing: the test (Bruce protocol) was suspended at 6 minutes because of pre-syncopal symptoms, and a lack of blood-pressure adaptation to effort was observed. In addition, for several months she had had pre-syncopal episodes at peak effort. ECG showed left ventricular hypertrophy and biatrial enlargement. Pro-BNP was elevated at 5841 pg/ml. Considering the clinical situation, we decided to start medical treatment with a beta-blocker (bisoprolol), which the patient did not tolerate. We then started low-dose captopril, without improvement of the symptomatology, and treatment with a diuretic was likewise not tolerated [7,8]. Therefore, considering the symptoms refractory to medical therapy, the poor prognosis and the impossibility of optimizing medical treatment, we decided to plan cardiac transplantation, the only option possible at this moment.
Right catheterization was therefore performed, and the patient was placed on the heart transplantation list. ICD implantation was not considered in the absence of ventricular arrhythmia and other risk factors. Discussion: hypertrophic cardiomyopathy associated with an MYH7 mutation and right ventricular involvement seems to have a poor prognosis, especially when right ventricular hypertrophy is severe [9]. Indeed, the young patient had greater right ventricular hypertrophy than her mother and aunt. In such cases, after medical treatment failure, heart transplantation seems to be the only strategy to improve the patient's symptoms and quality of life. Especially in the pediatric population, the management of hypertrophic cardiomyopathy after medical treatment failure is not always easy, and heart transplantation may be the only therapeutic option. Further studies are needed on the variants of HCM with right ventricular hypertrophy, their treatment and their prognosis.
https://lupinepublishers.com/cardiology-journal/fulltext/hypertrophic-cardiomiopathy-in-children-the-need-of-heart-transplantation.ID.000130.php
For more Cardiovascular Research go through the below link
https://lupinepublishers.com/cardiology-journal/index.php