open-access-public-health
Public Health Open Access Journal
10 posts
open-access-public-health · 4 years ago
Annals of Public Health & Epidemiology - APHE
Generic Substitution in Patients Whose Illness Has a Narrow Therapeutic Index, Such as Epilepsy
Authored by Roy G Bera*
Abstract
Introduction: Generic substitution is championed on the basis of claimed health economic benefits, namely lower costs and therapeutic bioequivalence. This ignores the potential for a change in the salt or excipient to effectively change the availability of the active ingredient. This has relevance when treating conditions with a narrow therapeutic index, as seen in the management of patients with epilepsy (PWE).
What is meant by generic equivalence: This paper specifically addresses generic medications as those compounds produced by a company other than the innovator (research-based) pharmaceutical company and recognises that these need to satisfy bioequivalence parameters of being within 80-125% of the parent compound at the 90% confidence level. Different generic compounds are not tested against each other but only against the parent medication.
Narrow therapeutic index: Some conditions have a narrow therapeutic index, derived by comparing the 50% efficacy dose (E50) with the 50% lethal dose (L50) and considered narrow if the ratio is ≤ 2. The European Medicines Agency reduced acceptable bioequivalence to 90-111% [12] for treatments with a narrow therapeutic index. It is acknowledged that each formulation may have altered absorption characteristics and solubility properties, leading to possible differences in efficacy and safety. Bioequivalence studies do not account for potential batch-to-batch variations nor possible drug interactions.
Relevance to the treatment of epilepsy: Antiseizure medications (ASM) achieve seizure freedom in only about two thirds of patients, indicative of suboptimal management and a need for finesse, requiring stability once seizure control is achieved. This can be undermined by generic substitution, recognizing that the generic medication may not be identical and may not respect the narrow therapeutic index. There may be many generic formulations, and problems may arise when substituting one generic compound for another, especially if one is at 80% and the other at 125% bioequivalence, thereby effectively either doubling or halving the effective dosage. While prescribing physicians in Australia have the option of refusing the right to substitute brands, by ticking the appropriate box on the prescription, this is not always adhered to by the dispensing pharmacist.
Specific examples in epilepsy management: Elevated lamotrigine blood levels were noted in PWE taking the innovator, parent compound, Lamictal®, with some requiring intervention for toxicity. It emerged that the pharmaceutical company had changed manufacturers, thereby marketing a generic as the parent compound while claiming Good Manufacturing Practice, confirming the suspicion that the excipient caused the elevated levels. A patient admitted to hospital for an unrelated condition was given a generic equivalent of Keppra®, which halved the blood level and precipitated a breakthrough seizure.
Conclusions: Generic substitution, advocated on the basis of health economic benefits, may not realize those benefits, owing to the inherent risks when treating PWE. There is the risk of either toxicity or breakthrough seizures, with foreseeable risk of harm and resultant potential litigation. Prescribing by proprietary trade name, with an indication that brand substitution is not permitted, may obviate the risk of substitution, even of one generic for another, although not all pharmacists respect this directive.
Introduction
Generic substitution is advocated on the basis of health economics [1]. The claim is a reduction in the cost of health care, on the assumption of bioequivalence between the generic substitute and the parent compound upon which it is based [2]. This need not be the case, considering that a change in the salt that may be used [3], or in the so-called ‘inert’ excipient included in the generic, may affect the bioavailability of the active ingredient contained within the generic substitute [4]. This is of particular importance in illnesses with a narrow therapeutic index. Such conditions, typified by epilepsy, have a restricted range of therapeutic benefit in which therapy provides efficacy without resultant and unacceptable adverse effects [5]. Patients with epilepsy (PWE) are treated with anti-seizure medications (ASM), known to have a narrow therapeutic index requiring close monitoring. Control of seizures is the hallmark of acceptable treatment, without which there may be serious consequences, but this should not be at the cost of unacceptable adverse events [6,7]. The paper to follow examines generic substitution for PWE.
What is Meant by Generic Equivalence?
A ‘generic medication’ may be a therapeutic agent marketed under its non-proprietary, approved ‘trade’ name, or it may be a medication marketed with a different brand (proprietary) trade name [7], produced by a company other than the innovator (research-based) manufacturer, claiming bio-equivalence to the parent compound upon which it is based [8]. For the purpose of this review, “generic” will denote a supposedly bioequivalent medication that is not the product of the innovative research-based pharmaceutical company but rather one designed to capitalize upon that innovation towards the end of its patent protection, the purpose being to benefit directly and financially from the innovation by essentially “copying” a successful therapy with a similar formulation: identical active ingredient but different excipient or salt content. Generic equivalents can enter the market and be produced at potentially much cheaper cost, estimated to be 20-90% cheaper [9]. Some innovative pharmaceutical companies also market their original, innovative medication, identified with a different ‘proprietary’ trade name, as a generic equivalent, the purpose being financial expedience designed to maintain market share [10].
Such generic compounds, unlike the original innovator, previously tested both in healthy volunteers and extensively trialled in phase I–III studies, need only show bioequivalence in a limited sample of healthy volunteers [11,12], possibly < 30 people but definitely < 100. Efficacy and safety of the generic compound should be within defined boundaries and demonstrate that the identical amount of active ingredient(s), delivered in identical dosage via the same route of administration, achieves the same, or comparable, standards of drug availability [8]. Bioequivalence testing requires ≤ 100 healthy volunteers to receive both the generic and parent medication, in a randomized sequence, with an appropriate washout period between them [13]. Both before and after such administration, plasma concentrations are evaluated at regular, predetermined intervals. The identical active moiety, in both the generic and the innovator parent compound, must be compared in the same individual, making each subject his/her own control [13].
Demonstrating such bioequivalence relies upon parameters such as peak plasma concentration (Cmax) and area under the plasma concentration-time curve (AUC), confirming equivalence of the rate and extent of absorption of the active ingredient [13,8]. The FDA accepts bioequivalence if, at the 90% confidence interval, the ratio of AUC-generic to AUC-branded and of Cmax-generic to Cmax-branded, averaged over the sample of participants, lies between 0.80 and 1.25, namely the generic is between 80% and 125% of the innovator, at the 90% confidence level, for AUC and Cmax [13,8]. The inherent implication of this prerequisite, assuming there exists more than a single generic equivalent to the parent compound, is that the generic at 80% bioequivalence is also deemed bioequivalent to one testing at 125%, despite there being >56% variation between the two. One generic compound is not tested against other generic medications based on the same innovator, parent compound.
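As a worked illustration of the arithmetic behind that last point (a sketch only; the 0.80 and 1.25 figures are the regulatory bounds quoted above, not data from any specific products), the following snippet computes the worst-case spread between two generics that each satisfy the 80-125% rule against the same parent compound:

```python
# Worst-case spread permitted by the 80-125% bioequivalence window.
# Two hypothetical generics, each individually acceptable against the parent:
generic_a = 0.80   # generic A at the lower acceptance bound (80% of parent)
generic_b = 1.25   # generic B at the upper acceptance bound (125% of parent)

spread = generic_b / generic_a          # exposure ratio of B relative to A
print(f"Generic B delivers {spread:.4f}x the exposure of generic A "
      f"(~{(spread - 1) * 100:.0f}% more)")   # 1.5625x, i.e. >56% variation
```

On these nominal bounds the worst-case difference between two generics is roughly 56%; the near two-fold ("doubling or halving") figure used elsewhere in this paper corresponds to the wider 74-142% real-world variation reported below.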
Narrow Therapeutic Index
Medications used to treat some patients, such as patients with epilepsy (PWE), have a ‘narrow therapeutic index’. This index is calculated by comparing the 50% efficacy dose (E50) with the 50% lethal dose (L50), and is considered narrow if the ratio is found to be ≤ 2, emphasizing the necessity for true bioequivalence [14]. Recognizing this, the European Medicines Agency limited the acceptable variation of bioequivalence to 90–111% [12] for medications with a narrow therapeutic index, which still reflects >23% variation.
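For clarity, a minimal sketch of the two calculations referred to above; the dose values are placeholders for illustration only, and the 90-111% window is the EMA bound quoted in the text:

```python
# Narrow therapeutic index: ratio of the 50% lethal dose to the 50% efficacy dose.
# The dose values below are hypothetical, purely to illustrate the definition.
l50 = 180.0   # hypothetical 50% lethal dose (arbitrary units)
e50 = 100.0   # hypothetical 50% efficacy dose (same units)
index = l50 / e50
print(f"Therapeutic index = {index:.1f} -> {'narrow' if index <= 2 else 'not narrow'}")

# Residual spread allowed by the EMA's tighter 90-111% window
print(f"EMA window spread: {(1.11 / 0.90 - 1) * 100:.0f}%")   # ~23%
```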
While the maximal accepted variation is set at 80–125%, there are reports that the true variation may be as great as ~70–140% (namely 74–142%), equivalent to either 50 or 100% variation when comparing alternative generic formulations [15,16]. While the active ingredient in the generic medication may be identical to that within the innovative parent compound, it may be constituted in a different form (referred to by the FDA as a pharmaceutical alternative [17]). This may be represented by a different salt or ester of the active moiety, or alternatively by different dosages or strengths (for example, amlodipine may be supplied as the besylate or camsylate salt, and iron as ferrous sulfate or gluconate) [17]. Each formulation may have altered absorption characteristics and solubility properties, leading to possible differences in efficacy and safety [17]. Bioequivalence studies do not account for batch-to-batch variability nor country-to-country variations. In complex diseases, such as Parkinson’s Disease, patients may absorb the first daily dose more slowly, consequent to delayed gastric motility, and are often on multiple medications, including dopamine agonists, anticholinergics, monoamine oxidase inhibitors and antipsychotics, which may have direct relevance when being treated with either a generic or the innovator parent medication [18]. Drug interactions have not been adequately addressed by simple bioequivalence studies [18].
Relevance to the Treatment of Epilepsy
When treating PWE, appropriate ASM are reported to stop seizures in approximately two thirds of patients [19], indicating that available ASM remain suboptimal and mandating a degree of finesse in management [20] and a need to respect appropriate dosing, with maintenance of a stable regimen, without fluctuations, once stability is achieved. It follows that once seizure freedom has been established, it is ill advised to modify the available dosages. Generic substitution need not equate to identical bioavailability to that of the ‘parent medication’ but rather need only have efficacy demonstrated to be within minus 20% to plus 25% of the parent compound upon which the generic medication is based [6,7]. Epilepsy, with its narrow therapeutic index relevant to the use of ASM, affects approximately 2% of the population [21], and there may be more than 10 generic substitutes for a given innovative parent compound, as is the case with lamotrigine.
‘MIMS’, a local listing of available medications in Australia, records 12 different formulations of lamotrigine (APO Lamotrigine; Lamictal; Lamidus; Lamitan; Lamotrigine GH; Lamotrigine Sandoz; Lamotrust; Logem; Noumed Lamotrigine; Reedos; Sandoz Lamotrigine; and Tolemo DT), of which Lamictal is the parent compound. A similar situation applies to other ASMs, such as levetiracetam (MIMS cites: Levactam; Levecetam; Levetiracetam; Levetiracetam SZ; Levetiracetam AFT; Levetiracetam GH; Levi; and Keppra – the parent compound). The problem arises when one generic formulation is substituted for another. Assuming the first medication was at minus 20% and the alternative at plus 25%, as demonstrated above, that amounts to almost doubling, or halving, the effective dosage, without deliberately changing the prescribed amount of ASM. This may have serious consequences for the unsuspecting patient, who is totally reliant on his/her treating doctor who has prescribed the ASM or on the pharmacist, a learned intermediary, who dispenses the medication.
Proprietary trade names of the various agents may be very similar, as with lamotrigine, for which five include “lamotrigine” within the registered trade name. Patients can be forgiven for assuming that they are identical to each other, a situation often reinforced by the dispensing pharmacist. Dispensing pharmacists in Australia have been accused of advocating generic substitution on the basis of direct financial gain, rather than specific consideration of the patient’s needs [22]. There was a change in the Pharmaceutical Benefits Scheme which encouraged the substitution of innovative parent compounds with generic alternatives [23]. From December 1994, pharmacists were given the option of generic substitution as part of their dispensing services [24], a situation raising some concerns within the Australian Medical Association [24]. It behoves the treating doctor who does not want such substitution to indicate that intention. In Australia, the government-issued prescription pad includes the option for the prescribing physician to nominate “Brand substitution (is) not permitted”. The expectation follows that, once this box has been marked as pertinent, the dispensing pharmacist will respect and accommodate the doctor’s wish to retain the prescribed brand name, irrespective of whether that be the innovative parent medication or a generic substitute, as stipulated on the prescription. This is far from universal [25], with some pharmacists totally ignoring this directive from the treating/prescribing doctor [25]. Such substitution has the potential to disrupt patient care for PWE as it may result in significant alteration in the bioavailability of what has, thus far, been an effective regimen. The direction not to substitute does not preclude the use of generic compounds; rather, it should prevent the substitution of one generic medication for another, thereby obviating the identified risk of disrupting patient care by possibly ‘doubling’ or ‘halving’ the effective dosage once the patient has been stabilized on the prescribed ASM. This consideration would be enhanced if prescribing doctors adopted the approach of only prescribing medications using the proprietary trade names of the chosen medication(s), identifying these as specifically nominated generic ASM or the parent compound.
Specific Examples in Epilepsy Management
Lamotrigine (Lamictal®) blood levels were noted to be universally climbing, without the treating doctor having changed anything in the patients’ regimens, with some patients, previously stable and well controlled, showing evidence of medication toxicity [26]. After confirming that the pharmacists had not substituted a generic alternative, asking the patients to produce their medication packaging, and confirming with the evaluating laboratory that there had been no change in measurement methodology, the responsible pharmaceutical company confirmed that it had changed manufacturer, which resulted in the inadvertent substitution of a generic compound marketed as the parent medication and in the increase in blood levels of the ASM, without any change in dosage or therapeutic regimen [26]. The pharmaceutical company insisted that the new manufacturer had fully complied with Good Manufacturing Practice (GMP), which indicates that the observed changes in blood levels, with some patients experiencing medication toxicity, were most probably not a consequence of faulty manufacturing but rather a direct result of altered bioavailability, consequent to inadvertent generic substitution with an altered excipient, despite maintaining the same active ingredient as the parent compound (Lamictal®) [26].
The take-home message from this report was twofold: it underscored the direct value of monitoring ASM blood levels, without which this observation may have gone unnoticed; and it further underscored the need to take changes, as observed in this series of cases, very seriously and to pursue any unexpected results to determine the underlying cause. There was a report of a PWE who was well controlled until being admitted to hospital, where the formulary only included a generic of levetiracetam [7]; the patient, while in hospital, was treated with that generic and discharged from hospital on it without being advised of the generic substitution [7]. She experienced a breakthrough seizure, with her blood level of levetiracetam having halved, resulting in her being accused of non-compliance. Neither she nor her treating physician was aware of the brand substitution, with the hospital discharge summary only referring to the generic name of the medication and indicating that there had been no change in treatment while the patient was in hospital for an unrelated problem, citing the same generic name both prior to and subsequent to hospital admission. It was only once the patient left the doctor and examined her medications that she realized what had occurred and brought it to the doctor’s attention [7].
It is far from clear that generic medications interact identically with other preparations, prescribed for unrelated diagnoses. Some ASM are known to interact as may be the case with hormone replacement for which interactions have been reported, especially with lamotrigine [27]. This raises additional unanswered questions, when contemplating generic substitution, especially for PWE, but it definitely is not restricted to PWE. These considerations apply to any condition in which there is a narrow therapeutic index for which epilepsy is but one example.
Should untoward consequences follow the introduction of a generic substitute, this has the potential to undermine any economic rationale anticipated from such substitution. Patients may experience loss of income, following loss of employment consequent to a seizure, resulting in a domino effect with serious social sequelae. Litigation is also a costly proposition: unwanted consequences directly caused by generic substitution, such as death or injury arising from changes in the therapeutic regimen, may give rise to a cause of action in negligence.
Conclusion
Generic substitution, advocated on the basis of improved health economics, may not realize those benefits, particularly for patients with illnesses for which there is a narrow therapeutic index, such as PWE. There is the risk of potential harm, with patients experiencing either toxicity or breakthrough seizures, which may initiate a domino cascade of serious consequences. Despite the option to nominate that brand substitution is not permitted, some pharmacists have ignored this directive, placing patients at risk. It behoves prescribing physicians to use proprietary trade names when writing prescriptions, to further protect against brand substitution, be it from the innovator or between the various generic alternatives, of which there may be many.
To read more about this article: https://irispublishers.com/aphe/fulltext/generic-substitution-in-patients-whose-illness-has-a-narrow.ID.000510.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Annals of Public Health & Epidemiology - APHE
Composition of Camel Milk: A Blessing for Health
Authored by Asim Faraz*
Abstract
Camel milk is a very rich source of protein with potential antimicrobial and protective activity. Water is among the important factors that markedly affect camel milk composition; its content varies from 84-90 percent. A most remarkable feature of the dehydrated camel is its ability to maintain lactation, secreting milk with over 90 percent water content, which can be considered a natural adaptation to provide the necessary fluids to the calf. The relative amounts of the fat, protein and lactose components of camel milk are very similar to those in cow milk, and the water content affects the fat percentage. Moreover, the fat present in camel milk does not form a layer; it is evenly distributed throughout the milk as small micelles, which makes its digestion easier. It also contains a higher concentration of long-chain fatty acids, which is healthier. The pH of camel milk ranges between 6.5-6.7 with an average of 6.56, and its density between 1.025-1.032 with an average of 1.029; both values are lower than those of cow milk. The colostrum of the camel is slightly diluted and clearly white like normal milk, unlike that of bovines. Generally, colostrum transforms to milk within 7-10 days in camels. Camel milk has the unique property of inhibiting the growth of microorganisms, as it contains protective proteins and enzymes with special antibacterial and antiviral properties, such as lactoferrin, peptidoglycan recognition protein and lactoperoxidase. It also contains protective insulin, relevant to the treatment of diabetes mellitus. This paper describes the composition and health benefits of camel milk and will thus be a useful addition to the field of camel science.
Keywords: Camel; Milk; Food; Desert; Pastoral
Introduction
The camel is very important in many countries where it is used as a food animal. Its ability to utilize rangeland in marginal areas and to survive and produce well under hot and harsh environments has been recognized greatly over the years [1]. Globally, the camel enjoys a very unique ecological and socioeconomic status [2]. It has been giving service to humans under highly marginalized ecosystems and very harsh climatic conditions since ancient times [3]. The camel has numerous unique capabilities and characteristics: it can be ridden, loaded with baggage, milked, eaten, traded for goods, harnessed to plough and used for other agricultural operations, and exhibited in the zoo [4,5]. The camel plays a pivotal role in the subsistence pastoral economy in diverse eco-zones [6]. Due to its appearance and ability to survive in hot, harsh, arid environments and to tolerate many stresses, e.g. heat and scarcity of feed and water, the camel has fascinated mankind [7]. The camel can digest dry matter and crude fiber better than other ruminants. This dry matter and high fiber digestibility might be attributed to the longer retention time of large particles in its fore stomach and the unique movement of the fore stomach of camels [8]. The dromedary is the most important among the domestic animals in hot, arid and semiarid regions, as it has the potential to produce higher-quality foods like milk and meat under extremely harsh weather at comparatively lower cost [9]. It has great tolerance to high temperatures, solar radiation and water scarcity. It can thrive well on sandy deserts with poor vegetation and mainly consumes those feed materials which are unutilized by several other domestic animals [10].
Milk composition is dependent on various factors such as species, breed, nutritional status, management, parity, age, physiological status and season. Milk composition and quality are markedly altered by many factors such as genetics, physiology (age, body weight, stage of lactation), milking methods and management [11]. Milk composition is mainly affected by the stage of lactation [12]. It changes throughout the year; milk fat and total solids were found to be significantly higher in early-lactation and non-pregnant females as compared to late-lactation and pregnant she-camels [13].
Milk Composition
The composition of camel milk varies greatly from the milk of other large and small ruminants. Kappeler S, et al. 1998 [14] reported the camel milk composition as fat 3.8%, protein 2.7-4.7%, calcium 1000-1400 mg/L, phosphorus 0.650-1.10 mg/L, iron 0.3-0.8 mg/L. Four fractions of casein in camel’s milk have been isolated, with casein particles ranging from 20-300 nm in diameter. Their amino acid composition has similarities to the αs1, αs2, β and γ caseins, respectively, of cow’s milk. The fat content of camel milk varies from 2.5-5.9% with a mean of 4.6%, lactose from 4.8-5.8%, which is slightly higher than that of cow’s milk, minerals in terms of ash from 0.6-0.8%, and protein is 2.8%. The literature reveals that fat and protein percentages in camel’s milk range between 2.4-5.5% and 2.5-4.5%, respectively. Mean values for fat and protein percentages were reported to be 3.57±0.09 and 2.85±0.036, respectively, in Pakistani camels [15]. [16] reported fat and protein percentages as 2.63% and 4.01%, respectively, in Kohi camels in Balochistan. Elamin FM and Wilcox [17] reported 3.15% fat and 2.81% protein in milk of Majaheim camels in Saudi Arabia. Reported fat and protein percentages were 3.22% and 2.91% in Majaheim, 2.85% and 2.52% in Hamra and 2.46% and 2.36% in Wadah camel’s milk, respectively, in Saudi Arabia [18]. Camel’s milk was found to be less fatty than cow’s milk, so it is readily digested [19]. Reported percentages of fat, protein, SNF, lactose, ash and water were 5.5, 4.5, 8.9, 3.4, 0.9 and 85.6, respectively, in Ethiopian camels [20].
Yagil R and Etzion Z [21] reported the milk composition of the camel as 4.3, 4.6, 14.3, 4.6, 1.01 and 85.7 percent fat, SNF, protein, lactose, ash and water, respectively, while it was 1.1, 2.5, 8.8, 2.9, 0.96 and 91.2 percent fat, protein, SNF, lactose, ash and water, respectively, in dehydrated camels. It is evident that when camels are exposed to drought conditions (seasonal water shortage), a physiological mechanism changes the water content of the milk to provide fluid to the calf. [22,23] reported the camel milk fat, protein, SNF, total solids and casein contents as 4.2±1, 3.0±0.6, 8.7±1.6, 12.8±1.6 and 2.4±0.5 percent, respectively, in the Errer valley. Guliye AY, et al. [24] reported the milk composition of Bedouin camels as 2.79, 3.39, 4.81, 0.77 and 11.5 percent protein, fat, lactose, ash and total solids, respectively. [25] studied the physicochemical quality of camel milk and reported ranges for total solids, solids-not-fat, fat, protein, casein, lactose and ash of 7.76-12.13, 5.56-8.29, 1.8-5, 1.8-3.2, 0.78-2.76, 2.9-4.12 and 0.85-1 g/100 g. [26] summarized 82 reports and gave the average camel milk composition as 3.82±1.08, 4.46±1.03, 3.35±0.62, 0.79±0.09 and 12.47±1.53 for fat, lactose, total protein, ash and total solids, respectively. [27] studied the milk yield and characteristics of Tunisian camels and reported average total solids, fat, lactose, ash and total protein concentrations of 116.76±11.32, 35.67±7.61, 43.82±5.68, 8.21±0.64 and 29.45±3.29 g/L, respectively. [28] studied the milk production of dromedary camels under intensive management in the United Arab Emirates and reported average fat, protein, lactose, total solids and solids-not-fat (SNF) concentrations of 2.51±0.03, 2.60±0.01, 4.03±0.03, 9.98±0.03 and 7.56±0.03%, respectively. Camel milk has a higher level of vitamin C and equal or higher levels of vitamins B1 and B12 than the milk of other domestic animals [29], and this high level of vitamin C is especially important for nomadic pastoralists as they face a scarcity of fruits and vegetables [30,31]. Reported ranges for fat and protein percentages were 2.60-3.20 and 3.73-3.89, respectively, in Indian camel milk. In a later study, [32] reported fat and protein percentages of 5.5% and 3.87%, respectively, in Indian Bactrian camel milk. Reported mean values for fat and protein percentages of Mauritanian camel milk were 2.92±0.59 and 2.50±0.10, respectively [33]. The Shoel breed, first stage of lactation and settled system showed the highest reported concentration of fat, while the Soffer breed, first stage of lactation and semi-nomadic system showed the highest concentration of protein [34]. The range for lactose percentage was 2.9-5.8% in dromedary camel’s milk [35,15]. The reported percentage of lactose was 4.16% in Majaheim camel’s milk in Saudi Arabia [17]. Mehaia MA, et al. [18] reported 4.43% lactose in Majaheim, 4.46% in Hamra and 4.44% in Wadah camel’s milk, respectively, in Saudi Arabia. Morin DE and Rowan LL [36] reported fat and sugar percentages of 2.7% and 6.5% in llama milk in the USA. The reported mean value for lactose percentage was 4.91±0.61% in Mauritanian camel’s milk [33]. The Soffer breed, first stage of lactation and semi-nomadic system showed the highest concentration of lactose, while this value decreased with subsequent parities [34]. SNF and total solids percentages in camel’s milk ranged between 8.9-14.3% and 11.5-17.8%, respectively.
Mean values for SNF and total solids percentages were reported to be 9.00±0.13 and 12.36±0.19, respectively, in Pakistani camels [35]. In recent studies, [37] reported fat and protein percentages of 4.44±0.46 and 3.42±0.04, respectively, in milk of Marecha dromedary she-camels under extensive conditions in the Thal desert. Faraz A, et al. [38] investigated the milk yield and composition of Barela dromedary camels under extensive conditions in the Thal desert and reported fat and protein percentages of 4.26±0.36 and 3.62±0.06, respectively. The mean values and ranges of milk fat, protein, lactose, SNF and total solids of dromedary camels in the Thal desert were found to be 4.79±0.48, 3.56±0.06, 4.92±0.05, 9.09±0.06, 13.88±0.09 and 3.95-4.98, 2.85-3.96, 3.91-5.08, 7.62-9.67, 12.52-14.85, respectively [39]. Elamin FM and Wilcox CJ [17] reported 7.8% SNF and 10.95% total solids in milk of Majaheim camels in Saudi Arabia. Reported SNF and total solids percentages were 8.13% and 11.35% in Majaheim, 7.78% and 10.63% in Hamra and 7.61% and 10.07% in Wadah camel’s milk, respectively, in Saudi Arabia [18,30,31]. Reported ranges for SNF and total solids percentages were 7.25-8.25 and 9.85-11.45, respectively, in Indian camel’s milk. Reported SNF and total solids percentages were 9.18% and 14.68%, respectively, in Indian Bactrian camel’s milk. [32,33] reported mean values for SNF and total solids of 8.88±0.08 and 11.80±1.0 in Mauritanian camel’s milk. The Soffer breed, first stage of lactation and semi-nomadic system showed the highest concentrations of SNF and total solids, while these values decreased with subsequent parities [34]. Mean values reported for titratable acidity and specific gravity were 3.57±0.09 and 1.03±0.007 in Pakistani camel’s milk [35,15]. Mehaia MA, et al. [18] reported percent acidity values of 0.144 in Majaheim, 0.137 in Hamra and 0.140 in Wadah camel’s milk, respectively, in Saudi Arabia. [30,31] reported a range for percent acidity of 0.12-0.14 in Indian camel’s milk. The mean value reported for percent acidity was 16.1±1.2 in Mauritanian camel’s milk [33].
Average values for mineral composition were reported to be 49.42, 15.04, 0.55, 0.07, 0.22 and 1.42 mg/100 g of Na, Mg, Fe, Mn, Cu and Zn, respectively, in milk of Kohi camels [16]. Elamin FM and Wilcox CJ [17] reported mineral contents of Ca (30.03), K (72.48), Mg (4.50), Na (43.10), Fe (0.28) and Pb (0.18) mg/100 g in milk of Majaheim camels in Saudi Arabia. Reported mean values for Ca, Mg and P were 120.6±16.6 mg%, 11.2±2.0 mg% and 82.1±10.4 mg%, respectively, in South Moroccan camels. [40,41] reported ranges of Ca, Mg and P of 74.1-200.2 mg%, 5.9-115 mg% and 2.1-92.9 mg%, respectively, in Somali camels. Reported mean values for Fe, Cu and Zn were 1.00±0.12, 0.44±0.04 and 2.00±0.02, respectively, in Indian dromedary camels [42]. Camel milk contains higher values of trace minerals such as Fe, Zn and Cu as compared to bovine milk [30,31]. Mal G and Pathak KML [32] reported Ca and P of Bactrian camel’s milk in India as 0.11 and 0.09 percent, respectively. Meiloud GM, et al. [33] reported a mean value for mineral content in Mauritanian camel’s milk of 1.30±0.09. Aljumaah RS, et al. [34] examined factors such as breed and production system that alter the milk composition of dromedary camels in Saudi Arabia and reported that Ca and K values were higher in the Maghatier breed and in the semi-nomadic system.
Conclusion
Camel milk is of great importance, with rich contents and therapeutic peculiarities, and is worth considering for adults and infants. It can be used safely in people with allergies and lactose intolerance. It could be used in the management of some diseases such as liver disorders, long bone pain, joint problems and diabetes, and may also be used as an aphrodisiac globally. As awareness grows, it is becoming recognized as a distinctive food with a composition similar to that of cow milk but with additional therapeutic worth; hence its use should be encouraged.
To read more about this article: https://irispublishers.com/aphe/fulltext/composition-of-camel-milk-a-blessing-for-health.ID.000509.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Annals of Public Health & Epidemiology - APHE
Treatment of Infected Tibial Non-union: A Systematic Review and Meta-Analysis
Authored by Mohammed Anter Meselhy*
Abstract
Objectives: The aim of this work is to systematically review and perform a meta-analysis of the treatment of infected non-union of the tibia.
Methodology: A systematic review was completed. The study objective was defined and a comprehensive literature search was performed in SCI, PubMed, the Cochrane Library and Embase for the period between January 2000 and February 2019. Major data were statistically analyzed, including number of patients, management and external fixation time. Searches were performed in the MEDLINE, Life Science Citations, PubMed, Google Scholar and EMBASE Biochemical (http://www.embase.com) databases, with no limits set during the search. Terms aimed at capturing the target diagnosis, such as “infected tibial nonunion” and “fixation”, were combined using the Boolean operator “OR” and MeSH terms.
Result: 318 abstracts were identified during the search and 10 were excluded; 308 full articles were fully reviewed, of which 295 were excluded; 13 studies matched the inclusion criteria and were included in this review, and 2 additional articles were identified during searching. A total of 652 patients from 15 studies are presented in this systematic review.
Conclusion: Radical debridement of the infected non-union is mandatory, and satisfactory bone and functional results were obtained. The Ilizarov technique proved to be the most useful method in the treatment of infected non-union of the tibia.
Keywords: Tibial; Nonunion; Infected; Ilizarov
Introduction
Tibial shaft non-union is considered the commonest type of long bone non-union; the aetiological factors include a high incidence of severe open fractures, high-energy trauma associated with vascular and soft tissue compromise, infection, bone devitalization and segmental bone loss [1-5]. In this review, the different methods for the treatment of infected non-union of the tibia and their results are analysed to identify the most successful technique with a satisfactory outcome.
Subjects and Methods
The study objective was defined and a comprehensive literature search was performed in SCI, PubMed, the Cochrane Library and Embase for the period between January 2000 and February 2019. Major data were statistically analyzed, including number of patients, management and external fixation time. Searches were performed in the MEDLINE, Life Science Citations, PubMed, Google Scholar and EMBASE Biochemical (http://www.embase.com/) databases, with no limits set during the search. Terms aimed at capturing the target diagnosis, such as “infected tibial nonunion” and “fixation”, were combined using the Boolean operator “OR” and MeSH terms.
Selection criteria
1) Prospective or retrospective studies discussing treatment of infected Tibial nonunion.
2) Journal articles, studies and papers are all involved.
3) Clinical studies within the last 18 years.
Exclusion criteria
i. Non-human studies and case reports.
ii. Cadaveric studies.
iii. Reviews, Commentaries and general discussion papers not presenting data on impacts.
Statistical analysis
The collected data were presented in suitable tables and illustrated in suitable figures. Data were analysed with the SPSS software package using suitable statistical tests. For all studies, patient selection, study inclusion and exclusion criteria, patient characteristics, procedure, protocol and outcomes measured were extracted using standardized data extraction forms. Outcomes were also extracted from each study. During data extraction, each included study was assessed for quality using the Cochrane Risk of Bias Checklist.
Results
318 abstracts were identified during the search and 10 were excluded; 308 full articles were fully reviewed, of which 295 were excluded; 13 studies matched the inclusion criteria and were included in this review, and 2 additional articles were identified during searching [6-20]. A total of 652 patients from 15 studies are presented in this systematic review (Table 1).
1) The studies included are retrospective and prospective. Outcome measures were reported in most of the studies according to the ASAMI score, which mainly comprises bone results and functional outcome. Some papers reported further data.
2) Study designs, study period and outcome measures are shown in (Table 2).
The type of bacteria causing the bone infection was not mentioned in several studies [7-9,11,13,15,18], while Madhusudhan TR, et al. [12] reported mixed bacterial growth (Table 3). Associated comorbidities were not mentioned in the majority of studies (Table 4); only Bakhsh K, et al. [6] reported smoking, diabetes, malnourishment and obesity as associated comorbidities.
Bone results and functional results
I. The criteria recommended by ASAMI were adopted to evaluate bone results and functional results in the studies [6,7,10,12,14-17,19,20]. Bone results were evaluated by 4 criteria: union, infection, deformity and limb-length discrepancy.
II. Functional results were evaluated by 5 criteria: active, limp, minimum stiffness (knee or ankle joint), reflex sympathetic dystrophy and pain.
III. Bone results were evaluated in 12 studies by ASAMI [6,7,10,12,14-17,19,20]. Random-effects meta-analysis showed the weighted frequencies of the excellent, good, fair and poor rates in bone results, which are listed in (Tables 5,6,7).
External fixation index
The external fixation index denotes the number of days the external fixator is attached to the bone per centimeter of length gained. Using conventional Ilizarov fixation, this index is typically thirty days per centimeter of length gained (Table 8). However, the rate differs based on variables such as patient age, osteotomy site and amount of lengthening [20,21]. It was not reported in all of the selected studies; where reported, it ranged between 1.2 and 15.7 [6,10].
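As a small illustration of how this index is computed (a sketch only; the day and length figures below are hypothetical, not drawn from the included studies):

```python
# External fixation index (EFI): days in the external fixator per cm of length gained.
def external_fixation_index(days_in_fixator, length_gained_cm):
    """Return the number of fixator days per centimetre of length gained."""
    return days_in_fixator / length_gained_cm

# Hypothetical example: 180 days in the frame to gain 6 cm -> 30 days/cm,
# matching the typical figure quoted above for conventional Ilizarov fixation.
print(external_fixation_index(180, 6))
```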
Complications
Pin-site infection, knee stiffness, K-wire loosening, recurrence of wound infection, regenerate fracture, malunion, paraesthesia, soft tissue impingement and mortality were reported. Pin-tract infection was the most common complication and was managed with antibiotics in all studies. Mortality was reported in one case by Khan, et al. [10]. Complications are listed in (Table 9). According to Bakhsh, et al., the associated soft tissue defects healed with soft tissue transport, VAC dressings and fasciocutaneous flaps [6].
Meta-analysis of bone results
Rates of excellent, good, fair and poor bone results are shown in Table 10 (Figure 1).
Discussion
Infected non-union of the tibia remains challenging, especially in the presence of bone and soft tissue defects; the Ilizarov external fixator has been shown to have a high rate of success in the treatment of such difficult cases [20]. Debridement of devitalized bone and soft tissue, antibiotic therapy and antibiotic-impregnated beads can lead to an acceptable cure rate in less virulent infections, while in severe bone infection radical debridement, a stable external fixator and bone transport or compression-distraction are mandatory to attain bone union [21,22]. The Papineau technique, tibiofibular synostosis and other bone substitutes have limited ability to reconstruct limb-length discrepancy and to correct limb deformity [22], in addition to a lack of limb movement [23].
Ilizarov and his followers have, since the 1950s, employed new biological techniques and a different system of external fixation to achieve union, correct deformity, eradicate infection and restore limb length, while maintaining limb function [22,23]. The Ilizarov apparatus is a very good tool for treating the large bone defects that result from radical debridement in infected non-union [22,23]. This can be done either by trifocal or bifocal osteosynthesis, or by gradual fibular transfer [24].
This is a systematic review of the treatment of infected nonunion of the tibia. The majority of the studies used the Ilizarov method. This systematic review included 15 studies, and we conducted a meta-analysis of 11 studies to evaluate the efficacy of the fixation method in the treatment of infected nonunion of the tibia. The poor rate in bone results and in functional results was 4.6% (95% CI, 0.012, 0.051; I² = 22%, P = 0.237) and 4.9% (95% CI, 0.057, 0.243; I² = 80%, P = 0.002), respectively. The data were not statistically heterogeneous. These results showed that patients with infected nonunion of the tibia treated by Ilizarov methods had a low rate of poor bone and functional results.
We performed a meta-analysis of complications in patients with infected nonunion of the tibia treated by the Ilizarov method. Statistical homogeneity was found for most of the complications (Tables 11,12). Pin-track infection is the most common complication of the Ilizarov methods, and significant statistical heterogeneity was found for this complication. The rate of pin-track infection was 3.6-100% among the studies included in our systematic review. Hence, we considered that careful pin care is the key to reducing this complication. The data for infected tibial nonunion can be found in (Tables 5,6). The poor rate in bone results and in functional results was 4.6% (95% CI, 0.012, 0.051; I² = 22%, P = 0.237) and 4.9% (95% CI, 0.057, 0.243; I² = 80%, P = 0.002), respectively. The rates of pin-site infection, knee stiffness, K-wire loosening, recurrence of wound infection, regenerate fracture, malunion, paraesthesia, soft tissue impingement and mortality were, respectively, 36.50%, 2.45%, 3.68%, 1.38%, 1.84%, 7.06%, 0.15%, 1.07% and 0.15%. These data were not statistically heterogeneous.
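For readers unfamiliar with how such pooled rates and I² values are produced, the sketch below shows a minimal DerSimonian-Laird random-effects pooling of proportions. It is illustrative only: the event counts are hypothetical placeholders, not the data extracted from the 15 included studies, and the published analysis may have used different software or transformations.

```python
# Minimal DerSimonian-Laird random-effects pooling of study proportions.
# The (events, patients) pairs below are hypothetical, for illustration only.
import math

studies = [(3, 45), (2, 60), (5, 80), (1, 30)]             # e.g. poor results per study

p = [e / n for e, n in studies]                             # study proportions
v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)]   # within-study variances
w = [1 / vi for vi in v]                                    # fixed-effect weights

fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, p))     # Cochran's Q
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                               # between-study variance

w_re = [1 / (vi + tau2) for vi in v]                        # random-effects weights
pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0         # heterogeneity statistic

print(f"Pooled rate {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}), I^2 = {i2:.0f}%")
```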
We also conducted meta-analyses of bone and functional results in our systematic review. High heterogeneity existed in several of the pooled data in our study, and we consider that the heterogeneity probably resulted from differing research quality, varying surgeons’ experience and the diversity of rehabilitation nursing. Failure to include non-English-language studies in our article could have resulted in missing data, and our estimates of effect size might have been biased; nevertheless, 15 studies were included in our article and they were not excessively affected by significant statistical heterogeneity. The data in the present review were extracted from observational studies, which are liable to both systematic and random errors. Therefore, more prospective randomized controlled trials are needed to overcome the limitations of this study.
Conclusion
In conclusion, our systematic review showed that patients with infected nonunion of the tibia treated by Ilizarov methods had a low rate of poor bone and functional results. Therefore, Ilizarov methods may be a good choice for the treatment of infected nonunion of the tibia.
To read more about this article: https://irispublishers.com/aphe/fulltext/treatment-of-infected-tibial-non-union-a-systematic-review-and-meta-analysis.ID.000508.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Annals of Public Health & Epidemiology - APHE
Use of Antibody/Antigen Diagnostic Testing and Face Coverings with Traditional Non-Pharmaceutical Intervention Measures to Mitigate Human-to-Human Coronavirus (COVID-19) Transmission at the Community Level
Abstract
In December 2019, the Health Commission of Hubei province, China reported a cluster of new cases of viral pneumonia later identified as a highly infectious novel human coronavirus that causes severe acute respiratory syndrome (SARS-CoV-2); the resulting disease was named Covid-19. In January 2020, the first confirmed case of Covid-19 was reported in the United States. As of this writing, Covid-19 has spread globally, with greater than 700,000 fatalities, and has infected over 21 million people around the world in just over 8 months. The United States is now the epicenter, with over 5 million reported cases and greater than 170,000 deaths. The Covid-19 pandemic has emerged as an unprecedented global crisis, overwhelming healthcare systems around the world. Public health authorities must rely on non-pharmaceutical intervention (NPI) measures to mitigate the spread of coronavirus transmission, as no vaccine or antiviral therapy is available. As the number of infectious disease cases continues to rise with increasing mortality, the authors recommend routine antibody/antigen diagnostic testing for the coronavirus not only of symptomatic persons, but of asymptomatic and presymptomatic individuals as well. A universal policy of routine use of face coverings when out in public should also be implemented, as it is now recognized that viral shedding can occur several days before the onset of clinical symptoms and last for days. In this article, we describe the public health strategies, and the challenges public health authorities may encounter, in working to flatten the curve of this 21st century global pandemic.
Introduction
‘Pandemic’: the word strikes fear because of the severity of the disease in terms of morbidity, mortality, and social and economic turmoil. Within 24-36 hours, an infectious disease can arrive on the doorstep of any densely populated city from a remote part of the world owing to air travel and globalization [1,2]. Undetected, a virulent pathogen can quickly overwhelm the healthcare system protecting its citizens. Several viruses caused pandemics in the 20th century (1957 Asian flu; 1968 Hong Kong flu; 1981 AIDS/HIV), but all pandemics are measured against the Spanish influenza pandemic of 1918-1919. It is estimated to have infected over 500 million individuals and killed 50-100 million people around the globe due to sustained community spread [3,4]. An estimated 675,000 deaths occurred in the United States. Mortality was highest in young adults, aged 24-40 years.
A common characteristic of pandemics is that they are facilitated by sustained transmission between individuals in a community and can spread rapidly over a short time causing relatively high mortality [5,6]. Compared to the 2003 SARS pandemic, Covid-19 is far more lethal with greater infectivity and mortality as asymptomatic and presymptomatic individuals are capable of spreading the virus in the community 2-3 days before the onset of clinical symptoms [7-9]. Therefore, it is uncertain if traditional NPIs can effectively mitigate transmission of the coronavirus. As the Covid-19 pandemic trajectory continues to rise with increasing mortality, the authors recommend additional non-pharmaceutical intervention measures such as antibody/antigen diagnostic testing for the coronavirus of both asymptomatic and presymptomatic individuals and the routine use of face coverings (masks) out in public to flatten the pandemic curve. In this article, we describe the additional and traditional non-pharmaceutical intervention (NPI) measures to contain the transmission of the coronavirus in the community. We also describe the unique challenges public health authorities will encounter while implementing NPIs in the community.
2003 SARS and COVID-19
The 2003 coronavirus that is the etiology of severe acute respiratory syndrome (SARS-CoV-1) in China killed 774 people across 8,096 reported cases and caused the first pandemic of the 21st century [1,10,11]. With no viral vaccine, the coronavirus outbreak was successfully contained with the traditional non-pharmaceutical public health interventions used to eradicate the 1918 influenza pandemic and other influenza pandemics of the 20th century.
In December 2019, multiple cases of a severe pneumonia with unknown origin were reported in Wuhan, the capital city of Hubei province China [12-15]. The pathogen was identified as a novel enveloped RNA beta-coronavirus that was designated severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is similar to SARS-CoV-1 but much more infectious. The virus rapidly spread throughout the world and became known as Covid-19. On March 11, 2020 the World Health Organization (WHO) declared Covid-19 a worldwide pandemic [16]. As of this writing, the infectious disease has spread globally, and greater than 21 million cases have been reported and over 700,000 people have died [17].
Patients infected with Covid-19 that are asymptomatic or mildly symptomatic and not seeking medical treatment are likely spreading the stealth virus throughout the community from respiratory droplets and direct surface contact [8,18,19]. This is why the Covid-19 pandemic has generated such urgency regarding the implementation of public health mitigation strategies. With no controlled scientific studies that document the effective use of NPI measures, public health has had to rely on historical analysis and observations of what proved successful in slowing down the spread and eventual eradication of a pandemic. An example is the use of geographic quarantine measures by a small town in Colorado and a naval station located on an island in San Francisco Bay during the 1918 influenza pandemic that resulted in minimal morbidity and mortality [20,21]. Use of simulation model studies based on contemporary public health data from the 2009 influenza pandemic were also studied to evaluate the effectiveness of NPIs in select cities of the United States [22]. The results of the simulation studies support the implementation of NPIs in controlling the spread of the contagion when an antiviral vaccine is not available.
Epidemiologic Characteristics of the Coronavirus Outbreak
The Covid-19 outbreak from Wuhan, China is similar to the 2003 coronavirus epidemic, in which zoonotic transmission from bats via the intermediate host, palm civets, in Guangdong, China was largely responsible for the 2003 global pandemic. As Covid-19 is more contagious than the 2003 SARS, with a basic reproductive number of 2.2 to 2.5, it rapidly spread from Wuhan to the entire country within 30 days, overwhelming China’s healthcare and public health system [14,23]. Such rapid disease transmission was due to shedding of high viral loads, which can occur 2-3 days prior to the onset of clinical symptoms, and to human-to-human transmission in the community [8,18,19,23]. Studies have demonstrated that initial high viral loads are associated with death [24].
Mitigating the transmission of any infectious disease that may progress to a pandemic requires an understanding of the transmission potential of the pathogen [23,25,26]. The basic reproductive number (R0) and the case fatality rate (CFR) are two important epidemiological parameters that allow clinicians and public health authorities to understand the viral dynamics and epidemiological characteristics of a pandemic [27,28]. One measure of viral infectivity is the basic reproductive number, which is characteristic of a pathogen. It is the average number of secondary cases produced by a primary infectious case in a susceptible population without intervention [29]. Calculating the R0 gives an estimate of the transmissibility of the infectious disease from an infected individual and helps determine the selection of mitigation strategies to contain and slow down disease transmission. An R0 greater than 1 represents a disease that will be difficult to slow down and will spread rapidly in the community. An R0 of less than 1 will not progress to a pandemic and will decay. The median R0 of influenza has been reported to be between 1.5 and 2. Measles has an R0 of 15 and is very contagious. The R0 of SARS-CoV-1 was approximately 1.7 to 1.9, while the R0 of Covid-19 is estimated to be between 2 and 4, with a median of 2.79 [23,26,30]. Guan et al (2020) estimate that an R0 between 2 and 3 corresponds to a high pandemic potential, where each infected individual can spread the virus to two or three other individuals [31]. Therefore, compared to SARS-CoV-1, Covid-19, with a much higher reproductive number, is more transmissible, with greater potential for spread of disease.
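To make the R0 threshold concrete, the short sketch below compares expected new infections per generation for a decaying outbreak (R0 < 1), a SARS-CoV-1-like value, and the Covid-19 median quoted above; it assumes a fully susceptible population and no interventions, which is a deliberate simplification.

```python
# Expected new infections per transmission generation from a single index case,
# assuming a fully susceptible population and no interventions (simplified model).
def cases_per_generation(r0, generations=5):
    return [round(r0 ** g, 1) for g in range(generations + 1)]

for r0 in (0.9, 1.9, 2.79):   # decaying outbreak, SARS-CoV-1-like, Covid-19 median
    print(f"R0 = {r0}: {cases_per_generation(r0)}")
# An R0 below 1 dies out, while an R0 of 2.79 exceeds 160 new cases by the fifth generation.
```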
The case-fatality rate (CFR) is the number of deaths in persons who tested positive for the disease divided by the number of confirmed cases [32]. Public health interventions rely on early CFR estimates to determine the intensity, timing and duration of intervention strategies during outbreaks [33]. Patients infected with the 2003 SARS virus had a CFR between 7% and 17% (average 9.6%) in the areas most severely affected by the virus (774 deaths in 8,096 reported cases) involving 26 countries [34]. Early estimates of the Covid-19 case fatality rate in the city of Wuhan varied from 4.3% to 14.6% [13,35]. The average fatality rate reported by the Chinese Center for Disease Control and Prevention was 2.3% (1,023 deaths among 44,672 confirmed cases). However, when the CFR was stratified by age group, the fatality rate increased with increasing age, the presence of comorbidities and initial high viral loads [36].
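As a worked check of the definition above, the snippet below reproduces the 2.3% figure from the death and case counts quoted in the text:

```python
# Case-fatality rate (CFR) = deaths among persons testing positive / confirmed cases.
deaths = 1023        # deaths reported by the Chinese CDC, as quoted above
confirmed = 44672    # confirmed cases in the same report
cfr = deaths / confirmed
print(f"CFR = {cfr:.1%}")   # ~2.3%, matching the average fatality rate cited above
```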
Differences in the age structure of the affected population reveal the true burden of disease. High viral load is considered an independent risk factor for a worse prognosis [37]. Older adults had reported comorbidities such as cardiovascular disease, pulmonary disease, and diabetes mellitus. The CFR for adults older than 60 years was 3.6%, while adults over 70 years had a reported CFR of 8.0%. Adults older than 80 years had a highly lethal CFR of 14.8% [31]. The high viral loads observed in the older patient population are associated with a higher CFR.
Clinical characteristics of Covid-19
Estimates of the Covid-19 incubation period range between 1-14 days but may be as long as 24 days [38]. Incubation period is defined as the time from exposure of the pathogen to the development of signs and symptoms of disease. Although the estimated median incubation time is approximately 5.1 days for Covid-19, it is now recognized that asymptomatic or mildly symptomatic individuals are capable of spreading the virus in the community 2-3 days before the onset of clinical symptoms [8,18,19,23,39]. In comparison, infectiousness of the 2003 SARS outbreak occurred 7-10 days after onset of symptoms that resulted in patients seeking treatment before they could continue to spread the virus [1].
The clinical presentation of both SARS-CoV-1 and Covid-19 is similar to that of the influenza virus and includes the following: persistent fever, chills, fatigue, myalgia, malaise, dry non-productive cough, headache, dyspnea, sputum production, arthralgia, sore throat, rhinorrhea, diarrhea and nausea/vomiting. Such non-specific clinical findings that mimic influenza make the diagnosis difficult and may be responsible for the early spread of the Covid-19 virus [8,24,31,40]. However, newly reported clinical findings that have raised alarming concern are the neurologic manifestations. During the SARS-2003 pandemic, Tsai et al (2004) first reported neurologic complications consisting of axonal peripheral neuropathy or myopathy with elevation in creatine kinase levels in four patients three weeks after the first symptoms of SARS [41]. More recently, Mao et al (2020) reported that 78 (36.4%) of 214 hospitalized patients diagnosed with Covid-19 had various neurologic manifestations involving the central and peripheral nervous systems and skeletal muscles [42]. Older patients with comorbidities were more prone to present with neurological manifestations, including ischemic stroke, seizures, ataxia and depressed levels of consciousness. Oxley et al (2020) reported five cases of large vessel stroke in patients under 50 years of age who were all diagnosed with Covid-19 [43].
Loss of smell and taste occurring early in the course of Covid-19 has also been reported [42,44,45]. However, anosmia with SARS in 2003 was first reported by Hwang (2006) [46]; in that single case report, the loss of smell occurred about 3 weeks after the onset of clinical symptoms. Given these reports of new-onset neurologic symptoms, Covid-19 should be included in the differential diagnosis when patients present with such neurological manifestations.
Non-Pharmaceutical Interventions (NPIs)
The objectives of community mitigation in the face of a pandemic are to slow the peak of the pandemic, decrease the total number of cases to reduce morbidity and mortality, ease the surge on healthcare infrastructure capacity, and buy time to develop an effective vaccine or new antiviral therapies (Figure 1) [47,48]. In addition to implementing simultaneous large-scale community NPIs to control viral transmission, the promotion of personal and environmental protective measures should not be underestimated. Household-based public health interventions include hand washing with soap and water (or alcohol-based hand sanitizers), respiratory etiquette (covering the mouth and nose when coughing or sneezing), the use of face coverings, and the use of protective eyewear [18,40]. Aerosol and fomite transmission of the coronavirus is possible, as the virus can remain viable and infectious in aerosols for hours and on different surfaces for days [18,24,49]. Environmental cleaning of surfaces and objects is recommended, as it proved effective in controlling the spread of the 2009 H1N1 influenza pandemic along with good respiratory and hand hygiene [50,51].
Universal wearing of face coverings (masks) in the community
The controversy and confusion surrounding the wearing of face coverings in the community was sparked by the World Health Organization (WHO) recommending against face coverings because of a lack of evidence of protection against the coronavirus [52,53]. The few studies on the efficacy of face covering (mask) use in preventing transmission of infectious disease from respiratory droplets and aerosols across entire communities are inconclusive. Further, no studies have evaluated the effectiveness of face mask use during a pandemic.
In a meta-analysis, Xiao and colleagues (2020) concluded that face mask use produced no significant reduction in laboratory-confirmed influenza transmission [54]. Other studies found only limited evidence that wearing a face mask in households, in the presence of an individual suffering from influenza, or at mass social gatherings prevents the spread of the influenza virus [55,56]. In a systematic review by Brainard et al (2020) examining whether wearing a face mask with other protective measures would prevent the transmission of respiratory illnesses such as coronavirus, influenza and tuberculosis, the authors concluded that the evidence is not sufficiently strong to support the widespread use of face masks as a protective measure against Covid-19 [57].
Although there are no studies assessing the effectiveness of face coverings during a pandemic, this does not establish that they offer no protection against viral transmission. Use of face coverings is a form of source control that may protect individuals from respiratory droplets or aerosols. The US Centers for Disease Control and Prevention now advises the wearing of cloth face coverings in public [58]. It has recently been discovered that the coronavirus is shed at high concentrations from the nasal cavity 2-3 days before the onset of symptoms and continues to be shed for at least one week after symptom onset [7,8,18,19,23]. Therefore, asymptomatic or presymptomatic individuals could be potential super-spreaders of the coronavirus, as they are indistinguishable from healthy individuals in the community. Moreover, Booth et al (2013) demonstrated that use of surgical face masks resulted in a six-fold reduction in exposure to the influenza virus in aerosols and recommended the use of face masks [59]. Lastly, in a study by Leung et al (2020), use of surgical face masks prevented the transmission of both coronavirus and influenza from individuals who were symptomatic [60].
Routine diagnostic testing for coronavirus antibody and antigen
The debate over what type of diagnostic test to implement, and whether to test both asymptomatic individuals and patients who exhibit signs and symptoms of coronavirus infection, has delayed an effective response to flatten the pandemic curve and decrease the reproductive number. Further, knowledge of diagnostic tests specific for the coronavirus is still evolving and is contributing to the delay in mitigating this novel emerging infectious disease. Currently, the gold standard for diagnosis of early active RNA viral infection is the polymerase chain reaction (PCR) test. PCR testing can identify viral genetic material in sufficient amounts from a specimen obtained from the nasopharynx or other respiratory tract specimens, including saliva [61,62]. However, PCR testing may fail to detect viral infection if sufficient amounts of the virus are not obtained, and sampling outside the critical window of viral replication may produce false-negative results. A false-negative result will allow infected individuals to circulate in the community and spread the virus [63].
Coronavirus infection can also be detected indirectly by evaluating the host immune response to Covid-19 infection using serological assays. Zhang and colleagues (2020) reported that seroconversion occurred 7 to 12 days after the onset of symptoms [64]. Understanding viral dynamics is important to evaluate the penetration of the coronavirus in the community. Most importantly, serological testing may identify which individuals have developed immunity and are protected from infection [65]. Detection of antibodies, especially IgM, which is produced early after infection, can be used concurrently with PCR testing to increase sensitivity and accuracy in detecting the coronavirus [66]. Therefore, what is urgently needed and not yet available is a portable, rapid point-of-care test for SARS-CoV-2-specific IgM and IgG that can identify infected persons in a matter of minutes and does not require highly trained laboratory personnel.
Also not yet available, but urgently needed, is a test to detect viral antigens that will identify which individuals are actively infected with the coronavirus and must go into isolation to prevent viral transmission in the community. This is critical for screening individuals in the community, including healthcare staff, to determine who is actively infected [67]. Antigen detection is advantageous because coronavirus infections cannot be diagnosed by clinical symptoms alone and viral shedding has been shown to occur before the onset of symptoms [7,8,18,19,23]. In a study by Bruning and colleagues (2018), coronavirus was identified as the etiology of infection, and antigen levels were measured, in six patients who developed symptoms of respiratory disease [61]. They also observed a correlation between the severity of clinical symptoms and antigen levels.
Opponents of routine diagnostic testing are concerned that testing large populations will overwhelm healthcare facilities and the services needed to treat more severely ill patients [68]. In addition, individuals who do not have the coronavirus may be exposed at testing sites to persons who do harbor it. Other concerns involve misinterpretation of test results. As the coronavirus incubation period is 4-5 days, individuals tested during this window may wrongly conclude from a negative result that they are not infected. Conversely, a positive test reflects the presence of viral RNA but does not indicate whether the virus is viable; such uncertainty about infectivity will still require isolation of that individual from the public [69].
However, individuals who have mounted an immune response to the coronavirus that causes Covid-19 may be immune and unlikely to spread the virus to others in the community. Using testing to guide the easing of social distancing, isolation and quarantine restrictions makes it an integral part of public health and of restoring civil liberties [70]. Surveillance is a time-tested public health strategy to protect communities from emerging infectious diseases [5,6]. Surveillance is important not only in identifying disease outbreaks that could progress to a pandemic but also in containing and eradicating the threat. Outbreak monitoring relies on surveillance of symptoms, such as those described in this paper, before a diagnosis is reached.
Social distancing is a community mitigation strategy designed to decrease non-essential gatherings between individuals in order to reduce viral transmission and the reproductive number of the virus [48,71,72]. Such measures may flatten the pandemic curve by increasing the physical distance between individuals and decreasing population density. Examples of social distancing include the cancellation of musical concerts, sporting events and open food markets, and the closing of schools and businesses. Such interventions, when implemented early during an outbreak, were associated with lower peak death rates and reduced viral transmission [5,73]. Because educational classes at all levels and business can be conducted online over the Internet, having students learn at home and employees work from home should be seriously considered to control the spread of the pandemic [51]. In a systematic review by Ahmed et al (2018), social distancing in the non-healthcare workplace was shown to decrease the risk of influenza transmission when a separation of 6 feet or greater is maintained [74].
Social distancing is not without controversy and challenges for community acceptance, as public health policy is created to protect the entire community population rather than the individual. Social distancing may not be sustainable as schools and businesses close and unemployment continues to rise [75]. Reluctance or lack of compliance with social distancing measures will increase the rate of disease transmission; this has been observed when social distancing measures were implemented after the contagion had already entered the community. Conversely, implementing social distancing measures too early may result in poor compliance due to fatigue and economic hardship [76,77]. As there are no clear criteria for relaxing social distancing restrictions and other NPI measures, including isolation and quarantine, social distancing must be tailored to individual-level information that relies on surveillance and antigen or antibody testing for the coronavirus. The importance of this policy is that mass testing of individuals will avoid broader community-level restrictions on civil liberties [78].
Closure of schools as an NPI has proven effective in mitigating the spread of influenza pandemics [79,80]. It was observed that early implementation of school closures reduced the peak of the epidemic [81]. However, school closings before or after the influenza peak showed no difference in decreasing the spread of the influenza epidemic [81]. During a pandemic, if the R0 is low, school closings may help reduce the reproductive number to below 1. In contrast, if the R0 is high, indicating sustained viral transmission, school closure may not have the same impact [75]. What remains unclear is the specific duration of school closure and its effect on eradication of the disease. It was observed that the longer schools were closed, the longer the delay in the peak of the influenza outbreak; therefore, a specific duration of school closure is not recommended.
Isolation and quarantine are two time-tested public health mitigation strategies that attempt to slow community viral transmission while trying not to breach civil liberties and human rights [48,82,83]. Isolation is the separation and restriction of movement of individuals who have become infected with a contagious disease to prevent transmission to others. For isolation to be effective, two epidemiological factors must be considered: the number of secondary infections generated by the index case and the degree of viral transmission that occurs before the onset of symptoms. Viral transmission before the onset of symptoms can be prevented by tracing contacts of infected individuals and placing them in a 14-day quarantine [10,50,51]. Isolation procedures can be implemented at home or in a designated facility [48,51,74].
Individuals who are asymptomatic or minimally symptomatic should remain in isolation until they are free of symptoms of respiratory infection (fever and cough). However, isolation is not without personal challenges. Isolation of infected individuals to decrease viral transmission requires cooperation and adherence to the intervention, which may be socially inconvenient and could result in loss of economic viability [84]. Quarantine can be undertaken voluntarily or imposed by legal authority; it refers to the restriction of movement and separation of individuals who have been exposed to an infectious disease, before they become ill and can transmit the disease to others [78,82,83]. Quarantine can occur in the individual's own home or in a designated facility. Although considered an extreme NPI measure, quarantine has been shown to contain the spread of secondary cases from the index case, decreasing the reproductive number (R0) to less than 1 [85,86]. In many parts of the world, especially the United States, restricting the movement of healthy individuals for weeks or months may not be considered socially acceptable, as it infringes on individual civil liberties [78,83].
With cities or communities designated “hot spots”, community-level quarantine (the “cordon sanitaire”) is an extreme interventional measure that involves closing off an entire city, community or neighborhood from external contacts to increase social distancing and reduce disease transmission [2,48,87]. To control the spread of Covid-19, the Chinese government effectively used a cordon sanitaire with draconian social control, armed police and electronic surveillance of its people to close off the city of Wuhan and Hubei province, preventing 60 million people from entering or leaving. Pan et al (2020) reported that the cordon sanitaire, together with other NPI measures, effectively decreased coronavirus transmission and reduced the R0 to less than 1 [88]. As the Covid-19 pandemic continues to spiral out of control, it may be necessary to quarantine “hot spots” to keep disease transmission from reaching other regions of the country. In the United States, the use of a cordon sanitaire will no doubt encounter extreme opposition and resistance. Further, there are no broad national quarantine powers that would allow for the closing off of large cities in the name of public health [82,83], and such a mass quarantine of both infected and non-infected individuals would violate public health statutes. More acceptable, less restrictive NPI measures would be surveillance, contact tracing with isolation, and social distancing while monitoring rates of infection and the reproduction number [89].
Contact tracing with isolation and quarantine has always been part of the strategy to mitigate disease transmission by identifying persons exposed to an infected individual who may need medical evaluation [25,90]. A close contact is defined as an individual who did not wear personal protective equipment (PPE) while in face-to-face contact with a confirmed case for 15 minutes or longer after the onset of symptoms. As many individuals harboring the coronavirus are asymptomatic, isolation of infected individuals and contact tracing may not be possible because this cohort cannot be identified [18]. Therefore, successful contact tracing and shortening the delay between the onset of symptoms and isolation are important, as infected individuals can continue to transmit the virus while circulating in the community.
It has been shown that with airborne infections many of a network's contacts may not be traceable [91]. If the disease is spreading rapidly in the community and contacts are identified only slowly, contact tracing may not be able to keep pace with pandemic spread. Under these conditions, the reproductive number will remain greater than 1 and the outbreak will continue to progress as the number of infected cases increases [27,92]. Although non-pharmaceutical intervention strategies were effective in controlling the spread of the 1918 influenza virus, Hatchett et al (2007) found that reintroduction and transmission of the virus returned and extended for many months when containment strategies were relaxed too early in many cities (Figure 1) [92]. In a study comparing 17 cities, most of the NPIs were maintained for between 2 and 6 weeks. Cities that maintained their interventions beyond 6 weeks did not experience a second wave of the influenza pandemic. In contrast, cities that relaxed their interventional strategies too early experienced not only a second pandemic wave but also greater population mortality than cities that maintained their interventions beyond six weeks [71,93]. Therefore, a critical issue that must be determined by public health authorities is when to relax the interventions implemented to control the spread of viral transmission. Without a vaccine or antivirals available to protect individuals in the community, individuals will remain vulnerable to the pandemic virus. With such uncertainty, communities may have to maintain their interventions longer than 8 weeks [92].
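To make the role of the reproductive number concrete, the sketch below projects case counts generation by generation under a constant effective R. The starting count and the R values of 1.5 and 0.8 are hypothetical illustrations only, not estimates from the studies cited above.

```python
# Illustrative sketch: with R > 1 cases grow each generation, with R < 1 they shrink.
# All numbers are hypothetical, chosen only to show the qualitative behavior.
def project_cases(initial_cases: float, r: float, generations: int) -> list:
    """Project expected new cases per generation under a constant effective R."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

uncontrolled = project_cases(initial_cases=100, r=1.5, generations=8)  # e.g. interventions relaxed too early
controlled = project_cases(initial_cases=100, r=0.8, generations=8)    # e.g. interventions maintained

print([round(c) for c in uncontrolled])  # grows: 100, 150, 225, ...
print([round(c) for c in controlled])    # decays: 100, 80, 64, ...
```

The same qualitative point underlies the 1918 city comparisons discussed above: the outbreak only recedes while the effective reproduction number is held below 1.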
Conclusion
Reliance upon NPIs that include the routine use of surgical face coverings and antibody/antigen testing of both symptomatic and asymptomatic individuals should begin immediately to contain the spread of the novel human coronavirus. If these intervention strategies fail or are relaxed too early in the course of this pandemic, the trajectory of the pandemic will continue to rise, overwhelming our healthcare infrastructure and leading to greater mortality and economic damage in our communities.
To read more about this  article: https://irispublishers.com/aphe/fulltext/use-of-antibody-antigen-diagnostic-testing-and-face-coverings-with-traditional-non-pharmaceutical.ID.000507.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
Using Cellphone While Driving Among Saudi Drivers in Saudi Arabia, Cross Section Study 2018
Authored by Samar A Amer*
Abstract
Background: The use of cell phones has increased in recent years; however, using them while driving leads to driver distraction, which increases the risk of accidents, a main cause of death in the Kingdom of Saudi Arabia (KSA).
The aim: To measure the level of knowledge, the practice and the risks of using a cell phone while driving and its related hazards in KSA, in order to decrease the prevalence of its use while driving.
Method: A cross-sectional study targeting 1,320 randomly selected Saudi drivers, stratified to represent the 5 main regions (404 center, 386 west, 212 east, 232 south and 86 north), during October 2017 to January 2018. The data were collected through a well-structured online questionnaire and analyzed using appropriate tests.
Results: 1,003 (75.9%) of participants used a cellphone recurrently while driving and 1,076 (81.5%) used it for calling; 44.3% were addicted to its use, 85.1% used it by hand and 90% used it when driving alone. 59.1% of drivers did not use a cellphone holder or Bluetooth due to unavailability. 82.8% had had an accident and 86% had been exposed to danger. 97.9% had good knowledge but only 12.1% had good practice. There was a significant association between using a cell phone while driving and the risk of accidents (p<0.05).
Conclusion: The central region had the highest prevalence of cell phone use while driving. Most drivers have good knowledge but still use their cell phones while driving, reflecting poor practice.
Keywords: Cell phone use; Driving; Accident; KSA
Introduction
Road injuries are the main cause of death in the Kingdom of Saudi Arabia (KSA) [1]. Although driving is the most important mode of transportation in developed countries, most drivers are not fully aware of the driving rules, which increases the probability of hazards [2]. In recent years the importance of mobile phones has increased, and they can be used in both positive and negative ways [3]; in Jeddah, for example, the prevalence of cell phone use while driving was estimated at 98.2% [4].
Moreover, the prevalence of cell phone use while driving differs between countries: the United States (US) at 69% [5] and the United Arab Emirates (UAE) at 80% [6] show a higher prevalence than the United Kingdom (UK) at 21% [5] and Australia at 39% [7]. Using a cell phone while driving leads to driver distraction, defined as ''the diversion of attention from activities critical for safe driving towards a competing activity; it is a behavior that jeopardizes the safety of drivers, passengers, and non-occupants alike'' [8]. Cell phone use while driving can take drivers' eyes off the road (visual distraction), their minds off driving (cognitive distraction) and their hands off the steering wheel (physical distraction) [9]. Distraction-related crashes killed 3,092 people; in 408 (13%) of these crashes at least one driver was using a cell phone when the crash occurred [8]. In addition, 78% of accidents in KSA are related to cell phone use while driving [10]. One study showed that the risk of an accident increases dramatically with cell phone use while driving: 72% of drivers believed a crash was very likely to be caused by texting/browsing on a cell phone, while 41% considered it very unlikely to result from a handheld voice call [7]. Moreover, studies show that using a cell phone while driving is as dangerous as driving while intoxicated at the legal limit [11].
Most governments have developed laws on cell phone use while driving. For instance, in KSA the Ministry of Interior applies fines of not less than 150 Saudi Riyals (S.R.) and not more than 300 S.R. for using a cell phone while driving [12], and in the UK the penalty for using a handheld phone is £200 while driving a car and £2,500 while driving a lorry or bus [13]. Globally, road traffic accidents have become the 11th leading cause of death [14]. Moreover, road traffic accidents produce a high mortality rate in all Gulf countries [15,16], and in KSA road injuries are the leading cause of death [1]. Lately, cell phone use while driving has become more common among drivers and has caused injuries, disability and accidents.
Using cell phone while driving
Due to technological change, cell phone services have become increasingly significant throughout the world, largely because of the software and features these devices offer (social media, GPS, games, radio, internet and downloads) that make daily life easier [17].
According to Sanbonmatsu D, Strayer D, et al. (2015) [18], a study conducted in Salt Lake City, Utah with 77 undergraduate participants to examine the impact of multitasking on performance monitoring and assessment showed a significant association between using a cell phone while driving and more serious driving errors (p = .008). People who used a cell phone while driving were also less aware than people who did not. These results indicate weak awareness among drivers about traffic safety [18].
A cross-sectional survey of 695 respondents in 2017 aimed to determine the prevalence of seat belt use and distracted driving behaviors among health-care providers in Saudi Arabia and to compare them with non-health-care providers. According to Jawadi H, et al, most of the drivers used a cell phone while driving, with the highest rate being for answering the cell phone while driving (98.5%).
Also, most of the drivers texted messages while driving (74.3%), but only 28.7% of accidents were caused by text messaging [19]. An open-ended interview study with a sample of 228, conducted in the US by Bergmark R, Gliklich E, Guo R and Gliklich R [20], described the development and preliminary evaluation of the Distracted Driving Survey (DDS) and score in 2016. As mentioned before, most drivers prefer to use the cell phone to text messages while driving; the study showed that only 12.7% read messages while driving at any speed, 15.6% while lowering their speed and 10.1% when stopped [2].
Recently, the cell phone has become a necessity of life and has spread around the world, with both advantages and disadvantages to its use. A study by Billieux J, Maurage P, Lopez Fernandez O, Kuss D and Griffiths M [21] conducted in 2015 showed that enhancement of health education in terms of physical fitness, healthy food and improved behavior was an advantage of cell phone use, while use while driving, addiction to the cell phone and health damage were disadvantages [22]. In 2015 a qualitative study of 30 drivers was conducted in Pennsylvania by McDonald C and Sommers M to describe teen drivers' perceptions of cell phone use while driving in order to inform future interventions to reduce risky driving. The study showed that most adolescents know the risks and distraction of using a mobile phone while driving but still use it for text messages, calls and social applications while driving [23].
Prevalence of using cellphone while driving among drivers
The prevalence of cellphone use while driving differs between countries [5]. In most countries around the world the prevalence is above 50%; some countries have a lower prevalence, but none of the Gulf countries do [4,6,20,24]. A cross-sectional study was conducted by Jawadi A, et al. in 2017 [19], titled (Seat belt usage and distracted driving behaviors in Saudi Arabia: Health-care providers versus Non-health-care providers). The sample comprised 695 Saudi respondents living in Saudi Arabia, aged 18 years and above; 51.2% were health-care providers and the rest were non-health-care providers. Data were collected through an online questionnaire distributed through the emails of Saudi health-care providers and through social media using snowball sampling. One of the results showed that the prevalence of cell phone use while driving was 99.1% among health-care providers and 89.8% among non-health-care providers (total prevalence = 95.9%) [20]. On the other hand, Trespalacios O, King M, Haque M and Washington S conducted a cross-sectional study in Queensland (2017) that aimed to investigate characteristics of usage, risk factors, compensatory strategies in use and characteristics of high-frequency offenders of mobile phone use while driving. The study used an anonymous online questionnaire distributed across social media, local press releases, and electronic mail through Queensland University of Technology mailing lists.
Public face-to-face dissemination was also used. The sample size was 484 drivers: 49.8% were aged 17-25 years and 50.2% were aged 26-65 years, and 65.1% of them were women. The results showed that 49% used a cell phone while driving [7]. Moreover, in 2016 a cross-sectional study was done by Ahamed H and Hafian M in Jeddah, Saudi Arabia with a sample size of 882. The aim of the study was to investigate the effects of using a cell phone while driving. The sample included male drivers aged above 17 years. The instrument was a 34-item closed-format questionnaire. As a result, the prevalence of cell phone use while driving was 98.2% [4]. In 2015 Rasool F, et al. [24] conducted a cross-sectional study aimed at raising awareness about road traffic accidents and their causes and consequences among medical students at the Arabian Gulf University (AGU) in Bahrain.
The sample comprised 200 students, Bahraini or non-Bahraini, aged between 20-24 years. The data instrument was a structured questionnaire designed to be self-completed by the participants. The prevalence of cell phone use while driving (49%) was shown as part of the results [25].
An observational study conducted in Texas, US (2015) by Wilkinson M, Brown A, Moussa A and Day R aimed to assess the 3-year prevalence of cell phone use (CPU) among drivers and the characteristics associated with its use in six cities across Texas from 2011-2013. CPU and driver characteristics were observed for 1,280 motor vehicles at major intersections in Dallas, Austin, San Antonio, El Paso and Brownsville, at the respective University of Texas medical and academic campuses. The main result showed an overall CPU prevalence of 18.7% [26].
Another study evaluated relevant factors related to the causes of road traffic accidents (RTAs) among drivers in Abu Dhabi, UAE, in 2014. A quantitative questionnaire survey, developed and piloted in the UK and UAE, was used with a sample of 600 drivers aged between 18 and 65 years; 49% of the questionnaires were returned. The prevalence of cell phone use while driving was 80% in both males and females [6]. In addition, Al-Rees H, et al. conducted a cross-sectional study in Oman in 2013 that aimed to investigate driving behavior as indexed by the Driving Behaviour Questionnaire (DBQ). A sample of 1,003 participants was taken from an Omani university; 632 of them were students and 371 were staff, aged 17-58 years. The instrument of this study was the standard DBQ questionnaire. The results showed that 92% of the drivers used a cell phone while driving [24].
Demographic factors associated with using cell phone while driving
A study was done in the Kingdom of Saudi Arabia in 2017 by Ahamed and Hafian to investigate the effects of mobile phone usage while driving. The sample comprised 882 drivers aged over 17 years, surveyed by a 34-item closed-format questionnaire to gather information on their mobile phone use while driving as well as their risk perception. The survey covered eight variables representing demographic characteristics of the participants: nationality, marital status, education, work status, age, driving experience, the time the participant got a smartphone and conversation with passengers. Part of the results showed statistically significant differences in mobile phone use while driving according to nationality (P=0.03), with the frequency of use while driving being higher for Saudi than non-Saudi drivers. There were also statistically significant differences according to marital status (P=0.04) and work status (P=0.04). However, there were no statistically significant differences in mobile phone use while driving according to age, education level or driving experience [4].
Moreover, a study by Rudisill and Zhu in 2017, conducted in the US, examined whether universal bans on hand-held calling while driving were associated with fewer roadside-observed hand-held cell phone conversations across drivers of different ages (16 to ≥60 years), sexes, races, ruralities and regions. The main data source was the 2008-2013 National Occupant Protection Use Survey (NOPUS), and 263,673 drivers were included in the analysis. The results show that drivers aged 16-24 years talked on hand-held devices more than other age groups regardless of whether a hand-held ban existed, and sub-group differences were seen by sex: females talked on hand-held phones more than males irrespective of whether a hand-held ban was in place. For drivers' race, no sub-group differences were noted (p = 0.30) [27].
A cross-sectional study was conducted by V Kumar, Dewan and D Kumar [27] among 848 school-going adolescents (15-19 years) over one year from 2014 to 2015 in India, with the aim of assessing the health risk behavior of rural and urban male adolescents concerning injuries, violence and sexuality. The participants were selected from government and private schools using a multistage simple random sampling technique. Data were collected using the self-administered 2011 Youth Risk Behaviour Survey (YRBS). In the results, almost a quarter of both rural and urban adolescents (22%) reported using a mobile phone while driving on up to 10 days during the past 30 days (statistically significant, p<0.05). The study concluded that adolescents frequently reported high-risk behaviour regardless of place of residence and type of school [28].
In addition, Hammoudi, Karani and Littlewood conducted a study in 2014 aimed at evaluating relevant factors related to the causes of road traffic accidents (RTAs) among drivers in Abu Dhabi, United Arab Emirates (UAE). The participants were 600 drivers in Abu Dhabi aged between 18 and 65 years, who received a questionnaire survey designed to obtain information regarding their behaviors and attitudes; it was piloted in the UK and UAE. The study shows that using a mobile phone while driving is significantly related to nationality (p=0.04). However, there was no statistically significant relationship between using a mobile phone while driving and other demographic factors such as gender, age, monthly income or education [6].
Knowledge, practice and attitude toward using cell phone while driving
Trespalacios O, King M, Haque M and Washington S conducted a cross-sectional study in Queensland (2017) examining the self-reported behavioural and attitudinal characteristics of mobile phone distracted drivers. The study used an anonymous online questionnaire distributed via social media and electronic mail through Queensland University of Technology students, as well as public face-to-face dissemination. The sample size was 484 drivers: 49.8% were aged 17-25 years and 50.2% were aged 26-65 years, and 65.1% of them were women. The results show that 45% of the participants answered a ringing phone, compared to 28% of participants who used a handheld phone; the correlation between answering a ringing phone and using a handheld phone was statistically significant (p < 0.05, n = 117). On the other hand, 34% of participants texted or browsed and 39% looked at the phone for more than 2 seconds; the association between these two activities was significant (p < 0.001, n = 109). However, 12% of participants needed more convincing to believe that talking is dangerous, compared to 4% who needed a lot of persuasion to believe in the danger of texting and browsing. With respect to differences in driving behavior, 79% of the participants were likely to lower their driving speed, 70% to increase the distance from the car in front and 44% to scan and check the road while texting or browsing. Furthermore, 33% reported increasing their control over the steering wheel while talking. Regarding self-regulatory behavior, 9% of respondents mentioned that they reminded the caller that they were driving or shortened the conversation [7].
A cross-sectional study conducted in the United States (2017) by Rudisill TM and Zhu M [26] aimed to investigate the association between hand-held cell phone while driving (CPWD) laws and roadside-observed hand-held cell phone conversations across driver sub-groups and regions. The percentage of drivers who were talking on a hand-held phone was 5.1%, and for over 72% of the drivers a hand-held phone was not present. Also, 85% of respondents were wearing seat belts. Drivers who were younger, female, African American or from Southern states were more likely to be engaging in hand-held cell phone conversations [27].
Moreover, in 2016 a cohort study was done by Bergmark RW, Gliklich E, Guo R and Gliklich RE [20] in Boston with a sample size of 228. The aim of the study was to develop a reliable self-reported survey for assessing levels of cell-phone-related distracted driving associated with viewing and typing activities, and to validate it in a higher-risk population of drivers aged 24 years or younger. Most of the respondents wrote text messages never or rarely, while 16% said they wrote text messages some of the time they drove and 7.4% said they wrote text messages most times or every time they drove. When asked about their speed when writing text messages, 9.7% said they wrote text messages at any speed and 24.1% said they wrote text messages at low speed or in stop-and-go traffic. Furthermore, reading text messages was more common: 71.5% of respondents said they read text messages while driving (29% rarely, 27.2% sometimes, 13.2% most of the time, and 2.2% every time they drove). Compared with writing texts, 12.7% read text messages at any speed, 15.6% at low speeds and 10.1% in stop-and-go traffic. Reading and writing email and browsing social media were less common, and 74.6% of respondents had used maps on a phone. Regarding their knowledge or beliefs, 36.4% of participants thought that texting while driving is never safe, 27.6% said rarely, 20.2% said sometimes, 8.8% said most of the time and 7.0% said it is always safe [21].
Another study was conducted by Zhou R, Yu M and Wang X in China (2016). The study aimed to assess drivers' compensatory beliefs, address the frequency of engagement in different self-regulatory actions for mobile calls or messaging, examine the effects of basic demographic measures on drivers, and determine whether mobile phone use while driving can be predicted by drivers' compensatory beliefs. The results show that 17.14% of participants sent text messages, 23.57% made calls, 29.29% read text messages and 40% answered calls (p < 0.05). However, 90% of respondents who answered calls while driving said they reminded the caller that they were driving or shortened the conversation. With regard to driving behaviors, more than 80% of drivers slowed down or increased the following distance when using a cell phone, 77% reported changing lanes less frequently, and 60% pulled over more frequently (p < 0.05) [29].
According to the study Adolescent Cellphone Use While Driving: An Overview of the Literature and Promising Future Directions for Prevention by Delgado M, Wanner K and McDonald C (2016) [29], which aimed to provide an overview of the incidence, crash risk, risk factors for engagement and the effectiveness of current mitigation strategies, 97% of adolescents know that texting and driving is dangerous; however, they still use cell phones for talking, texting and social media apps while driving [30]. On the other hand, Parr M, Ross L, McManus B, Bishop H et al. [30] conducted a study of self-reported personality factors and the Questionnaire Assessing Distracted Driving in the United States (2013-2014), aiming to determine the impact of personality on distracted driving behaviors. Teens reported significantly more instances of interacting with the phone (t (50.61) = 4.25, p < .0001) and more instances of texting while driving compared to older adults (t (46.175) = 3.87, p = .0003). No significant differences were found between teens and older adults in the number of instances of talking on the phone while driving [31].
Risk of using cell phone while driving
Risk of an accident
An observational study aimed to examine, in a naturalistic driving setting, the dose-response relationship between cell phone use while driving and the risk of a crash or near-crash. One hundred and five (105) participants in the UK were observed every day for one year; each month 4 trips were chosen at random to classify driver behavior, and every 3 months the relationship between phone use while driving and overall crash and near-crash rates was assessed for each period. This survey, published in 2015, found that the risk of a near-crash/crash was 17% higher when drivers used a cell phone, an increase mostly attributable to using the phone for answering and calling, which nearly triples the risk (relative risk = 2.84) [32].
Oviedo-Trespalacios O, King M, Haque M and Washington S conducted a cross-sectional study in 2017 titled (Risk factors of mobile phone use while driving in Queensland: Prevalence, attitudes, crash risk perception, and task-management strategies). The sample comprised 484 drivers aged 17-65 years; 34.9% were males and 65.1% females. The data instrument was an online questionnaire disseminated via social media (Twitter, Facebook, and blogs), local press releases, electronic mail through Queensland University of Technology mailing lists, and public face-to-face dissemination. The results showed that around 72% of the participants reported a high perceived crash risk for mobile phone use for browsing/texting (p-value < 0.001) [7].
Health problems
A systematic review of 29 papers in 2014 by Cazzulino, Burke, Muller, Arbogast, & Upperman [32] aimed to determine the factors that influence young drivers to engage in cell phone use while driving (CPWD; defined here as talking on the phone only) and texting while driving (TextWD), and to suggest a basis for prevention campaigns and strategies that can effectively prevent current and future generations from using cell phones while driving. The results showed a relationship between cell phone use while driving and psychological factors in young drivers [33].
In 2017 a qualitative study with 123 injured drivers was conducted by Brubacher J, Chan H, Purssell E, Tuyp B, Ting D and Mehrnoush V to determine the prevalence of driver-related risk factors and subsequent outcomes in drivers involved in minor crashes. The results showed that most of the drivers still suffered from health problems at 6 months: 53.3% had not recovered completely and 46.7% had not returned to their previous activities. In addition, 16.7% were using a cell phone while driving and, unfortunately, drivers continued to take risks after being injured [34]. According to Martin J, Kauer SD and Sanci L, a cross-sectional study in Australia in 2016 that aimed to examine the types of road risks and associated behaviors in 901 young patients attending general practice showed that most young people engaged in at least one road risk or more. Moreover, there was no significant association between road risks and mental health risk (p-value 0.260) [35].
Strategies and policies of using phone while driving
Traffic violations for cell phone use while driving carry different laws and regulations between countries, and studies show that strict laws can help reduce the rate of phone use while driving, although some drivers still use the phone despite government efforts to limit it [36]. In KSA, fines of not less than S.R. 150 and not more than S.R. 300 apply to those using a cell phone while driving [11]. In the United Arab Emirates, using a phone while driving, or any other distraction, attracts an AED 400 fine and four black points [37]. In Bahrain, a driver pays a fine of 50 BD when making or receiving a phone call hand-held [38]. Also, in Jordan, hand-held phone use while driving is a traffic violation (15 JOD) [39]. In the United Kingdom it is illegal to use a phone while driving: you will be breaking the law if you use it when stopped at traffic lights, when queuing in traffic, to make or receive calls, to send or receive picture and text messages, or to access the internet.
If you are caught using a hand-held mobile phone or similar device while driving or riding:
1) You'll get an automatic fixed penalty notice of three penalty points and a fine of £60.
2) If your case goes to court, you may face disqualification on top of a maximum fine of £1,000. Drivers of buses and goods vehicles face a maximum fine of £2,500.
3) If you reach six or more points within two years of passing your test, your licence will be taken off you. You'll need to re-sit your driving test to get your licence back [13].
Rational
In KSA, road injuries are the leading cause of death, accounting for 9% of deaths [1]. Drivers using cell phones are approximately 4 times more likely to be involved in accidents than drivers who are not [39]; this is considered a serious issue because 78% of accidents are caused by phone use while driving [11]. Moreover, accidents cost most countries about 3% of their gross domestic product. They also affect individuals; for instance, they can lead to back, neck or brain injuries and/or disability [39]. Measuring knowledge, attitude and practice among drivers can help identify solutions to decrease the prevalence of cell phone use while driving.
Aim
To decrease the prevalence of cell phone use while driving and its related hazards in KSA through the following objectives among Saudi male drivers:
I. Determine the prevalence of using cell phone while driving.
II. Assess the knowledge, attitude and practice of cell phone use while driving among Saudis.
III. Identify the relationship between cell phone use while driving and demographic variables among Saudis.
IV. Identify the relationship between cell phone use while driving and the risk of accidents among Saudis.
Methodology
A cross-sectional study targeting Saudi male drivers was conducted in KSA during October 2017 to January 2018. The study included Saudi male drivers aged 17-60 years and excluded non-Saudi drivers, drivers older than 60 years and drivers with mental disease.
Sampling Methods
A cluster sampling method was used. The sample was distributed across 5 regions of KSA (South, East, West, North and Center), taking the major city of each region: Riyadh from the Center, Mecca from the West, Tabuk from the North, Aseer from the South, and the East region. After that, purposive sampling was used because access to participants was difficult. The sample was recruited from hospitals, universities, crowded streets and companies.
Sample size
The sample size was 660 participants, calculated using the OpenEpi website with a population of 6,026,479 Saudi men aged 17-59 years in KSA, a 95% confidence interval, and an expected prevalence of 78% of accidents caused by cell phone use while driving [11]. The 660 participants were doubled to 1,320 to increase the power of the study and the sample size and to satisfy the requirements of the cluster method, then proportionally divided into the 5 regions: 404 participants in the center, 386 in the west, 212 in the east, 232 in the south and 86 in the north.
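The article does not state the margin of error or design effect entered into OpenEpi. The sketch below uses the standard sample-size formula for estimating a proportion, with an assumed 5% absolute margin of error and an assumed design effect of 2.5 for the cluster design; under those assumptions it roughly reproduces the reported 660 participants.

```python
import math

def sample_size(p, d=0.05, z=1.96, deff=1.0, population=None):
    """Cochran/OpenEpi-style sample size for estimating a proportion.

    p          -- expected prevalence (0.78 here, from reference [11])
    d          -- absolute margin of error (assumed 0.05; not stated in the paper)
    z          -- z-score for the confidence level (1.96 for a 95% CI)
    deff       -- design effect for cluster sampling (assumed 2.5; not stated)
    population -- finite population size for the correction (optional)
    """
    n0 = (z ** 2) * p * (1 - p) / d ** 2   # simple random sample size
    if population is not None:             # finite population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0 * deff)

n = sample_size(p=0.78, d=0.05, deff=2.5, population=6_026_479)
print(n)      # ~660, in line with the calculated sample before doubling
print(n * 2)  # 1,320 after doubling to increase the power of the study
```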
Data collection instrument
Data were collected using an online survey questionnaire in Arabic, built with the Google Forms website; participants received the link from the data collector. The questionnaire comprised 16 questions classified into 4 parts. The first part included questions on demographic information, the second part contained questions measuring knowledge of cell phone use while driving, the third part contained questions measuring the practice of cell phone use while driving, and the last part included questions on the risks of cell phone use while driving. Before data collection, a pilot study among 10 drivers at the Ministry of Health (MOH) was conducted to validate the questionnaire.
Statistical analysis
Data were analyzed using SPSS (version 20) and presented in a suitable form. A p-value ≤ .05 was considered significant, with a 95% confidence interval and 80% study power.
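The Results section reports chi-square tests of association (for example, between cell phone use while driving and accidents). A minimal sketch of such a test is shown below; the counts are hypothetical placeholders, not the study data.

```python
# Hypothetical 2x2 table: rows = recurrent vs. rare phone use while driving,
# columns = accident vs. no accident. Counts are illustrative only.
from scipy.stats import chi2_contingency

observed = [
    [180, 820],  # recurrent users: [accident, no accident]
    [30, 290],   # rare users:      [accident, no accident]
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A p-value <= .05 would be reported as a significant association,
# matching the significance criterion stated above.
```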
Ethical considerations procedure
Participants gave informed consent before answering the questionnaire. The questionnaire contained no sensitive or private questions, and participants' identities were anonymous. In addition, approval was obtained from the ethics committee of the research center at King Fahad Medical City.
Results
Drivers aged 17-30 accounted for the highest proportion (49.1%), and most of the drivers had an academic level of education (75.7%). Most of the drivers used the phone by hand (85.1%) and only 14.7% used a holder; furthermore, 90.8% of them used the cell phone when driving alone. Moreover, the percentages of bad and moderate practice were close (42.2% and 45.7%).
The prevalence of cellphone use while driving and the distribution of recurring use were examined for each region. The overall prevalence was 76% classified as recurring use and 24% as rare use; the highest prevalence of recurring use was 24% in the Center and the lowest was 5% in the North. Cell phone use while driving was associated with age and education level. In all age groups, the highest percentage used a cell phone 'sometimes': 53.2% among those aged 17-30 years, 54.7% among those aged 30-<45 years and 50.7% among those aged 45-60 years, while the highest rate of rare use (40.4%) was among those aged 46-60 years. Moreover, 24% of drivers with an academic level of education used the cell phone 'always', higher than those with lower education, whereas 'sometimes' use was higher among the lower-educated group (54.4%). In addition, there was a significant association between using a cell phone while driving and age/education level, which was highly significant for education level (P-value = .000).
Regarding the relationship between cell phone use while driving and the risk of accidents, 82.5% of those who answered 'sometimes' and 76.1% of those who answered 'always' reported accidents caused by cell phone use, so there is a significant association between using a cell phone while driving and accidents (p-value .001). There is also a highly significant inverse association between phone use while driving and exposure to danger (p-value .000): among drivers who used a cell phone 'sometimes', 68.4% had an accident caused by it and only 31.6% did not. Most of the drivers had good knowledge (97.9%), yet only 17.8% of accidents were attributed to cell phone use while driving, meaning there was no significant association between accidents and knowledge. Similarly, 31.7% of dangers were caused by cell phone use among those with good knowledge and 14.8% among those with poor knowledge, with no significant association between dangers and knowledge.
Participants with bad practice had a higher percentage of accidents and dangers compared to those with good or moderate practice (20.3% vs. 16.5%/16% and 36.2% vs. 20.9%/30%). The chi-square test revealed a significant difference in dangers between the different levels of practice (p=.001), but no significant difference in accidents. Of those who used the phone hand-held, 200 (17.8%) were exposed to an accident, a significant relationship (p=0.019). The highest percentage of cell phone use was for calling (81.5%) and the lowest was for games (2.6%). Drivers who used the phone for calling were more susceptible to accidents caused by cell phone use (13.5%) and to dangers (24.9%), with a significant association between calling and accidents caused by cell phone use (p-value 0.05). There was also a highly significant association between accidents and dangers caused by cell phone use for reading/writing messages. Furthermore, 50.8% of drivers used location services while driving. Six main causes were identified for the high prevalence of cellphone use while driving: cell phone addiction had the highest percentage (44.3%), while 21% believed the traffic violation is not enforced and 22.5% believed it is not dangerous to use a cellphone while driving; family obligations were the lowest at 4.1%, and 2.4% gave other causes. 59.1% of drivers did not use a cellphone holder or Bluetooth while driving because it was not available in their cars, which was the most common reason, while the least common was fear of cellphone theft (2.9%); 10.9% gave different answers as to why they did not use a holder or Bluetooth while driving.
Discussion
In this study, 80.2% of drivers aged 17-45 used a cell phone while driving recurrently, while 40.4% of those aged 46-60 used it rarely, and there was a highly significant association between cellphone use while driving and age (p-value 0.000). A similar study conducted in the United States showed that drivers aged 16-24 talked on a handheld device more than other age groups. Another study conducted in Saudi Arabia in 2017 showed no statistically significant differences in mobile phone use while driving according to age and education. The prevalence of cellphone use while driving showed that 76% of drivers were classified as recurring users and 24% as rare users. Compared to a study conducted in Saudi Arabia in 2017, the prevalence of cellphone use while driving in that study (95.9%) was higher than in this study because the question measuring prevalence was ranked differently: in this study the options were (always, sometimes, rarely), while (yes, no) was used in the other study. On the other hand, a study with a lower prevalence than this study was conducted in Texas, where the main result showed an overall prevalence of cellphone use while driving of 18.7%.
Among drivers with bad practice, 42.2% stayed on the same path at the same speed when using the phone; only 12.1% showed good practice by stopping at the side of the road, while 45.7% lowered their speed. In this study there was a high percentage of bad and moderate practice, meaning drivers need to improve their practice, because there was a significant association between dangers caused by cell phone use and practice. A study conducted in the United States in 2016 showed that 10.1% stopped when reading a message while driving and 12.7% read at any speed; another study conducted in Washington reported that 79% of drivers lowered their speed when driving. There was a highly significant association between cell phone use while driving and the risk of accidents/danger (p-value 0.001/0.000), showing a positive correlation: a high level of cellphone use dramatically increases the proportion of accidents and danger. Although 85.1% of drivers used the phone hand-held, there was no significant association between accidents/danger caused by phone use while driving and hand-held use. In Queensland, 72% of drivers showed a high perceived risk of an accident caused by using a cellphone for browsing/texting (p-value < 0.001).
On the other hand, participants were classified as having good or poor knowledge of cell phone use while driving: 97.7% had good knowledge, yet 76% of drivers still used the phone, and there was no significant association between accidents/dangers caused by cell phone use and knowledge. Likewise, a study conducted in 2016 showed that 97% of adolescents know texting and driving is dangerous yet still use the cell phone while driving. Approximately 81.5% of drivers made calls while driving, which had a significant association with accidents caused by cell phone use (p-value 0.05); the second most common use was location services (50.8%), with no significant association with accidents or danger because navigation is less distracting. The third was reading/writing messages (48.5%), which had a highly significant association with accidents and danger caused by cellphone use, because the main focus is on the phone, which distracts the driver. Also, 26.3% of drivers took pictures and only 2.6% played games while driving. A study conducted in China showed that 17.14% of participants sent text messages, 23.57% made calls, 29.29% read text messages and 40% answered calls, meaning that answering calls was the most common type of use while driving.
The high prevalence of cell phone use while driving in KSA could be explained by drivers' addiction to cell phone use (44%). In addition, 22.5% of respondents think that using a cell phone while driving is not dangerous, which may be one reason most drivers do not use the cell phone with a holder or Bluetooth. However, the belief that cell phone use while driving is not dangerous is not the main reason: 59.1% use the phone hand-held rather than a holder or Bluetooth because these are not available in their cars. A study conducted in the United States showed that drivers aged 16-24 years talked on hand-held devices more than other age groups.
Limitations
The questionnaire was administered face to face at the beginning of the study and was later converted to an online questionnaire because the data collectors were distributed across different regions of Saudi Arabia; some were able to administer it face to face and some were not.
Conclusion
Using a cell phone while driving is a prevalent national problem, with the highest prevalence in the central region, mainly for calling and reading/writing messages, and it is significantly associated with an increased risk of accidents. Its prevalence varies across the regions of Saudi Arabia. Although the majority of drivers have good knowledge, they still show bad practice.
Recommendation
The recommendations include:
1) The Ministry of Health should cooperate with car companies to provide a phone holder or Bluetooth in all cars.
2) Increase the penalties for traffic violations and make them more effective deterrents among drivers.
3) Conduct further research to study the factors associated with using a cell phone while driving.
4) Improve drivers' practice by educating people about the rules for using a cell phone while driving.
5) Provide a "drive mode" in all cell phones as a technological solution.
Acknowledgement
Special thanks to the Saudi Ministry of Health (MOH) for allowing us to conduct our research and for providing any assistance requested. The authors thank lecturer Dr. Huny Bakry and T. Lama Albashir, who were more than generous with their expertise and precious time. Finally, we would like to thank the participants from each region (Center, West, East, North, South); their enthusiasm and willingness to participate made the data collection of this research possible.
To read more about this article: https://irispublishers.com/aphe/fulltext/using-cellphone-while-driving-among-saudi-drivers-in-saudi-arabia.ID.000506.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
Challenges in Diagnosis of Cardiovascular Disease in Patients with Diabetes: Reflection on the 2019 ESC/EASD Guidelines for Diabetes, Pre-Diabetes and Cardiovascular Diseases
Authored by Arif Al Nooryani*
Abstract
With the increasing incidence of diabetes mellitus worldwide, particularly in some areas of the Middle East where almost 50% of patients presenting with acute coronary syndrome have diabetes mellitus, the question of the best diagnostic approach to detect coronary artery disease early, before complications develop, is of paramount importance. In this mini review, we present and discuss the main recommendations, directions and current issues in the diagnosis of cardiovascular disease according to the new 2019 ESC/EASD Guidelines for diabetes, pre-diabetes and cardiovascular diseases.
Introduction
Diabetes mellitus (DM) per se carries at least a 2-fold risk for cardiovascular disease, with some areas in the Middle East particularly vulnerable to a high prevalence of DM and cardiovascular disease (CVD) as a complication of DM [1]. Although Type 2 DM is far more common than Type 1 DM, both patient populations carry an adverse prognosis, particularly severe in young-onset female patients with Type 1 DM, underlining the need for early diagnosis, CVD risk modification, strict adherence to non-drug and drug therapy, and systematic follow-up. In fact, patients with known DM and a baseline fasting blood glucose of ≥7 mmol/L are at the highest risk of coronary artery disease (CAD), whereas patients with DM and regulated glucose of <7 mmol/L and patients without known DM but with a fasting blood glucose concentration of ≥7 mmol/L carry the same risk for CAD [1]. CVD, particularly cardiac death and myocardial infarction accompanied by stroke, is by far the most severe complication of DM, and the diagnostic modalities to address this issue have been discussed in the new ESC/EASD guidelines for diabetes, pre-diabetes and cardiovascular diseases from 2019 [2].
According to the new 2019 ESC/EASD guidelines for diabetes, pre-diabetes and cardiovascular diseases [2], patients are stratified into three CVD risk categories: very high risk, high risk and moderate risk of future adverse events. The very-high-risk group (10-year risk of CVD death >10%) includes individuals with DM and cardiovascular disease (CVD), or DM with target organ damage such as proteinuria or renal failure (estimated glomerular filtration rate (eGFR) <30 mL/min/1.73 m2), patients with DM and three or more major risk factors or a DM duration of >20 years, and patients with Type 1 DM at the age of 40 years with early onset (i.e. at 1-10 years of age), particularly females. Young patients (<35 years) with Type 1 DM of short duration (<10 years), and patients with Type 2 DM aged <50 years with a DM duration of <10 years and without major risk factors, are at moderate risk (<5% 10-year risk of CVD).
Yet most patients remain in the high-risk group (10-year risk of CVD death 5-10%), consisting of patients with a DM duration of more than 10 years, without previous CVD, without target organ damage, and with at least one additional risk factor such as age, hypertension, dyslipidemia, smoking or obesity [3]. According to the guidelines [2], the screening of these patients remains challenging, as the guidelines recommend only a resting ECG (Class I), whereas noninvasive functional imaging (radionuclide myocardial perfusion imaging, magnetic resonance imaging, or physical or pharmacological stress echocardiography) or CT angiography may only be considered (Class IIb) in asymptomatic high-risk patients. In addition, the only other Class I recommendation refers to routine assessment of microalbuminuria as an indicator of the risk of developing renal dysfunction or future CVD.
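A minimal sketch of the three-level stratification described above is given below, assuming simplified inputs; the argument names and conditions encode only the criteria quoted in this text and are not the guideline's full algorithm.

```python
# Sketch of the three CVD risk categories described above (2019 ESC/EASD).
# Only the criteria named in this text are encoded; argument names are
# illustrative and the real guideline contains additional nuances.
def cv_risk_category(dm_type, age, dm_duration_years, n_major_risk_factors,
                     has_cvd, has_target_organ_damage, early_onset_t1dm=False):
    # Very high risk: established CVD, target organ damage, >= 3 major risk
    # factors, DM duration > 20 years, or long-standing early-onset Type 1 DM.
    if (has_cvd or has_target_organ_damage
            or n_major_risk_factors >= 3
            or dm_duration_years > 20
            or (dm_type == 1 and early_onset_t1dm and age >= 40)):
        return "very high (10-year risk of CVD death > 10%)"
    # Moderate risk: young patients with short DM duration and no major risk factors.
    if ((dm_type == 1 and age < 35 and dm_duration_years < 10)
            or (dm_type == 2 and age < 50 and dm_duration_years < 10
                and n_major_risk_factors == 0)):
        return "moderate (10-year risk of CVD death < 5%)"
    # Everything else falls into the high-risk group.
    return "high (10-year risk of CVD death 5-10%)"

# Example: DM duration > 10 years, one additional risk factor, no prior CVD.
print(cv_risk_category(dm_type=2, age=62, dm_duration_years=12,
                       n_major_risk_factors=1, has_cvd=False,
                       has_target_organ_damage=False))
```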
The addition of circulating biomarkers for CV risk assessment has limited clinical value and is not recommended by the guidelines (Class III) [2,3]. The reason for this recommendation is that, in asymptomatic patients with DM, measurement of C-reactive protein or fibrinogen provides only minor incremental value to current risk assessment [2,3]. The addition of hsTnT to conventional risk factors has not shown incremental discriminative power in this group [3]. In individuals with Type 1 DM, elevated hsTnT was an independent predictor of renal decline and CV events [4], whereas assessment of the prognostic value of N-terminal pro-B-type natriuretic peptide (NT-proBNP) in an unselected cohort of people with DM (including known CVD) showed that patients with low levels of NT-proBNP (<125 pg/mL) have an excellent short-term prognosis [5]. Therefore, and despite some prognostic significance, routine clinical assessment of cardiac biomarkers is not recommended for CVD risk stratification in any patient group with DM [2].
Similarly, carotid ultrasound intima-media thickness, once popular in risk evaluation, is not recommended for screening of CV risk [2], whereas assessment of carotid and/or femoral plaque burden with arterial ultrasonography should be considered as a risk modifier in asymptomatic patients with DM. Thus, arterial ultrasonography remains one of the imaging tests that should be performed in asymptomatic patients with DM, not for stratification but for comprehensive consideration within the diagnostic algorithm.
Non-invasive estimation of the atherosclerotic burden, based on the coronary artery calcium (CAC) score, can also be performed in asymptomatic patients for risk assessment. In fact, patients with DM have a higher prevalence of coronary artery calcification than non-DM individuals [6]. A CAC score of 0 (Agatston score) is associated with a favorable prognosis, whereas incremental CAC scores of 1-99 (minimal to mild), 100-399 (moderate), and ≥400 (severe calcification) are associated with a substantially higher relative risk of mortality of 25-33% [3]. Therefore, the CAC score may be considered, according to the guidelines [2], as a risk modifier in the CV assessment of asymptomatic patients at moderate risk.
Most of the controversies and challenges regarding the diagnosis of CAD in asymptomatic patients concern the evaluation of myocardial ischemia with noninvasive functional testing and/or noninvasive imaging of the coronary arteries. Stress testing with myocardial perfusion imaging or stress echocardiography allows the detection of myocardial ischemia, particularly the silent form, which is more prevalent in patients with DM [7-9]. Randomized trials evaluating the impact of routine screening for CAD in asymptomatic patients with DM and no history of CAD have shown no differences in outcome (cardiac death and unstable angina) between those who underwent stress testing or CT angiography and those who did not [9-13]. In fact, four randomized trials (DIAD, DYNAMIT, FACTOR-64, DADDY-D) [9,11-13], including from 520 up to 1123 patients, showed no significant decrease in the rate of cardiac events, except for the last, the DADDY-D study [13], which demonstrated a significant decrease in cardiac events in the subgroup of patients over 60 years undergoing routine exercise stress testing. In addition, a study by Faglia et al. [14], also using exercise stress testing or stress echocardiography, demonstrated a better outcome in patients undergoing functional testing for myocardial ischemia. Taken together, the studies showed obvious disparities in testing modality, patient population, the rate of invasive coronarography following positive testing (15-93%!), and treatment strategy following testing (usually left to the discretion of the treating physician), whereas the annual rate of major adverse cardiac events was very low, ranging from 0.6-1.9%. In fact, this annual rate of adverse events corresponds to the moderate- to high-risk group of asymptomatic patients with DM.
In addition, a meta-analysis including 3299 asymptomatic subjects with DM showed that non-invasive imaging for CAD did not significantly reduce the event rates of non-fatal MI (relative risk 0.65; p=0.062) or hospitalization for HF (relative risk 0.61; p=0.1) [10]. Accordingly, routine screening for CAD in asymptomatic DM is not recommended [2]. However, CTCA or functional imaging (radionuclide myocardial perfusion imaging, stress cardiac magnetic resonance imaging, or exercise or pharmacological stress echocardiography) may be considered in asymptomatic (presumably high-risk) patients with DM for screening of CAD (Class IIb), whereas stress testing or CT angiography may be indicated in very-high-risk asymptomatic individuals (with peripheral arterial disease (PAD), a high CAC score, proteinuria, or renal failure) [2,15].
The 2019 ESC guidelines for the management and treatment of patients with chronic coronary syndrome [16] proposed a unified diagnostic algorithm guiding revascularization in patients with anginal symptoms, without any relevant differences for patients with diabetes. In brief, myocardial revascularization is now strongly based on functional evaluation of coronary stenosis, unless the coronary stenosis is critical, defined as more than 90% luminal stenosis, or in the case of poor left ventricular ejection fraction (EF <35%). In addition, in the case of clear previous evidence of myocardial ischemia corresponding to the territory of an intermediate coronary lesion, revascularization is indicated; however, in patients with multivessel coronary artery stenoses, invasive functional testing is warranted and should be performed to interrogate each coronary lesion of intermediate significance. Regarding invasive functional parameters, the guidelines recommend both fractional flow reserve (FFR) and the instantaneous wave-free ratio (iFR), with cut-off points of 0.80 and 0.89, respectively, without any particular notion about possible interactions in patients with diabetes or differences between FFR and iFR. In fact, previous large randomized studies (FAME, iFR SwedeHeart, Define Flair) have shown no difference in outcome between patients with and without DM [17-19].
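The following sketch illustrates, under the simplifying assumptions stated in the comments, how the thresholds quoted above (critical stenosis >90%, EF <35%, FFR 0.80, iFR 0.89) could be combined into a single decision rule; it is an illustrative simplification of the text, not a clinical tool and not the guideline's algorithm.

```python
# Sketch of the revascularization thresholds quoted above: critical stenosis
# (>90% luminal narrowing) or poor EF (<35%) bypasses functional testing,
# otherwise FFR <= 0.80 or iFR <= 0.89 flags the lesion as significant.
# Illustrative only; the real guideline pathway has many more considerations.
def lesion_is_significant(stenosis_percent, ejection_fraction, ffr=None, ifr=None):
    if stenosis_percent > 90 or ejection_fraction < 35:
        return True                      # revascularization considered without functional testing
    if ffr is not None and ffr <= 0.80:  # fractional flow reserve cut-off
        return True
    if ifr is not None and ifr <= 0.89:  # instantaneous wave-free ratio cut-off
        return True
    return False

# Example: intermediate 70% stenosis, preserved EF, positive FFR.
print(lesion_is_significant(stenosis_percent=70, ejection_fraction=55, ffr=0.78))  # True
```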
On the other hand, some studies focused on the diabetic population have shown that the outcome of patients with and without DM may not be the same; i.e., DM patients with negative FFR had an inferior outcome, with an almost 2-times-higher event rate [20] than non-diabetic patients. However, none of these studies was sufficiently powered to evaluate differences in outcome between diabetic and non-diabetic patients. Regarding the comparison of functional parameters, although there was no difference in outcome between the FFR and iFR groups, the iFR group consistently demonstrated less stenting [18,19]. On the other hand, the data on iFR in diabetic patients are limited; iFR is not a surrogate of FFR but incorporates the hemodynamic status of the microcirculation, which is particularly vulnerable in DM patients.
Conclusion
With the emerging role of CT angiography and advanced functional imaging techniques (magnetic resonance imaging, strain, coronary flow reserve) on the one hand, and exciting new therapeutic drug options on the other, the diagnosis of CAD is particularly challenging in moderate- and especially high-risk diabetic patients, and relies on the capacity of the health care system, as well as imaging capacity and expertise, bearing in mind the prevalence of diabetic patients in the population. On the other hand, diabetic patients with known CVD and symptomatic patients are per se a high-risk group and should promptly undergo functional imaging testing and invasive evaluation with functional interrogation of intermediate coronary stenoses. The threshold for revascularization remains the same as in non-diabetic patients, including comprehensive evaluation of each patient.
To read more about this article: https://irispublishers.com/aphe/fulltext/challenges-in-diagnosis-of-cardiovascular-disease.ID.000505.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
The Inside Story of Coronavirus Pandemic
Authored by Ricardo Gobato*
Abstract
Coronaviruses are RNA viruses of the order Nidovirales and the family Coronaviridae. The average incubation period for coronavirus infection is 5 days, with an interval that can reach up to 16 days. Influenza H1N1, H5N1, SARS, MERS, Ebola and coronavirus are all zoonotic viruses, that is, they have been transmitted to humans by animals. Based on the numbers of infected persons (83,000, 151,767, 167,518 and 173,344 on March 1, 16, 18 and 19, 2020, respectively) released by the WHO (World Health Organization), the numbers of dead d and infected i confirmed, according to the data released daily on the network, rise exponentially from the initial numbers d₀ of dead and i₀ of infected initially confirmed: d = d₀·e^(0.049·t) and i = i₀·e^(0.0409·t), respectively, where t is the number of days, for t = 1 to n. Although the development and production of the potential vaccine have been incredibly rapid, its evaluation will take considerable time. All participants will be followed for 12 months after the second dose to collect the data that researchers initially need to find out whether it is safe and effective.
Introduction
Coronaviruses are RNA viruses of the order Nidovirales and the family Coronaviridae. The subfamily consists of four genera: Alphacoronavirus, Betacoronavirus, Gammacoronavirus and Deltacoronavirus. Alphacoronaviruses and Betacoronaviruses only infect mammals; Gammacoronaviruses and Deltacoronaviruses infect birds and can also infect mammals. The coronavirus was isolated for the first time in 1937. However, it was in 1965 that the virus was described as a coronavirus, due to its profile under microscopy resembling a crown, as proposed by Tyrrell as a new genus of virus [1-5].
SARS-CoV, MERS-CoV and the COVID-2019 virus are highly pathogenic Betacoronaviruses responsible for causing respiratory and gastrointestinal syndromes. In addition to these three, there are four other types of coronavirus that can induce disease in the upper respiratory tract in immunocompromised individuals, as well as affect children, young people and the elderly. All coronaviruses that affect humans are of animal origin [1-5]. The average incubation period for coronavirus infection is 5 days, with an interval that can reach up to 16 days. The transmissibility of patients infected with SARS-CoV begins on average 7 days after the onset of symptoms. However, preliminary data on the new coronavirus (COVID-19) suggest that transmission may occur even without the appearance of signs and symptoms [1-5].
Development
So far, there is not enough information on how many days before the onset of signs and symptoms an infected person starts transmitting the virus. Influenza H1N1, H5N1, SARS, MERS, Ebola and coronavirus are all zoonotic viruses, that is, they have been transmitted to humans by animals.
The more people on the planet, the closer we live to each other. A world population of 7.7 billion people and geometric growth mean more people in smaller spaces and, therefore, a greater risk of exposure to disease-causing pathogens. Currently, about three out of four new diseases are zoonotic.
Our worldwide demand for meat is increasing and animal production is expanding as different parts of the world become richer and develop a taste for a diet rich in animal protein. The world is more connected than ever, but we still don't have a global health security system capable of responding to a threat at its source.
Containing an outbreak depends on the government of the country where it originated, and here a failure is evident. Planet Earth has rich biological diversity, and viruses can cope with all types of ecosystems and climates; they are even able to survive through mutation [6-64].
According to Chinese scientists, the pangolin, a small mammal at risk of extinction, may be the animal that transmitted the new coronavirus to man. After testing about 1,000 samples of wild animals, the scientists determined that the genomes of the virus sequences in pangolins were 99% identical to those of the patients. The analysis below is based on the numbers of infected persons (83,000, 151,767, 167,518 and 173,344 on March 1, 16, 18 and 19, 2020, respectively) released by the WHO (World Health Organization).
The numbers of dead d and infected i confirmed, according to the data released daily on the network, rise exponentially from the initial numbers d₀ of dead and i₀ of infected initially confirmed, as given in Eq. (2) and Eq. (3), respectively, where t is the number of days, for t = 1 to n:

d = d₀·e^(0.049·t)   (2)

i = i₀·e^(0.0409·t)   (3)
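As a check on the quoted infection coefficient, the sketch below fits a log-linear model to the WHO case counts listed above; the fitted daily rate comes out close to the paper's 0.0409. This is our own illustration of the formula, not the author's code.

```python
# Sketch reproducing the infection growth coefficient quoted above from the
# WHO counts in the text (83,000; 151,767; 167,518; 173,344 cases on
# March 1, 16, 18 and 19, 2020). A log-linear least-squares fit recovers
# a daily exponential rate close to the paper's 0.0409.
import numpy as np

days  = np.array([0, 15, 17, 18])                     # days since March 1, 2020
cases = np.array([83000, 151767, 167518, 173344])     # confirmed infections (WHO)

rate, log_i0 = np.polyfit(days, np.log(cases), 1)     # fit ln(i) = rate*t + ln(i0)
print(f"estimated growth rate: {rate:.4f} per day")   # ~0.041, i.e. i = i0 * exp(0.041*t)
```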
Under this situation, the dollar has soared and stock exchanges are oscillating. There is a rush on markets for financial security, where someone wins and someone loses. Money does not exist, only financial speculation and virtual numbers, where the one who commands is behind a keyboard, typing, manipulating, dictating the rules of a virtual war for economic power, while the human population, as mere spectators, manipulated by the system of which they are part, are thrown from side to side in the struggle for survival in the face of the global system.
It is not feasible for the market to eliminate a population, because if it does, there is no consumption. Nor is it in the interest of the pharmaceutical giants, or the giants of manufactured products, to definitively solve a problem or cure a disease. The economy cannot stop; its flow must be continuous, in one direction, like entropy. The system must control the markets and the population. But in an economic versus bacteriological war, if the whole system loses control, the virus will dominate; yet it will not eliminate the entire population, otherwise it would have no means to spread.
A vaccine has already been obtained and is being tested. The vaccine cannot cause Covid-19 and does not contain the virus, as is the case with some other vaccines. Instead, it contains a small piece of genetic code called mRNA, which scientists extracted from the virus and then expanded in the laboratory. In this case, the mRNA encodes the viral protein “spike”, which is vital for the coronavirus to gain access to human cells. The researchers hope that the vaccine will stimulate the immune system to attack the virus, preventing the development of the disease.
The mRNA-1273 vaccine was not tested in mice before the start of clinical trials in humans, an incredibly rare occurrence that has proved controversial. Some experts insist that the gravity and urgency of the current situation justify the move, while others are concerned that it could violate various ethical and safety standards and put study participants at greater-than-normal risk. Although the development and production of the potential vaccine have been incredibly rapid, its evaluation will take considerable time. All participants will be followed for 12 months after the second dose to collect the data that researchers initially need to find out whether it is safe and effective.
Conclusion
Based on the numbers of infected persons (83,000, 151,767, 167,518 and 173,344 on March 1, 16, 18 and 19, 2020, respectively) released by the WHO (World Health Organization), the numbers of dead d and infected i confirmed, according to the data released daily on the network, rise exponentially from the initial numbers d₀ of dead and i₀ of infected initially confirmed: d = d₀·e^(0.049·t) and i = i₀·e^(0.0409·t), respectively, where t is the number of days, for t = 1 to n. A vaccine has already been obtained and is being tested. The vaccine cannot cause Covid-19 and does not contain the virus, as is the case with some other vaccines. Instead, it contains a small piece of genetic code called mRNA, which scientists extracted from the virus and then expanded in the laboratory. In this case, the mRNA encodes the viral "spike" protein, which is vital for the coronavirus to gain access to human cells. The researchers hope that the vaccine will stimulate the immune system to attack the virus, preventing the development of the disease.
To read more about this article: https://irispublishers.com/aphe/fulltext/the-inside-story-of-coronavirus-pandemic.ID.000504.php
Indexing List of Iris Publishers: https://medium.com/@irispublishers/what-is-the-indexing-list-of-iris-publishers-4ace353e4eee
Iris publishers google scholar citations:
https://scholar.google.co.in/scholar?hl=en&as_sdt=0%2C5&q=irispublishers&btnG=
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
Table 1 shows that the growth rates of both area and production were very high for all years from 2002-03 to 2011-12. Production growth can be explained by HYVs, but area growth is clearly alarming and a threat to food security. The share of tobacco manufacturing in the GDP value of all manufacturing was 5.44 percent in 1996-97 and increased to 6.22 percent in 2004-05. Chart 1 shows that both tobacco production and land under tobacco production were almost stable from 1995 to 2008; after that, both showed an upward jump. Land for tobacco was retained or increased over time, while total agricultural land gradually decreased in Bangladesh.

Farmer's Response to Tobacco Processing Company in Tobacco Cultivation in Bangladesh
Authored by  Khan Mehedi Hasan*
Abstract
Several tobacco processing companies patronize tobacco cultivation in Bangladesh. The major objective of the project was to assess farmers' responses to the strategies of tobacco companies in the Jhenaidah district of Bangladesh. Farmers' perceptions were collected through focus group discussions. Tobacco companies' activities and strategies were discussed first; after that, farmers' reactions to the companies' strategies, in both positive and negative directions, were analysed in the context of the traditional agriculture of the district. Tobacco companies' sales guarantee at a prefixed price motivated farmers to contract for tobacco cultivation. Tobacco companies provided full revenue at one time, which was another point of motivation for tobacco cultivation not generally observed for other crops. Though the cost of tobacco cultivation was very high, it provided higher profit compared with other crops cultivated in the tobacco season. Some crops provided higher profit than tobacco, but those profits were not guaranteed in every season due to high price fluctuation. The companies' input support, provided free of cost or on loan repaid after harvest, attracted many farmers into tobacco. In addition to these company-supported incentives, farmers considered the option of using family labor and unique features of tobacco, such as being a high-value cash crop and safe from disease, as motivations for cultivating tobacco. Farmers' negative considerations regarding tobacco companies were the high charges for inputs, the subjective grading system for cured tobacco, inadequate attention to health and safety issues, and the wide range of hidden and unaccounted costs associated with tobacco cultivation. Non-guaranteed sales and unstable prices were the main challenges for traditional crops in the district, which made tobacco companies' offers lucrative enough to cultivate tobacco. As tobacco displaces food crops and thus threatens the food security of the overburdened people of Bangladesh, the research suggests controlling tobacco companies by limiting the quota of acres of land to be contracted for tobacco and forcing companies to internalize all health- and environment-related costs. Farmers' awareness needs to be raised by assessing and publicizing all the unaccounted cost components associated with tobacco farming. At the same time, it is essential to assure sales of general crops at stable prices.
Keywords: Contract farmer; Deforestation; Tobacco curing; Tobacco processing company; Sales guarantee
Introduction
Background of the study
Tobacco started to replace general crops in Bangladesh in the mid-sixties of the last century. The process was speeded up by British American Tobacco (BAT) on the Teesta silt in the Rangpur area after liberation in 1971. The study by PROGGA [1] showed that tobacco was grown on 70,000 hectares in the 2013 season, about 38,000 hectares of additional land compared with 2012, and 108,000 hectares of land were cultivated for tobacco in 2014. The largest tobacco-growing areas of the country include Rangpur, Kushtia and the Chittagong Hill Tracts [2]. In addition to new land being converted to tobacco, new farmers also enter tobacco cultivation each year, though there is a skipping tendency too. The shares of tobacco in GDP value and employment have also increased over several years in recent decades.
Apart from processed tobacco exports from Bangladesh, domestic tobacco is used as an input for bidi, cigarettes, and other semi-processed tobacco products. Many tobacco companies have established semi-processing units near the tobacco-growing regions. In the country, bidi manufacturing provides annual employment for 621,000 labourers, with 155,000 direct and 466,000 indirect labourers [3]. Tobacco companies across Asia and Africa engage with tobacco farmers to get assurance about the targeted amount of tobacco. As a component of the contract, companies provide different inputs and technical support free of cost or on loan. Support normally includes key inputs like seeds, fertilizers, chemicals and other production implements during the cultivation and curing stages. Farmers are contracted to sell a certain amount of tobacco to these companies. In this way, companies avoid intermediaries in collecting tobacco leaves and are thus able to reduce tobacco procurement costs [4]. Moreover, contract farming allows companies to control tobacco variety, volume and production costs, and creates asymmetric bargaining power between tobacco firms and farmers [5].
Diverse factors, including region-specific factors, are responsible for the spread of tobacco cultivation. Naher and Chowdhury [6] pointed out that family labour, guaranteed sales and ready cash attract farmers to cultivate tobacco. Akhter [7] found that high profit, cash earnings, guaranteed inputs and guaranteed sales play roles in continuing tobacco cultivation. Tobacco companies move into different regions to mine fertile soils. Van Minh et al. [8] showed that in rural Vietnam the average tobacco farmer gets a greater financial benefit from tobacco cultivation than from other crops. Furthermore, the myopic view of farmers about the immediate gains from tobacco cultivation is blamed for the expansion of tobacco culture. There is a causal relationship between low education levels of children (not completing primary level) and tobacco growing in most farming households that grow tobacco [9].
Scientific research has proved that tobacco consumption causes diverse health problems. Globally, about 63 percent of all deaths are caused by non-communicable diseases, for which tobacco products are considered one of the major risk factors. Both mortality and morbidity are increased by tobacco smoking. Both tobacco production and supply are causes of increased tobacco consumption. Globally, about 600,000 people die from second-hand smoke exposure each year. Among these deaths, the large majority occur in densely populated underdeveloped countries [10]. In Bangladesh, about 58 percent of men and 29 percent of women consume some sort of tobacco, either smoked or smokeless [11]. A significant number of people in Bangladesh are at great risk from the use of tobacco products, especially smoking tobacco. The easy availability of tobacco products in Bangladesh is a contributing factor, and it is linked to tobacco production. Over time, tobacco production is increasing in Bangladesh. The study in [12] identified tobacco-related illnesses in Bangladesh.
Tobacco cultivation has tremendous social and health consequences which often go unnoticed. Tobacco farming requires large amounts of wood for a variety of purposes, such as curing, and poles and sticks for barn construction. Tobacco growers cut homestead forests or purchase wood to cure tobacco, which creates huge pressure on forest resources [13]. The study by Siddiqui and Rajabu [14] showed that, on average, 4 kg of fuel wood is consumed to obtain 1 kg of cured tobacco. Those directly involved in tobacco cultivation and processing suffer from various health problems. The victims of this health risk include children, pregnant women and older people who participate in tobacco production or live near tobacco-growing fields [15]. Many dimensions of the social and environmental costs of tobacco cultivation are grossly unaccounted for. For instance, tobacco cultivation puts constraints on other crop farming, adversely affects land fertility and creates external costs through health hazards. Child involvement in tobacco cultivation also leads to increased school drop-out rates.
Many farmers have observed a decline in soil fertility from long-term tobacco cultivation on the same land, which has lowered the yield of other crops too. After adding the opportunity cost of family labour, Van Minh et al. [8] found very minimal profit in one region of Vietnam and negative profit in another. There are movements against tobacco in some regions of Bangladesh, while other regions show growing interest in tobacco. Sometimes farmers have partially shifted away from tobacco production due to government initiatives to bring farmers back into food crop production. Some farmers stopped cultivation completely; many remained in cultivation but reduced the amount of land for tobacco; and some farmers continued or newly entered tobacco cultivation. Though Bangladesh is an agrarian economy, it is characterized by many limitations. Farmers therefore try to find crops that face fewer challenges in the input and output markets. It is urgent to examine the reasons why conventional agriculture (mostly crop agriculture) fails to arrest the increased adoption of tobacco culture. As tobacco is a socially undesirable product, there is a need to discourage farmers. In order to prescribe policies, it is essential to identify the dynamics among traditional crops, tobacco companies and tobacco farmers. Unless tobacco production is reduced, it will be hard to reduce food insecurity and public health risk. Withdrawing poor farmers from tobacco cultivation is not an easy task. In order to minimize tobacco cultivation, the first step is exploring the tobacco firm-farm linkage. Along with exploring the strategies of tobacco companies, it is equally essential to identify the factors that influence farmers to adopt, continue or quit tobacco in the context of traditional agriculture. This will help in understanding how tobacco companies define strategies to encourage tobacco cultivation. This research will contribute to policymaking, as it is claimed that there is no direct policy to regulate tobacco cultivation in Bangladesh [1]. The research output will give a clear direction for designing a participatory tobacco control policy in Bangladesh.
Objectives of the research
The research first aimed to explore the strategies of tobacco companies that support tobacco cultivation. The main research questions were how tobacco companies implement their strategies and what packages they offer to farmers. The second objective of the research was to identify farmers' responses to tobacco companies' strategies. In this case, the research questions were why some farmers decide to contract with tobacco companies and what benefits and costs they consider. This research question helps in understanding the role of various supports from tobacco companies, with a view to arresting the expansion and adoption of tobacco cultivation by farming households. At the same time, it helps to characterize the farmers who want to be engaged in tobacco cultivation and to what extent. The third objective was to explore the context of traditional agriculture and compare it with the context of tobacco and tobacco companies. The research question posed here was why some farmers prefer tobacco cultivation over traditional crop agriculture while others do not. This question helps in understanding the strengths and challenges of tobacco relative to traditional crop cultivation.
Research Methodology
Research context
The study is field-based qualitative research to understand the tobacco companies' incentive packages and the technical assistance provided to farmers at different stages of cultivation and curing. At the same time, it explores the responses of farmers and examines how their responses are affected by the context of traditional agriculture and by farmer prototypes. More specifically, it analyzes whether the limitations of traditional agriculture also encourage tobacco cultivation. Farming households are the unit of analysis. Data were collected from both tobacco growers and non-tobacco growers. A 'tobacco grower' is a farming household that cultivates tobacco, irrespective of whether it cultivates other crops or not. Tobacco growers include both contract and non-contract farmers. On the other hand, a 'non-tobacco grower' is a farming household that does not currently cultivate tobacco but is engaged in other crop cultivation. This category includes farmers from both the never and the ever tobacco-cultivating categories.
The study is confined to the Khulna division of Bangladesh. Among the ten districts of Khulna division, tobacco is widely cultivated in Chuadanga, Jhenaidah, Kushtia and Meherpur districts. In Jhenaidah district, both the number of farms and the area under tobacco cultivation have increased alarmingly during the last couple of years. Renewed interest in tobacco cultivation has been observed in Harinakundo, Mohespur, Sailkupa and Sadar upazilas of Jhenaidah district. In a similar tone, BBS [16] reported that in Chuadanga district, wider adoption of tobacco is observed in Alamdanga upazila, followed by Damurhuda, Jibonnagar and Sadar upazilas. For instance, during the year 2010-2011, 1,695 metric tons of tobacco were produced from 1,342 hectares of land in Alamdanga upazila alone. In Kushtia district, increased adoption of tobacco culture is observed in Daulatpur and Mirzapur upazilas [2] (Chart 2).
Therefore, it is seen that tobacco is widely grown in Shailkupa upazila (Jhenaidah district), Alamdanga upazila (Chuadanga district), and Daulatpur upazila (Kushtia district). In these three upazilas there are a good number of farmers who have skipped out of tobacco cultivation as well. In fact, these upazilas are rich in producing cereal crops, vegetables, beans and other crops. Tobacco is also cultivated in very few areas of Jessore district. TPCs have their processing and operational units in all mentioned districts except Jessore.
Considering the presence of different TPCs and farmers' different levels of attachment to tobacco cultivation, the research is confined to the Jhenaidah district of Khulna division. This district also comprises traditional crop growers, new tobacco farmers and tobacco skippers. On the same grounds, the research is focused on Harinakunda upazila and then Raghunathpur union of that upazila. Considering the extent of the presence of both tobacco growers and non-growers, and the presence of TPCs, Garabaria, Sohagpur and Srifoltola villages were selected as study sites.
Sample and data
Reconnaissance surveys were conducted in different tobacco-growing areas to gain insight into the phenomenon. Based on the reconnaissance survey in 2018, a draft checklist was prepared and pre-tested. Three FGD sessions were carried out, one in each of the three villages. Twelve participants were included in each FGD session, covering contract and non-contract tobacco farmers, general crop producers, and tobacco skippers. Inclusion criteria were current attachment to direct agriculture and permanent residence in the concerned village, so that participants had sufficient knowledge of the agricultural market, transportation system, infrastructure, middlemen, tobacco companies' operations, etc. Contract farmers included members contracted to the Abul Khair Tobacco, BAT and Dhaka Tobacco processing companies. Participants covered various age groups, years of attachment to agriculture, crop categories, family settings, and membership categories of contract tobacco farmers. Participants' perceptions and experiences were narrated on the issues identified, and for some issues secondary information was used to check the validity of the statements. The research focused on farmers' perceptions of tobacco companies' strategies in the context of traditional agriculture. As the research focused on very general issues of agriculture, it was considered low risk; for that reason the researchers did not seek ethical approval, but informed consent was taken from FGD participants. All sessions were carried out in the respective villages.
Results and Discussions
The dealings of tobacco company with farmer
Tobacco companies play an important role in motivating farmers towards tobacco cultivation. Each tobacco company has field staff who visit tobacco farms during cultivation and advise farmers on producing better quality tobacco. The company stays in touch at all stages of cultivation, curing, sales and payment.
Promotion and agreement before cultivation
Tobacco companies appoint field officers to visit farmers two to three months prior to the start of cultivation. Officers meet farmers, ask about their plans, and discuss seed types, incentives, potential prices, etc. to be offered by the concerned tobacco company. Tobacco companies maintain month-wise annual tobacco calendars that cover promotion, technical assistance in cultivation, curing, sales, processing and export to final processing units. During June-July, field officers start visiting the farmers. Many field officers are recruited from local areas, easing communication with farmers at any time, formally or informally.
After the initial discussion with the farmers, companies contract with farmers to ensure the targeted amount of tobacco from a particular region. As part of the contract, farmers apply for membership, which has different categories based on the acres of land to be cultivated for tobacco. Membership starts from one acre and its multiples, such as two, three or four acres of land; there are instances of more than ten acres too. Farmers are required to submit a mortgage, as contracted farmers can receive inputs on loan. The agreement also assures the amount of tobacco to be cultivated and sold to the companies. There is no uniform standard of mortgage, but blank signed bank cheques and land documents are widely observed as security. As legal requirements, farmers are to submit a photograph, a bank cheque of the contracted farmer, and a membership fee, normally BDT 500 irrespective of membership category. The membership matures on an annual basis; when sales and transactions are done, farmers get the mortgaged documents back. For an acre of membership, the farmer is supposed to sell 12 bales (840 kg) of tobacco to the contracted company. The contract can easily be cancelled in the following year.
Input and technical support during cultivation
Tobacco companies provide some inputs free and some on loan. Companies generally provide seed free of cost. Farmers reported that companies provide enough seed, which is planted during August-September. About one and a half months are required for growing tobacco seedlings; farmers then pluck and plant them in another field, where harvest occurs in January-February. In the past, farmers were supposed to select seed by themselves, but they faced a challenge in selecting the right seed, and yield was affected negatively in many cases. Now the companies patronize seed supply, and farmers expressed satisfaction, as TPC-provided seeds are high yielding and manageable to cultivate. In addition to seed, farmers are supposed to receive 200 kg of fertilizer of different categories for an acre of land. Abul Khair Tobacco (a concern of Abul Khair Group) provides four sacks of fertilizer for an acre of contracted land; a farmer can receive 50 kg of Diammonium phosphate (DAP), 100 kg of Sulfate of Potash (SOP) and 50 kg of Phosphorus. Farmers are free to decide how much input to receive, as larger amounts have to be paid for, and large members can receive multiples of these inputs according to land size. Farmers receive fertilizer during November-January. There is no exact delivery date, but field officers notify farmers a few days in advance, and farmers receive the inputs in different installments. One farmer in Sohagpur village reported that he holds a two-acre tobacco card but cultivated tobacco on one acre of land; he received inputs for two acres and used the additional inputs on other crops.
Field staff visit tobacco farms and advise farmers on the optimum seed amount, planting time, amount of land required, spacing between plants, number of plants to be planted in an acre of land, timing and hours of irrigation, insecticide doses and timing, safety issues, and so on. They also suggest additional amounts of fertilizer to be used and the timing of application. One BAT contract farmer said that a field officer was available in the village every week and could be reached any day on call. More importantly, when new species are introduced, field officers visit farmers more often to guide them accordingly.
Curing and sales support after harvest
After plucking the green leaves, farmers cure them. In Jhenaidah district, tobacco is cured by fire, not by sun heat; the tobacco varieties produced in Khulna division (K2 and some other species) are cured by fire. For curing, farmers build a special room called a curing house. Some households have a brick-built curing house, and some make temporary curing houses with jute straw. In either case, curing houses are made airtight so heat cannot escape. Farmers arrange the green leaves and hang them inside. Curing is the most sensitive phase of the whole tobacco cultivation, as just a little over-heating can destroy the tobacco, and bad smoke or improper heating can destroy the tobacco colour. In this regard, the company directs farmers on building efficient curing houses, when to heat, when to stop the fire, when to tie and untie the leaves, how to store them, and how long to wait after firing to get a fine colour. One contract farmer of Abul Khair Tobacco reported that farmers can get meters that read the temperature of the curing house. Farmers can get these on loan, but FGD respondents reported that many farmers do not use them, arguing that the temperature-reading meter is not the only way of securing tobacco curing; moreover, the meter is not free and generally cannot be used in the subsequent year, so meter price and installation cost become another consideration. During tobacco curing, field officers randomly visit farmers' homes and are at least available on call. To ease supervision of the curing process, farmers make curing houses at the homestead. As farmers aim to complete all sales before the rainy season, they are all in a hurry. Farmers generally do not share a curing house with others.
As per the contract, farmers are to sell tobacco to the companies. There are eight grades of cured tobacco, and companies announce the price of each grade initially. When tobacco is taken to the depot of the TPC, grading officers decide the grade of the tobacco: from each tobacco bale, a company officer takes samples and decides the grade for that bundle. At the same time, the company weighs the tobacco and settles payment.
Farmers’ Responses to Tobacco Company Strategies
Farmers have mixed experiences of the strategies and offers of the tobacco companies. They consider and compare them in the context of the general crops grown in the tobacco season.
Farmers’ positive considerations
Sales guarantee at a stable price: As per the contract, companies are to purchase 12 bales (840 kg) of cured tobacco per acre of land. Companies purchase fine quality tobacco and not lower grades. This year the company-provided price was BDT 155 per kg for grade 1, BDT 130 for grade 2, and BDT 120 for grade 3. In Jhenaidah, three companies operate, namely Dhaka Tobacco, BAT and Abul Khair Tobacco, and farmers report that all companies offer the same price for the same grade of tobacco. The announced price for a particular season remains the same for the whole season; even a bumper crop does not affect the tobacco price as it does other crops. Some farmers reported that they were not fully sure of the exact price even when they were contracted, but the survey findings suggest that farmers are not much concerned about it because they can make a standard guess about the price, even before contracting. Normally companies increase the price by BDT 3 to BDT 10 per kg across grades compared with the previous year, which has become a trend, and farmers have never experienced a decline in price. In the FGD, farmers in Sohagpur village reported that they can grow up to 1,000 kg of tobacco on an acre of land. Farmers get some lower-grade tobacco each year, but they are able to sell it to some local companies.
The market mechanism works properly between the demand and supply of tobacco. From the demand side, each company decides the amount of tobacco targeted for the coming season; based on expected productivity, companies determine how much land is required to ensure the targeted amount. Thus, farmers also get assurance of a full sale. Magati et al. [17], from the Kenyan experience, reported that farmers contract with tobacco companies because the companies are a guaranteed buyer. In the case of other crops, the equilibrium price is determined on the spot and varies in every single transaction, and there is huge uncertainty on both the demand and supply sides, as weather, transportation, social occasions and natural calamities affect all aspects.
Some other uncertainties of traditional agriculture push farmers to consider tobacco production. Lack of a fair price is another consideration. In many seasons, farmers found banana, pulses, onions, betel leaves, arum, etc. more profitable than tobacco, but in some seasons the prices of those crops were very low. Some farmers of Sohagpur village reported that garlic yielded a higher profit than tobacco in the previous year (2018); observing that, many farmers cultivated garlic, but the price was low this year. Thus, some seemingly highly profitable crops like garlic, onion and ginger face price fluctuation. Some of these are also imported, which adds further fluctuation in price and quantity. Since there is no certainty about the prices of these domestic crops, farmers remain unsure and consider crops with stable prices, like tobacco.
Assurance of full revenue immediately after sale: Farmers can sell any amount of cured tobacco within the contract on any day. After adjusting for the cost of incentives, companies provide full payment by cheque. Farmers consider this payment option as forced saving, and it becomes a source of investment in social functions such as marriage and in development expenditure such as building a room. After receiving the tobacco, the company asks farmers to come on a certain day to receive the cheque. There is no uniform standard for fixing the payment date; it can vary from two to fifteen days and varies across companies. FGD respondents reported that BAT pays farmers within 7 days, while Dhaka Tobacco and Abul Khair pay within 15 days, and there are instances of early or delayed payment as well.
Even in the case of single-yield traditional crops, farmers in most cases do not receive their full income at one time. Many crops are consumed at home, partly or fully. Farmers sell a good portion immediately after harvest, even at a low price, in order to meet loans and other obligations, and hold some portion to sell later when prices are expected to go up. In most cases, other products are sold on credit, and there are no contractual buyers for traditional crops to whom farmers can sell. For these reasons, farmers do not get the total amount at one time. For general crops there is no such agreement between buyer and seller, credit sales are frequent, and there is no specific time limit for getting the money back; sometimes credit sales become non-recoverable too. Many farmers reported that they cannot save a large amount from small payments and face problems in dealing with expenses requiring a lot of money, such as building a new room in the house, the expenditure of a marriage, or purchasing an irrigation pump. So, getting a large amount of money at one time is another motivating factor for tobacco cultivation.
Some crops provide a yield all at once, like paddy, wheat and jute, but many crops provide repeated yields, like chili, tomato and brinjal. The nearest market for Garabaria village is Charatola, where Monday is the only main trading day of the week. Farmers of the village reported that middlemen control prices along the marketing chain, squeezing the farmers. Many farmers remain unaware of product prices in other markets due to information gaps. Even when some farmers are informed about higher prices in a nearby market, many small farmers are forced to sell in the local market, as they cannot carry small amounts to another market because of transportation and labour overhead costs. Many farmers' crops end up either relatively immature or over-mature because they target Monday.
With the transportation overhead, many farmers also consider Haldani bazar, where Friday is the main trading day; from this market, vegetables and crops are ultimately transported to different districts. When there is bad weather, such as severe rain, farmers often fail to transport products to market, and local transportation also becomes a problem for them. Many farmers of Srifoltola village must even cross a small canal to reach the main road and then move on to Charatola or Haldani bazar. On other days of the week, farmers can bring products to the market but sell to local clients piecemeal; in these cases, per-unit transport costs also increase. All farmers in Sohagpur village reported that they are used to selling both in cash and on credit. Small sales earnings are spent almost fully on groceries and other household needs, often on the same day, so farmers cannot build up substantial savings.
Higher profit from tobacco: Although the cost of tobacco cultivation is much higher than that of other crops, tobacco provides a higher profit. One farmer responded that he earned BDT 70,000 as profit from an acre of land even after paying land rent; however, he mostly used family labour. If the curing goes well and farmers can produce a maximum amount of 'Grade 1' tobacco, profits increase further. Another farmer reported that the per-decimal profit for tobacco is about BDT 450, which was the highest for any crop he cultivated in the last season. Hassan et al. [18] found that in the Rangpur region tobacco was the most profitable crop. Per-acre profit from wheat is about BDT 15,000. Rice is not considered very profitable in any season, but farmers cultivate it at least for domestic consumption. The prices of wheat and rice remain relatively stable, but profits from them remain very low. That is why farmers target some high-profit crops to maintain family expenditure.
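A rough back-of-the-envelope per-acre calculation is sketched below, using the yield and grade prices quoted in this article and the cultivation cost of roughly BDT 50,000 per acre noted in the next paragraph; the split across grades is an assumption made only for illustration.

```python
# Rough per-acre tobacco margin using figures quoted in the text: grade prices
# of BDT 155/130/120 per kg, a yield of about 1,000 kg per acre, and a
# cultivation cost of roughly BDT 50,000. The grade split is an assumption;
# actual grading is decided by the company's grading officers.
yield_kg = 1000
grade_prices = {"grade1": 155, "grade2": 130, "grade3": 120}   # BDT per kg
grade_share  = {"grade1": 0.5, "grade2": 0.3, "grade3": 0.2}   # assumed split

revenue = sum(yield_kg * grade_share[g] * grade_prices[g] for g in grade_prices)
cost = 50000                                                   # approximate cultivation cost (BDT)
print(f"revenue ~ BDT {revenue:,.0f}, profit ~ BDT {revenue - cost:,.0f}")
# ~ BDT 140,500 revenue and ~ BDT 90,500 margin before land rent and the
# unaccounted costs discussed elsewhere, broadly consistent with the BDT 70,000
# profit after land rent reported by one farmer above.
```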
Input and technical support from the company: Though tobacco is the most expensive crop to cultivate in the tobacco season, farmers cultivate it for the high profit. To cultivate tobacco on one acre, more than BDT 50,000 is required on average, whereas the cost of cultivating an acre of wheat or garlic is about BDT 18,000 and BDT 32,000, respectively. The company's free inputs are therefore an important consideration. In addition to the company-supplied 200 kg of fertilizer, a farmer uses 12 sacks (600 kg) of additional fertilizer (800 kg per acre in total, on average). If land rent is considered, the cost of cultivation becomes even higher. Even though tobacco companies charge a higher price for the 200 kg of fertilizer, farmers prefer this support: primarily, it reduces the monetary burden during cultivation, and moreover, the price is adjusted after harvest, which is generally not the case for loans from NGOs and banks. Contract tobacco farmers get technical support from the field officers during cultivation and curing free of cost. Even when new tobacco species are launched, farmers do not get worried, as they expect to get solutions from the tobacco companies' field officers.
But in the case of other traditional crops, farmers expressed resentment about not getting proper services from government agriculture officers. When farmers go to the upazila agriculture office, many officers are not to be found, and where officers are available, farmers get advice without any direct observation of the problem. As the agriculture office is located only at the upazila headquarters, farmers of remote villages do not feel motivated to go there. In most cases, farmers try to get technical solutions from local agricultural input sellers, who are not experts.
In traditional agriculture, farmers try new varieties, for which they need to be trained properly and to get technical solutions from time to time. Traditional farmers' next most important problem is a lack of crop-specific training from any organization.
Other factors affecting tobacco cultivation
Along with the company-provided facilities, there are other reasons why farmers become interested in cultivating tobacco.
Employment of family labour: Tobacco cultivation requires a huge amount of labour, for which other family members, including children and women, get involved in the curing phase. They are employed to tie green leaves, to add firewood and to take cured tobacco out of the curing house. Family members also help to store the tobacco at home, to make bundles for sale, etc. To avoid payment for hired labour, the use of family labour has become a commonly followed alternative in Jhenaidah district. In the rural areas of the district, BDT 350 to 400 is required for a hired labourer who normally works six to seven hours a day. Sometimes labourers are required for only a few hours in a day; in such cases it becomes a monetary burden to hire labourers, as they demand full payment. In contrast, family labourers have more flexible working hours. Even when farmers are involved in other jobs, they can work in their own tobacco field in the early morning, the late afternoon, or at their convenience.
Some traditional cultural factors are also at work in Bangladesh. Children and women generally do not work in others' fields but do work in their own crop fields. Otañez and Glantz [19] identified the use of child labour in tobacco production. Even many unemployed male adults do not work in others' fields as paid labourers; in the absence of tobacco cultivation, those unemployed or underemployed family labourers generally work in others' fields as paid labourers. Tobacco curing continues for several weeks and requires tremendous effort over days and nights. Due to its sensitivity, farmers aim to look after the curing phase themselves. As curing is done at home, many households do not feel comfortable allowing outside labourers to reside in their home for a few days; in traditional rural Muslim society, privacy is a concern. Food, accommodation and hired labour payment become other considerations for not employing hired labour. FGD findings support the view that when both family and hired labourers are used, in most cases the total work hours of family members far exceed those of hired labourers. Though family labour is also used in other sorts of agriculture, the amount of labour used in tobacco cultivation is much higher than in any other agriculture. In another sense, it is an option for selling family labour in one's own field.
Demonstration effect: Tobacco cultivation is strongly influenced by household and land neighbours. It was reported that tobacco tends to become concentrated in land surrounding existing tobacco plots rather than in scattered plots, and one plot's crop selection affects the crop selection of nearby plots. Having the same crop on nearby land helps farmers manage tillage, irrigation, pesticide use and other activities efficiently. When field officers visit a field, other farmers, including non-contract farmers, also get technical support free of cost, and a farmer can take help from other farmers too. Farmers of Srifoltola village reported that tobacco producers use excessive amounts of pesticides compared with other agro-products; insects then move to surrounding crops that follow a different pesticide schedule, and farmers of those crops find it difficult to match their pesticide use to the tobacco.
Neighbours' cultivation practices also influence farming behaviour. Some FGD participants in Srifoltola village reported that non-contract farmers can sell tobacco to the tobacco companies via other contract farmers. Rashid Sheikh said, "I started tobacco as my neighbors were cultivating for years and doing some development expenditure at the end of tobacco season." Tobacco farmers can also get help from neighbours in receiving inputs from the depot. Abul Khair's depot is situated in Jhinbaria, about 10 km from Garabaria village, and BAT's depot is situated in Vennatola, also about 10 km from Garabaria village; farmers contracted to these two companies collect inputs and sell tobacco within Jhenaidah district. In the case of Dhaka Tobacco, the selling spot is located in Kushtia district, but field officers develop convenient mechanisms for receiving inputs, generally from local areas. With the assistance of the field officers, one farmer can receive inputs on behalf of others, although at the time of tobacco sale and payment the farmer must be present in person. Sometimes farmers also enjoy economies of scale in purchasing firewood and in transporting cured tobacco to the companies.
Unique features of tobacco: Tobacco has a very high sales-to-production ratio, as none of the crop is kept for household consumption; producers can sell almost 100 percent of the tobacco they produce. Card-holding contract farmers are assured of sale to the tobacco companies, and farmers can sell lower-grade tobacco to local producers of jarda, gul and bidi. Betel leaf consumption is very high in Bangladesh, and people chew it with betel nut and various processed tobacco. It is also very common in the region for non-contract farmers to sell tobacco to a TPC via another contract farmer's card. The sales ratio is far lower for other crops, which suffer from high household consumption, gifts to relatives, damage from bad weather after harvest, lack of storage facilities, and so on. As agriculture provides cattle feed, rural people have an advantage in rearing cattle; in most cases cattle are tethered in the field to feed on grass, but the rope pin frequently comes loose, allowing cattle to eat and destroy crops. There are many instances where crops were completely ruined by a cattle attack lasting only a few minutes. Cattle, however, do not eat tobacco leaves, which is an important consideration for farmers. Tobacco also has a high tolerance of insect attack and, being a winter crop, is safe from the monsoon, which causes problems for many crops. Moreover, farmers do not mind using excessive fertilizer and pesticides on tobacco, unlike on vegetables and other food crops.
Farmers’ Negative Considerations
High cost of fertilizer
Generally, farmers do not pay for seeds or for technical advice from the TPC, but other inputs are provided on loan and the cost is deducted when farmers receive their sales earnings from the companies. All participants in the FGD sessions conducted in Garabaria village complained that all companies charge a higher price for fertilizer than the current market price. Though receiving inputs is optional, experience suggests that almost all contract farmers take them on loan because of their poor economic condition. One contract farmer of Srifoltola village reported that he has to pay BDT 6,000 for the contracted four sacks of fertilizer (200 kg), which comprise a mix of DAP, SOP and phosphate. Since larger leaves are in high demand and are considered for a higher grade, farmers often use excessive fertilizer to increase leaf size. One farmer said he paid BDT 6,000 for fertilizer against an actual market price of BDT 4,875. Tobacco farmers of Garabaria reported that they do not mind paying a higher amount, but the markup is steep for a deferral of only two to three months.
Poor tobacco grading system
There is no technical test for tobacco grading; instead, company officers determine the grade by experience and judgement. One farmer in Garabaria reported that he took three bundles of the finest tobacco to a company, of which the grading manager treated one bundle as grade 1 and two bundles as grade 2. He bargained, and the manager then treated another bundle as grade 1. This might seem a win for the farmer, but he remained dissatisfied because in his view the remaining bundle (treated as grade 2) was also of the finest quality and could certainly have been considered grade 1. In most cases farmers have little say over the company-decided grade. One farmer mentioned that muscle power, political power, relationship with the company, influence of the field officer, the farmer's contract size and so on also sometimes make a difference in deciding grades. Some influential farmers purchase tobacco from non-contract farmers and sell it to the company in their own name. Farmers resent the tobacco prices they get from the companies: though companies increase the tobacco price each year, the rate of increase is lower than the increase in the cost of cultivation. The companies set the prices and the farmers are price takers, as there is no organized association of tobacco growers.
No measure to tackle adverse health effects
Tobacco curing is hazardous to health. As the curing house is located at the homestead, all family members are exposed to the harmful gases. They inhale them when they hang green leaves inside the curing house, fire the oven and release the dried leaves. Normally someone has to tend the fire and check the leaves continuously, day and night, to add firewood, check the temperature, observe leaf condition, tie and untie leaves, and then store them at home before sale. To complete one shift of burning, farmers need to keep the fire going for about 72 hours (three days) continuously; if overnight heating is skipped, more days are required to dry a shift. After firing, additional days in the curing house are needed to achieve the fine colour that is important for a higher grade. After releasing cured tobacco from the curing house, farmers immediately prepare for another shift. Depending on the amount of tobacco leaves, the total number of curing days varies from household to household, but it may last for up to two months for large farmers. As cured leaves are stored at home, even in the sleeping room, all family members are affected by inhaling noxious gases. When members of tobacco-growing families go out, people can identify them by the smell of tobacco on their clothes and by their exhausted appearance. Apart from inhalation of noxious gases, very fine, light tobacco dust easily gets mixed with foodstuffs, so even family members who do not participate in curing become victims. One farmer of Srifoltola village reported food poisoning after tobacco dust particles were accidentally mixed into food cooked for the daily meal at the homestead. Cured tobacco is released from the curing house either at night or in the very early morning to avoid the sun's heat. Because variations in heat level can ruin the tobacco, farmers continuously check the heat and cannot sleep well at night. No safety masks or other protective equipment are provided by any TPC. Though some field officers ask farmers to adopt protective measures, farmers rarely do so of their own accord. It is unfortunate that the TPCs do not provide any safety equipment for the curer, let alone for other family members. There is almost no instance of farmers using gloves, masks or sunglasses themselves; sometimes they cover their nose and mouth with old clothes.
Tobacco cultivation exerts both short-term and long-term effects on mental health [20]. Surveyed households reported that members fall sick with various diseases, and frequent use of pesticides in the tobacco field causes pesticide poisoning [21]. Tobacco-related conditions such as Green Tobacco Sickness (GTS), including dizziness and nausea, acute cough, chronic cough, respiratory problems and pesticide poisoning were reported in higher proportions in tobacco-cultivating households. Riquinho and Hennington [20] found that respiratory disorders, symptoms of mental disorders, pesticide poisoning, musculoskeletal injury and other diseases were associated with tobacco cultivation and processing. Illness reduces stamina, working hours and working days, and while attending to sick members, other family members also lose working hours or wages. Families incur doctor's fees, medicine costs and associated transport costs when a doctor is consulted. Thus, households bear health costs that include both wage losses from illness and the direct costs of morbidity.
Unnoticed and unaccounted cost of tobacco cultivation
Many farmers focus on short-term profit but overlook many cost components, such as unpaid family labour. Farmers do not consider the opportunity cost of labour, such as wage earnings from other work or the value of forgone leisure. Among the different shifts of tobacco, farmers experience poor curing in some shifts, and there are instances of a shift being completely destroyed; if this happens even once in several years, it can become a ruinous burden. About 50 maunds of firewood are required to cure the tobacco from one acre of land, and some farmers of Srifoltola village reported needing 70 maunds for the same area. When they use home-supplied firewood they usually neither measure it nor count it as a cost. Since all these costs accrue because of tobacco farming, they should be deducted from the profit of tobacco farming to calculate the real gain. Farmers cut many trees or purchase firewood for curing, which is an environmental concern; thus, tobacco augments deforestation [19,22]. In Malawi, tobacco expansion has occurred at the expense of forest cover [23], and there are similar examples in many other countries. In terms of tobacco-driven deforestation, Bangladesh ranks third in the world [24]. If all these items were given a financial value, the profit from tobacco cultivation might be seen as minimal. The tobacco crop also occupies much of the year: from seedbed to sale, about eight months are required, and although the land itself is not occupied during the curing phase, the farmer's mental and physical attachment to tobacco continues.
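To make this point concrete, the sketch below contrasts the "apparent" cash profit from one acre with a figure adjusted for the unaccounted items discussed above. Only the cash cost of cultivation (about BDT 50,000 per acre) and the firewood requirement (about 50 maunds) are taken from the text; the revenue, labour days, firewood price and land rent are purely illustrative assumptions, not survey data.

```python
# Illustrative sketch: apparent vs. adjusted profit from one acre of tobacco.
# All monetary values are hypothetical except where noted; they are NOT survey results.

cash_cost = 50_000          # BDT per acre, paid cost of cultivation (from the text)
gross_revenue = 90_000      # BDT, assumed sales value of cured tobacco (hypothetical)

# Unaccounted items discussed in the study (all quantities and prices assumed):
family_labour_days = 120            # assumed total family labour over the season
wage_rate = 375                     # BDT/day, midpoint of the reported 350-400 hired wage
home_firewood_maunds = 50           # maunds needed to cure one acre (from the text)
firewood_price = 150                # BDT per maund, assumed local price
land_rent = 10_000                  # BDT per acre per season, assumed

apparent_profit = gross_revenue - cash_cost

unaccounted_cost = (family_labour_days * wage_rate
                    + home_firewood_maunds * firewood_price
                    + land_rent)

adjusted_profit = apparent_profit - unaccounted_cost

print(f"Apparent profit : BDT {apparent_profit:,}")
print(f"Unaccounted cost: BDT {unaccounted_cost:,}")
print(f"Adjusted profit : BDT {adjusted_profit:,}")
```

With these illustrative numbers the adjusted profit turns negative, which is the sense in which the real gain from tobacco can be "minimal" once hidden costs are priced in.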
After three to four years of continuous tobacco cultivation, farmers observe tobacco root residues (locally called gara) in the soil; one farmer reported that cultivation can continue for up to five years before these roots are observed. This plant residue remaining in the field gradually degrades soil quality [7]. After the roots appear, production falls for both tobacco and other crops. Tsadilas et al. [25] found that soil pH was reduced by 0.5 units after four years of tobacco cultivation. To nullify the effect, farmers stop cultivation for a season; this is another cost component that is generally not accounted for. FGD respondents reported that farmers must use more fertilizer than in the previous year to maintain production levels: per bigha (33 decimals) of tobacco, they use roughly 5 kg more fertilizer than in the past season, and this is becoming a trend, which gives a sense of the decline in fertility. Farmers also reported from field experience that they use so much fertilizer on tobacco that the land requires less fertilizer in the post-tobacco season (kharif I) than land in which tobacco was not cultivated in the rabi season.
About two months are required for tobacco curing and other processing activities. School-going children also get involved in curing; as most adults remain occupied with curing, the children's workload in other household tasks increases as well. Curing activities done at night disrupt sleep. Because of the harmful gases, dizziness, nausea and weakness are most often observed among small children, so they cannot study properly; many skip school because of dizziness, GTS and the excessive workload.
The study found that there is less visiting between tobacco-cultivating households and their relatives, especially during the tobacco curing stage, which lasts about two months. Family members' preoccupation with curing leaves them no leisure or time to socialize with relatives. Many relatives dislike the bad smells and gases from the curing process and from cured tobacco stored for sale, and relatives staying overnight are in many cases lodged in the tobacco-storing rooms. All clothes acquire a bad odour that lingers even when household members go out, so that people can easily identify them; relatives also consider the health risks. Tobacco households likewise cannot visit their relatives during curing time. Many respondents reported that families hesitate to marry their daughters into tobacco-cultivating households because of the excessive workload.
Some tobacco growers have quit tobacco cultivation after considering some of these costs. One respondent in Sohagpur village said that for the last two years he has become exhausted after only a few hours of work; he cultivated tobacco for six years but stopped last year because of his illness, and he is not interested in continuing tobacco with hired labourers. Many farmers now strive to cultivate HYV crops, and in many of those cases crop durations have been reduced. If managed well, non-tobacco land can produce up to five seasonal crops and vegetables, and many farmers aim to be early harvesters of seasonal vegetables to get high prices.
Policy Recommendations
Minimizing limitations of traditional agriculture
It is evident that tobacco has displaced food production, especially on fertile land, in Bangladesh. The fertile region of Kushtia, the second largest tobacco-producing district in Bangladesh, was once a food-surplus region; now tobacco occupies fertile lands in the district, and this trend is also taking place in other districts, including the CHT. Bangladesh is struggling to attain food sufficiency for more than 160 million people in an area of 147,570 square kilometres. If more land is devoted to non-food production, food security is threatened, because tobacco-producing communities face a loss of local food production [26]. Bangladesh is losing 1 percent of arable land every year, in part due to erratic rains and land degradation (UNWFP, 2011), and there is additional pressure on cultivable land from urbanization and other sources. Many tobacco-producing countries also face food scarcity: 12 of the top 25 tobacco leaf producing countries had higher levels of malnutrition in their populations between 2011 and 2013 [27]. At the same time, tobacco production has been declining in developed countries. The steepest decline has been experienced by the United States of America, from 20 percent of world tobacco output to 8 percent between 1970 and 2000, followed by the European Union, whose share of production fell from 14 percent to 7 percent [28]. To generate renewed interest in traditional agriculture and bring farmers back from tobacco, the limitations of traditional agriculture must be reduced in many respects, notably by providing technical assistance, crop-specific training, and price and sales guarantees. The government already has an organization, the UAO, to guide farmers; it should ensure that officers remain at the office during office hours, that the number of officers is adequate, and that officers arrange regular training for farmers and stay in touch with them. Agriculture officers should be trained in recent crops and practices, and government field officers should visit crop fields where applicable. Farmers' awareness is also essential so that they do not depend on local medicine sellers for treatment advice. To assure farmers of sales at a stable price, there is a provision for government purchase, but it mostly targets non-perishable crops such as rice and wheat; there is no such measure for perishable products. The government could pursue the same approach for perishable crops, especially those produced in the tobacco season. Sales assurance at a stable price would convert many farmers from tobacco to general crops, and crop insurance for traditional crops in the tobacco season could be another solution.
Publicizing the health costs and assessing real gain of tobacco farming
Tobacco cultivation causes many health problems, yet most tobacco-growing farmers reported that neither the government nor any other organization has advised them to stop tobacco cultivation. Initiatives should be taken to increase awareness by alerting farmers to the short- and long-run health costs. Tobacco companies sometimes inform farmers, but their main focus is on the effects of smoking, so the health and/or agriculture ministries need to initiate mechanisms to inform farmers; media broadcasting can also have a positive effect. Scientific research is also required to determine what gases are produced when green tobacco leaves are burnt and to what extent they are injurious to health.
Tobacco has both short-run and long-run positive and negative aspects. Family labour remains unpaid and unaccounted for, and farmers do not calculate the opportunity cost of labour and leisure. Further research is needed on the long-run gain from tobacco, assessing unaccounted and unobserved cost components such as unpaid family labour, the opportunity cost of leisure, land degradation, the cost of keeping land uncultivated for tobacco, and the health costs of pesticides and tobacco curing. There are also costs to children's health and education and to women's health, and many farmers do not consider the cost of home-supplied firewood; cutting trees for fuel is another area of concern. Thus, all tangible and intangible benefits and costs should be assessed holistically to determine the real gain from tobacco.
Imposing control over tobacco companies
Many countries have reduced tobacco production through various government actions. The shifting of tobacco production from developed to developing countries, especially Brazil, has been favoured by a decrease in government subsidies [29]. Tobacco is not patronized by the government of Bangladesh, but neither does the government prohibit farmers in the region from cultivating it. As a first step, the government could limit the acreage that tobacco companies can contract for tobacco cultivation. Where tobacco cultivation continues, the government can impose other controls on the tobacco companies. Schmitt et al. [30] suggested adopting effective interventions to address the needs of tobacco harvesters and minimize risk. For example, tobacco companies should be made to internalize health and environmental damage, which would increase the cost of tobacco procurement and thus reduce tobacco cultivation. At present no company provides safety equipment for the curing stage or for spraying insecticides. Since tobacco-cultivating households bear health burdens in the form of medical costs, lost workdays and so on, companies must internalize these in their business model, for example through financial compensation for the health burden. As tobacco is called a 'forest killer' [31], alternatives to existing curing approaches should be considered for environmental safety; in this regard, [32] found that tobacco cultivation led to deforestation and loss of biodiversity in Tanzania, Riquinho and Hennington [20] also highlighted the deforestation issue, and many African countries face similar problems at present. Many farmers reported discriminatory practices in tobacco grading, so the government could impose technical grading to safeguard the interests of the growers. The government should also regulate cases where companies charge excessive prices for inputs offered on loan.
Conclusion
Farmers are drawn to tobacco by short-run profit: they can get a large amount of cash in one go. Beyond profit, however, many farmers continue tobacco with dissatisfaction, especially because of the huge labour hours and health burdens, and farmers are gradually becoming more conscious of health issues. Households with higher incomes give weight to labour hours, health issues and leisure, and are thus getting out of tobacco; on the other hand, finding no other alternatives, many landless, marginal and poor farmers are entering into it. Minimizing the problems and uncertainties of traditional agriculture may turn some farmers from tobacco back to traditional crops. It is encouraging that the long-standing limitations of traditional agriculture have been decreasing: HYV seed has increased production in many crops, businessmen have started to transport goods to other markets even in remote locations, and, with a population of more than 160 million in Bangladesh, demand for agricultural goods has been increasing. One farmer of Sohagpur village reported that many farmers manage four to five crops on a piece of land in a year; this has become possible because of shorter crop durations, as some vegetables mature in one and a half months or even less. Some NGOs support and promote different crops and seeds at different times, though not in a consistent manner. But until a market for traditional crops is assured, there is little likelihood that farmers will give up tobacco cultivation. In the study area, it was observed that there is no direct policy to regulate tobacco. As tobacco cultivation entails many long-run costs and negative externalities, there must be a policy balance between imposing controls on tobacco companies and minimizing the challenges of traditional agriculture.
To read more about this  article: https://irispublishers.com/aphe/fulltext/farmers-response-to-tobacco-processing-company.ID.000503.php
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
Quality Assessment of the Efficiency of the Warming Cabinet Method in Warming Fluids in Anaesthesia
Authored by  Essam Abdelrazek*
Abstract
In this observational study, we assessed the efficiency of a 40°C warming cabinet method for heating intravenous (IV) fluids administered to patients during surgery. Twenty-four bags of Ringer's lactate were placed in a warming cabinet set at 40°C. Fluid temperatures at two points of the delivery system (bag and patient end) were measured every 3 minutes for 15 minutes after removal from the warming cabinet.
Results: The bag temperature (34-31°C) was significantly lower than 40°C but remained above room temperature throughout the study period. The patient-end temperatures were significantly lower than bag temperatures at all time points (p<0.01) and approached room temperature within 3 minutes. We conclude that the warming cabinet technique used in operating theatres to warm fluids prior to IV infusion is clinically inefficient: it fails to deliver IV fluids at a sufficiently warm temperature and may therefore put patients at risk of unanticipated hypothermia. We recommend that this method of fluid warming be revised and replaced by another method that is more clinically effective and reliable in keeping the patient's temperature within normal limits under anaesthesia.
Keywords: Hypothermia; Prevention; Physiological effects
Introduction
Hypothermia is a well-known complication of general as well as regional anaesthesia [1,2]. It can have detrimental effects on the patient's hemodynamic, respiratory, neurological and metabolic condition [3]. Post-operative shivering in recovery is usually a very unpleasant and distressing experience for patients as well as the recovery room staff [4]. It can delay recovery from general anaesthesia and may increase the patient's pain in the immediate post-operative period, which can be difficult to control without improving their temperature [5]. Intraoperative administration of pre-warmed intravenous (IV) fluids (37-40°C) has been shown to be one of the useful methods for countering anaesthesia- and surgery-induced hypothermia in patients undergoing surgery [6]. Different methods have been suggested to warm IV fluids given to patients undergoing surgery, with variable degrees of efficiency [7-10]. One of these is the use of electrically operated warming cabinets, considered a simple, easy, readily available and relatively cheap method of warming fluids in the operating theatre; it has therefore become popular globally. However, evidence of the efficiency of this technique remains unclear [7-9].
At the Royal Bournemouth Hospital (RBH), using warming cabinets to warm the intravenous fluids given to patients under anaesthesia is standard practice. However, clinical observation has suggested that the warming cabinet is not fully reliable, because the warmed fluids appear to lose heat quickly as they pass down the giving set. This study was designed to assess the efficiency of the warming cabinet technique as a method of pre-warming IV fluids and, by inference, its appropriateness for maintaining patient temperature during surgery.
Methods
Approval of an Ethics Committee was not necessary as no patients were involved. Based on the results of an initial pilot study, a mean difference of 10°C (standard deviation ±2.25) between the starting temperature on leaving the warming cabinet and the two measurement points A (the fluid bag) and B (the distal, patient end of the giving set) was taken as the difference to be detected; a sample size of 24 fluid bags was calculated based on a significance level of p<0.05 and 90% power. According to standard practice at our institution, a warming cabinet (W157, LEEC Limited, Nottingham, U.K.) was used to warm the fluid bags. Twenty-four fluid bags of equal volume (1 L) were placed in the warming cabinet, set at 40°C, 12 hours prior to testing. Using the wall-mounted room temperature sensor (Sauter Ltd., Basel, Switzerland), the room temperature was kept at 21±0.23°C. The fluid sample temperatures were measured using the C21 Comark thermometer (Comark Instruments, Norwich, U.K.), a battery-operated digital thermometer designed to measure temperature over a wide range (-50 to 150°C) with reliable accuracy.
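For illustration, the sketch below shows how a generic sample-size calculation of this kind could be carried out in Python with the statsmodels library. The authors' exact method and assumptions are not reported, so the inputs are simply the pilot figures quoted above, and the resulting number may differ from the 24 bags used in the study.

```python
# Minimal sketch of a power-based sample-size calculation (assumption: a
# one-sample/paired t-test on the temperature difference). Inputs are the
# pilot figures quoted above; the study's own method is not described in
# detail, so this is illustrative only.
from statsmodels.stats.power import TTestPower

mean_difference = 10.0               # degrees C, from the pilot study
sd = 2.25                            # standard deviation, from the pilot study
effect_size = mean_difference / sd   # standardized effect size (Cohen's d)

n_bags = TTestPower().solve_power(effect_size=effect_size,
                                  alpha=0.05,      # two-sided significance level
                                  power=0.90,
                                  alternative='two-sided')
print(f"Estimated number of bags required: {n_bags:.1f}")
```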
Once removed from the warming cabinet, the tested fluid bag was attached to a standard giving set with a total capacity of 20 ml (Fresenius Kabi, Homburg, Germany). The fluid temperature was checked at two different points:
I. Temperature point A (the bag end): via a 14 G cannula inserted into the bag and connected to a three-way tap for easy and frequent aspiration of a 20 ml fluid sample, enabling measurement of the bag-fluid temperature every three minutes for fifteen minutes.
II. Temperature point B (the patient end): via a three-way tap placed at the distal end of the infusion giving set, enabling easy and frequent aspiration of a 20 ml fluid sample at the patient delivery point every three minutes for fifteen minutes.
All the fluid samples were collected using a 20 ml syringe and placed into a plastic cup for immediate temperature measurement performed by blinded investigators (JW and JVR).
Statistical analyses were performed using IBM SPSS Statistics 22.0. The mean value of the two recordings made at each point and time was used in the analysis. Unless otherwise stated, reported statistics are mean (SD). Comparisons were made using one-sample t-tests and repeated-measures ANOVA with post hoc comparisons where appropriate, and the following measurements were compared at each time point (a sketch of an equivalent analysis in Python is given after the list below):
I. Temperature difference between Point A and Point B.
II. Temperatures at Point A versus the warming cabinet temperature
III. Temperatures at Point B versus the warming cabinet temperature.
IV. Temperature difference between Point A and room temperature.
V. Temperature difference between Point B and room temperature.
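The original analysis was performed in SPSS. As a rough illustration of how an equivalent repeated-measures comparison could be set up in Python, the sketch below uses statsmodels' AnovaRM on simulated data; the column names, bag identifiers and temperature values are invented for the example and are not the study's measurements.

```python
# Illustrative repeated-measures ANOVA comparing Point A (bag end) and
# Point B (patient end) temperatures across sampling times. The data are
# simulated to loosely resemble the reported pattern; they are NOT study data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
times = [0, 3, 6, 9, 12, 15]           # minutes after leaving the cabinet
rows = []
for bag in range(24):                  # 24 fluid bags, as in the study
    for t in times:
        temp_a = 34.0 - 0.2 * t + rng.normal(0, 0.5)   # bag end cools slowly
        temp_b = 26.5 - 0.3 * t + rng.normal(0, 0.5)   # patient end cools faster
        rows.append({"bag": bag, "time": t, "point": "A", "temp": temp_a})
        rows.append({"bag": bag, "time": t, "point": "B", "temp": temp_b})

df = pd.DataFrame(rows)

# Two within-subject factors: sampling point (A vs B) and sampling time.
result = AnovaRM(df, depvar="temp", subject="bag", within=["point", "time"]).fit()
print(result.anova_table)
```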
Results
Throughout the study period, the cabinet temperature was maintained at 40° C and the ambient room temperature ranged from 20.7 to 21.5 ºC with a mean of 21±0.23ºC. All temperatures recorded at each sampling location for each sampling time are presented in Table 1 and Figure 1.
Table 1: Summary of mean (SD) temperature recordings (ºC) for each sampling time and location (n = 24) and the temperature difference between Point A and Point B. All time points were associated with a significantly lower temperature at Point B compared to Point A (p<0.01).
Differences in temperature between point A and point B (Table 1 and Figure 1)
There was an overall significant difference between the temperatures taken at Point A versus Point B (F(1,46) = 4382.63, p < 0.001). Post hoc comparisons showed significant differences between Point A and B at each sampling time (p < 0.001). Temperatures were significantly lower at Point B at all time points, with a minimum difference at time 0 (7.8±0.9ºC) and a maximum difference at 3 minutes (9.21±0.73ºC). At the end of the study period, the difference between Point A and Point B was 8.44±0.74ºC.
Bag temperature (point A) compared to warming cabinet temperature
The starting fluid-bag temperatures at Point A, measured after attachment to the giving set, were significantly lower (p < 0.001) than the cabinet setting of 40ºC, by a mean of 5.84 (0.87) ºC. However, throughout the study period, the mean bag temperature remained above 30ºC (Figure 1).
Distal end delivery point temperature (point B) compared to warming cabinet temperature
The starting distal end delivery point temperatures at point B were significantly lower (p<0.001) than the warming cabinet temperature (40 ºC) by a mean of 13.6 (0.81) ºC. Throughout the study, the mean temperature measured at point B continued to decrease. At the end of the study period, the mean difference between the two temperatures was 17.21 (0.43) ºC (Figure 1).
Differences between Point A temperature and room temperature
The bag temperature was significantly greater than room temperature (p<0.01) throughout the study period. It was 13.0±0.87ºC greater at time 0, and this difference gradually decreased to 10.0±0.66ºC at the end of the study period (Figure 1).
Differences between Point B temperature and room temperature
The Point B temperature was 5.2±0.81ºC greater than room temperature at time 0 but this difference quickly decreased with no significant difference (1.6±0.43ºC) at the 3-minute sampling time (Figure 1).
Discussion
It is acknowledged that perioperative hypothermia is a common and serious complication of surgery under general anaesthesia, regional anaesthesia or even deep sedation [1,2]. It is thought to arise from interference with normal thermoregulation at two different levels: the hypothalamus, the primary centre for thermoregulatory control that maintains the patient's temperature within normal limits, and the spinal cord, where some integration and thermoregulation may also occur. Peripherally, hypothermia under anaesthesia may occur secondary to peripheral vasodilation, leading to heat loss from the patient through radiation, conduction, convection and evaporation [11]. Failure to counteract this anaesthesia-induced complication puts the patient at risk of several detrimental effects, including prolongation of the action of anaesthetic agents and muscle relaxants, which may delay post-anaesthetic recovery and cause post-anaesthetic thermal discomfort. Mild perioperative hypothermia significantly increases perioperative blood loss and augments allogeneic transfusion requirements [11]. A reduction of the patient's core temperature by only 1.9ºC triples the incidence of surgical wound infection following colon resection and increases the duration of hospitalization by 20% [12]. This may be caused by a combination of hypothermia-induced adverse effects on antibody- and cell-mediated immune defences and altered oxygen availability in peripheral wound tissues. Mild lowering of the patient's temperature may also triple the incidence of postoperative myocardial complications [1,2,3,12]. In view of this, even mild perioperative hypothermia should be avoided because it can cause significant perioperative complications and add to the cost of patient care. Responsibility for patient normothermia invariably falls on the anaesthetist.
There are many recommendations and suggestions for the maintenance and monitoring of temperature. Skin-surface warming for 20 minutes immediately before anaesthesia (pre-warming) minimizes initial redistribution hypothermia [13]. It is suggested that the patient's body temperature should be above 36°C before induction of anaesthesia and should be measured continuously throughout surgery, and active warming should be applied when anaesthesia time is >60 minutes [14]. Effective methods of active warming are forced-air warming or conductive warming, provided that enough skin surface is available [13]. Increasing the operating room temperature and warming irrigation fluids are adjunctive methods [15]. Several methods have already been used to warm the IV fluids given to patients under anaesthesia, with variable degrees of efficiency [7-10]; evaluating all of them is beyond the scope of this study.
Warming fluids in an electrically operated cabinet is common practice in many hospitals, including our own. In this study, we found that the measured mean temperature of fluid bags kept in a 40°C warming cabinet was lower than the set temperature, but remained above 30°C for at least 15 minutes when used in a room temperature of 21±0.23ºC. Compared with the environmental temperature, the mean fluid-bag temperature was 12.96±0.87ºC higher at time 0 minutes and, at the end of the 15-minute period outside the cabinet, was 10.03±0.66ºC higher than room temperature.
This suggests that, over the study period, the environment did not have a large effect on the temperature of the fluid in the bag while it sat outside the warming cabinet; the bag therefore continues to feel warm to the touch. Most anaesthetists would assume that this is useful, because the warm bag gives the impression that warm fluid is being infused into the patient. However, the passage of fluid down the giving set was associated with a marked reduction in temperature: clinically, the delivered temperatures were almost the same as the environment after 3 minutes of running the bag on the drip stand.
The effect of the IV giving line on the temperature of the fluid as it ran down the tubing was significant. In particular, we found that the mean temperature at the distal end of the giving set (Point B) dropped to only 1.79±0.20°C above room temperature after 15 minutes. This suggests that warming fluids in a cabinet is probably of minimal benefit in maintaining the patient's temperature over a surgery of normal duration. However, contradictory results have been reported in other studies. In 76 adults undergoing short-duration surgery, Andrzejwski and colleagues [16] found that pre-warmed fluid administered under pressure within 30 minutes of its removal from a warming cabinet, where it had been kept at 41°C for at least 8 hours, was as efficient at preventing perioperative hypothermia as fluid delivered through an in-line warming system.
Woolnough and colleagues assessed the effect of warming IV fluids in 75 women undergoing combined spinal-epidural anaesthesia for Caesarean section (CS) and concluded that pre-warmed fluid stored in a warming cabinet set at 45°C and given under pressure when needed is as efficient as, and cheaper than, a hot-line fluid warmer, although there was no significant difference in the incidence of chills between CS patients receiving warm IV fluids and those receiving room-temperature fluids [17]. Yokoyama and colleagues found that administration of 1 L of pre-warmed IV colloid followed by 1 L of pre-warmed crystalloid maintained core temperature during CS under spinal anaesthesia and was associated with higher Apgar scores and umbilical arterial pH; to maintain the temperature of the pre-warmed colloid and crystalloid, the fluid was given using an IV warmer coil kept in a water-bath warmer [18].
Goya and colleagues also studied the effect of warming IV fluids on the maintenance of core body temperature during CS under spinal anaesthesia in 64 patients randomized to two groups: one group received room-temperature fluid (22°C) and the other warm IV fluid (39°C). They found that the reduction in core temperature was smaller among the mothers who received warm fluids (p<0.01), although there was no significant difference in the incidence of chills between the two groups [19]. McCorroll and colleagues also failed to show that warming IV fluids prevented hypothermia among women undergoing elective CS [20].
However, in our study, measurement of the tested fluid bags showed that the fluid lost the gained heat within only 3 minutes, and after 15 minutes the temperature of the fluid at the patient end was almost equal to room temperature. This agrees with the review by John and colleagues of the performance and clinical application of different perioperative warming devices, in which they suggested that the main setback of pre-warmed fluids is the potential cooling effect that lower flow rates through long, thin tubing have on the delivered fluid temperature; their suitability in paediatric cases has been questioned [21]. In addition, we note that the studies supporting the efficiency of the warming cabinet technique for pre-warming fluids during surgery differed from our technique in that the fluids were given quickly, under pressure, within a short time.
Conclusion
The warming cabinet technique commonly used in theatre to warm fluids before they are given intravenously as part of anaesthetic management remains controversial. Our study suggests that the warming cabinet method is inefficient, as it does not deliver intravenous fluids to the patient as warm as is thought to be required. Moreover, the use of the warming cabinet probably creates a false sense of security for anaesthetists and potentially subjects the patient to the risk of unnoticed hypothermia, especially if the patient's temperature is not monitored during surgery. We therefore recommend that this method of fluid warming be revised and possibly replaced by another method that is more clinically effective and reliable in keeping the patient's temperature within normal limits. Further studies would be useful.
To read more about this  article: https://irispublishers.com/aphe/fulltext/quality-assessment-of-the-efficiency-of-the-warming.ID.000502.php
open-access-public-health · 4 years ago
Text
Annals of Public Health & Epidemiology - APHE
Prevalence of Hepatitis B Virus in Nigeria: Review Update
Authored by  Berinyuy B Eustace*
Abstract
Hepatitis B virus is among the common viral infectious agents of public health concern. An estimated two billion people are infected worldwide, with approximately 350 million suffering from the chronic form of the disease. Nigeria, a tropical country, has been documented as highly endemic for HBV infection, and about 75% of its population is likely to have been exposed to the virus at one time or another in their lives. Currently about 18 million Nigerians are infected, and prevalence rates of 4.3% to 23.3% have been reported from different parts of the country. This paper presents updated prevalence data on hepatitis B virus from different parts of Nigeria. The structure, mode of transmission, replication, pathogenesis, diagnosis, prevention and possible treatment of the disease are also discussed.
Keywords: Hepatitis B; Prevalence; Nigeria; Infections; Blood
Introduction
Hepatitis B virus is among the common viral infectious agents of public health importance globally. An estimated two billion people are infected worldwide, with approximately 350 million suffering from the chronic form of the disease [1,2]. In Africa, more than 50 million people are chronically infected, with a mortality risk of about 25%; carrier rates of the virus in sub-Saharan Africa range from 9% to 20% [3]. Hepatitis B virus (HBV) infection is a serious health problem worldwide. Once chronic infection is established, HBV may persist in the liver for a lifetime [4], which not only causes severe HBV-related sequelae such as cirrhosis and hepatocellular carcinoma but also constitutes the reservoir of the virus [5].
The spectrum of symptoms of HBV disease varies from subclinical hepatitis to icteric, hyperacute, acute and subacute hepatitis during the primo-infection phase, and from an asymptomatic carrier state to chronic hepatic cirrhosis and hepatocellular carcinoma during the chronic phase. In the acute phase, the incubation period is 1-6 months [6]. Anicteric hepatitis is the predominant form of expression of this disease; in this phase most patients are asymptomatic. Patients with anicteric hepatitis have a greater tendency to develop chronic hepatitis. Icteric hepatitis B is associated with a prodromal period, during which a serum sickness-like syndrome can occur [7].
The predominant routes of transmission are blood transfusion, blood products, body fluids (urine, semen, sweat, saliva and tears), use of contaminated needles, vertical transmission (mother to child through an infected birth canal) and sexual contact [8]. Neonates born to chronically infected mothers have a 70-90% risk of the infection progressing to the chronic phase [9]. The demand for safe blood and blood products in life-saving interventions is critical to avoiding incurable infectious diseases [3]. Blood transfusions carry the risk of transfusion-transmitted infections such as hepatitis B; in order to measure their severity, the World Health Organization has recommended pre-transfusion blood testing. The residual risk of infection from HBV is higher than that of hepatitis C virus (HCV) in non-endemic countries [10].
Nigeria, a tropical country, has been documented as highly endemic for HBV infection, and about 75% of its population is likely to have been exposed to the virus at one time or another in their lives [11]. Currently about 18 million Nigerians are infected [12]. Prevalence rates of 4.3% have been reported from Port Harcourt [13], 5.7% from Ilorin [14], 11.6% from Maiduguri [15] and 8.3% from Zaria [16], and a seroprevalence of 23.3% was reported among patients attending all clinics at the Aminu Kano Teaching Hospital (AKTH) [17].
Historical Background of HBV
The hepatitis B virus was discovered in 1965 when Blumberg and co-workers found the hepatitis B surface antigen, which was originally called the Australia antigen because it was found in serum from an Australian patient [18]. Dr Baruch Samuel Blumberg was awarded the 1976 Nobel Prize in Physiology or Medicine for this discovery. The virus was fully described in the 1970s [19]. In recent times, continuous discoveries about the virus around the world have improved our understanding of the complexity of this unusual virus. Although there has not been any substantial decrease in the overall prevalence of HBV, there is hope that the next generation will see a decline in both the worldwide carrier rate and the incidence of new HBV infections if current HBV vaccination efforts are intensified [20].
Transmission
HBV can be transmitted by the same routes as the human immunodeficiency virus (HIV), although HBV is hardier and 50-100 times more infectious than HIV (WHO, 2008). Unlike HIV, the virus can survive outside the body for at least 7 days, during which time it can still cause infection if it enters the body of a person who is not infected. Transmission of hepatitis B virus results from exposure to infectious blood or body fluids. Possible modes of transmission include, but are not limited to, unprotected sexual contact, blood transfusion, re-use of contaminated needles and syringes, and vertical transmission from mother to child during childbirth. Without intervention, a mother who is positive for HBsAg confers a 20% risk of passing the infection to her offspring at the time of birth (WHO, 2008).
This risk is as high as 90% if the mother is also positive for HBeAg. HBV infection can be transmitted between family members within households, possibly by contact of non-intact skin or mucous membranes with secretions or saliva containing HBV (Petersen et al., 1976). However, at least 30% of reported hepatitis B among adults cannot be associated with an identifiable risk factor (Shapiro, 1993). In many developed countries (e.g. those in Western Europe and North America), patterns of transmission differ from those described above; today, most infections in these countries are transmitted during young adulthood by sexual activity and injecting drug use. HBV is a major infectious occupational hazard for health workers (WHO, 2008). HBV is not spread by contaminated food or water and cannot be spread casually in the workplace. The incubation period is 90 days on average but can vary from about 30 to 180 days (AASLD, 2007). HBV may be detected 30 to 60 days after infection and persist for widely variable periods of time.
Stages of HBV Infection
Remarkable progress has been made in understanding the three main natural stages of HBV infection in hosts: acute infection, chronic asymptomatic and chronic symptomatic stages (AASLD, 2007). However, not all HBV-infected patients go through all three stages. The risk of developing liver-related complications, such as cirrhosis and hepatocellular carcinoma, increases as a patient progresses from the acute to the chronic stage of the infection. Indeed, most HBV infections resolve at the acute stage (~90%), with only a few progressing to the chronic stage.
Acute HBV infection
This is the initial stage of the infection, through which every HBV-infected patient passes, although not all patients progress beyond it. The early phase of this stage is characterized serologically by the presence of HBsAg, high serum HBV DNA, HBeAg, a normal serum aminotransferase (ALT) level, and minimal or insignificant inflammation on liver biopsy [21]. A later phase, also called the immunity phase, is marked by increased serum titres of anti-HBs IgG (HBsAb) and anti-HBc IgG, lowered levels or disappearance of HBsAg and HBV DNA, and normal liver histology. This is true for those who recover fully from the infection after attaining full and permanent immunity through exposure. The duration of either phase differs among patients but generally lasts between 5 and 8 months (AASLD, 2007). However, patients who fail to mobilize an adequate immune response to combat the infection live with the disease for their entire lifetime; in this case, the disease is said to have become chronic. Physical signs and symptoms such as jaundice, fever, dark urine and nausea may occur, although they last only a short time and resolve following recovery. Generally, transition from the acute stage to the chronic stage depends on several factors, including age, gender, viral genotype and host immune competence.
Chronic HBV infection
This occurs as a progression from the early phase of acute HBV infection, due to the host's failure to mount the immune response necessary to ensure total viral clearance and resolution of the disease. It is serologically marked by a relative rise in serum anti-HBc IgG, disappearance or lower titres of anti-HBs IgG, and either normal findings or significant liver damage on ultrasonography (WHO, 2008). This stage of the disease may also be characterized by normal or elevated serum aminotransferase levels (aspartate aminotransferase (AST) and alanine aminotransferase (ALT)) and other markers of hepatic integrity (AASLD, 2007).
HBeAg may be present serologically in all stages of the disease. The presence of this antigen together with an elevated viral load (HBV DNA > 10³ copies/ml) and higher ALT (> 60 IU/l) is a strong indication of viral activity, replication and infectivity (WHO, 2008); patients with such findings are placed on antiviral therapy. A key event in the natural history of HBeAg-positive chronic hepatitis B (CHB) patients is HBeAg seroconversion (Sharma et al., 2005). Seroconversion of HBeAg to HBeAb is believed to be accompanied by cessation of HBV replication and remission of liver disease, and several studies have shown that seroconversion with a marked reduction in HBV replication is associated with biochemical and histological remission of inflammatory activity in the majority of patients [22].
Prevalence of Hepatitis B Virus in Nigeria
Prevalence of hepatitis B virus among blood donors
Mosley et al. suggested that anti-HBc screening of blood donations might prevent HBV transmission from HBsAg-negative blood donors who are positive for anti-HBc [23]. The prevalence of occult hepatitis B (OHB) varies significantly between geographical regions as well as among the various patient populations tested. A recent evaluation of hepatitis B virus seropositivity among 300 voluntary blood donors at a centralized blood service centre in Nigeria [24] revealed that thirty-three (13.8%) of first-time donors were positive for hepatitis B markers, while all retained donors were seronegative; there were 32 (13.3%) seropositive reactions to HBsAg and 3 (1.3%) to HBeAg. In another study in Jos, Uneke and others reported 14.3% HBsAg seropositivity among blood donors, against a higher 25.9% among patients infected with HIV; they also noted a higher infection rate of 44% in donors aged 51-60 years and a frequency of 28% within the 31-40 years age bracket [25]. A study of the seroprevalence of hepatitis B e antigen (HBeAg) and core antibodies among HBsAg-positive blood donors at a tertiary centre in Nigeria [26] found a seroprevalence of 8.2% (22 of 267) for HBeAg; 4 of 267 (1.5%) were indeterminate, while 241 (90.3%) tested negative. Only 27 of 267 donors (10.1%) tested positive for IgM anti-HBcore, 234 (87.6%) tested negative, and 6 (2.2%) were indeterminate. A higher percentage, 60.7% (162 of 267), tested positive for IgG anti-HBcore, while 39.3% (105 of 267) tested negative. The authors concluded that there is a low seroprevalence of HBeAg-positive chronic hepatitis and relatively high IgG anti-HBcore and IgM anti-HBcore rates in South West Nigeria [26].
In another study among blood donors in North Central Nigeria, at the Bishop Murray Medical Centre in Makurdi, age-group prevalences of HBV of 11.90%, 13.05% and 6.53% were reported within the age ranges of 18-22, 23-27 and 28-32 years respectively [27]. Jeremiah and others reported a prevalence of 8.6% HBsAg in Maiduguri, Northeast Nigeria, with anti-HBc IgM in 18.4%, suggesting that donors negative for HBsAg are not necessarily uninfected with HBV, and recommended mandatory screening of donor blood for anti-HBc [28].
In Southwest Nigeria, Salawu and others reported the occurrence of other HBV markers in HBsAg-negative blood donors and recommended the inclusion of routine testing for markers such as antibody to hepatitis B core antigen in donor blood before transfusion [29]. Japhet and co-workers found an overall prevalence of transfusion-transmissible infections of 32.6% in their study, with 19.6% HBsAg positivity, 13.0% anti-HBc reactivity and 8.9% hepatitis B envelope antigen (HBeAg) detection, the last of which marks infectivity of the virus and appears in blood after HBsAg [30].
In Benin City, Nigeria, Mutimer and others reported an overall 14% prevalence of transfusion-transmissible infections and concluded that routine screening of blood may not reduce the incidence of HBV infection (Mutimer et al., 1994). In North Eastern Nigeria, Harry and colleagues reported a high prevalence of 22.0% HBsAg and 6.64% HBeAg among blood donors; among the pregnant-women subgroup of their study, only 11.6% and 1.39% were reactive for HBsAg and HBeAg respectively [15].
Prevalence of hepatitis B virus among pregnant women
Adabara et al. (2012) evaluated the prevalence of hepatitis B virus among women attending the antenatal clinic at the General Hospital, Minna, Niger State; their results revealed that thirteen (6.5%) of the 200 subjects investigated were positive for hepatitis B infection. By age, the distribution of HBV infection showed that the 20-29 years group had the highest rate of infection (10.3%), followed in descending order by 40-49 (4.5%), 30-39 (4.2%) and 10-19 years (0.0%). The authors linked the prevalence of the virus to the low level of awareness and poor standard of living observed among the subjects. A cross-sectional study over a 3-month period (August-October 2009) [31] examined the prevalence and pattern of hepatitis B among 480 women attending antenatal clinics in Nnewi, Nigeria, selected by simple random sampling using computer-generated random numbers. Of these, 40 tested positive for HBsAg, accounting for 8.3% of the sample population. The age of the subjects varied from 14 to 45 years (mean age 24.3 years) and the mean parity was 2.18. The HIV/HBV co-infection rate was 4.2%.
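As an illustration of how such prevalence figures are computed, the sketch below derives the point prevalence and a 95% confidence interval for the Nnewi antenatal sample (40 positives out of 480). The confidence interval is not reported in the original study and is added here only to show the calculation.

```python
# Point prevalence and 95% Wilson confidence interval for HBsAg positivity
# among 480 antenatal attendees in Nnewi (40 positive), as reported above.
# The confidence interval itself is illustrative; it is not given in the study.
from statsmodels.stats.proportion import proportion_confint

positives, total = 40, 480
prevalence = positives / total
low, high = proportion_confint(positives, total, alpha=0.05, method="wilson")

print(f"Prevalence: {prevalence:.1%}  (95% CI {low:.1%} to {high:.1%})")
```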
Agarry and Lekwot also evaluated the prevalence of hepatitis B surface antigen (HBsAg) and hepatitis C virus (HCV) antibody among 200 pregnant women attending the antenatal clinic in Gwagwalada, Abuja. Of the 200 blood samples tested, 19 (9.5%) were positive for hepatitis B and 1 (0.5%) for hepatitis C; no mixed infection with both viruses was observed [32]. A study of the seroprevalence of hepatitis B virus (HBsAg) antibodies in pregnant women in Akure, Ondo State [33], found that of eight hundred and sixty pregnant women, only forty (4.7%) were positive while eight hundred and twenty (95.3%) were negative, indicating an overall prevalence of 4.7%.
The prevalence of hepatitis B virus (HBV) carriage and infectivity status among three hundred (300) pregnant women in Makurdi was also evaluated [34]. Maternal HBV infectivity status was determined by testing all HBsAg-positive samples for the presence of hepatitis B e antigen (HBeAg). Overall, 33 (11%) of the pregnant women were identified as carriers of HBV, and 10 of these 33 (30.3%) tested positive for HBeAg. Hence, 3.3% of the entire study population was found to have high viral replication and a high risk of transmitting HBV to their neonates.
Prevalence of hepatitis B virus co-infections with other disease
Recently, Ejeliogu [35] evaluated the prevalence of hepatitis B virus co-infection in Nigerian children (2 months to 15 years) infected with human immunodeficiency virus. Of 452 children screened, three hundred and ninety-four (87.2%) were mono-infected with HIV while 58 (12.8%) were co-infected with HIV and HBV (HIV/HBV). Egah et al., studying seropositivity to hepatitis B, C and the human immunodeficiency viruses among clergymen in training at a seminary in Jos, found 15.5% hepatitis B surface antigen positivity among their subjects, who were a low-risk blood donor group; they also documented a crude transfusion-transmissible infection prevalence of 22.1% and an HIV/HBV co-infection rate of 0.4% [36].
In 2011, Omalu et al. evaluated the seroprevalence of malaria and hepatitis B (HBsAg), with associated risk factors, among pregnant women attending the antenatal clinic at General Hospital Minna, North-Central Nigeria. Of the 269 pregnant women screened, 216 (80.30%) were positive for malaria, 22 (8.18%) for hepatitis B and 21 (7.81%) for co-infection with malaria and hepatitis B, while 10 were negative; among 100 non-pregnant women screened, the corresponding figures were 51 (51.00%), 8 (8.00%) and 6 (6.00%) for malaria, hepatitis B and co-infection respectively [37]. Another study [38] found that of 1,535 sampled individuals analysed for hepatitis B virus (HBV), 1,319 (85.9%) showed serological evidence of exposure to HBV infection, some through natural infection (22.7%) and others (13.0%) through vaccination; 12% of those exposed were inferred to be currently infected and 91.2% chronically infected. Hepatitis delta virus (HDV) antigen was also detected in 2.7% of the HBsAg-positive individuals and was encountered more often (6.7%) in those with acute hepatitis than in those with chronic disease.
Jibrin & Mustapha [39] screened two hundred consecutively recruited HIV-infected individuals, comprising 97 males and 103 females, for HBsAg using ELISA. Fifty-three of the patients tested positive for HBsAg, giving an overall prevalence of 26.5%, which was significantly higher (p < 0.001) than the 10.4% recorded among non-HIV-infected individuals. The co-infection rate in males (24.7%) did not differ significantly from that in females (28.2%). Co-infection was highest in the 40-49 years age group (41.6%), while no case of co-infection was recorded in those aged ≤ 19 years. Among the occupational groups, businessmen had the highest co-infection rate (44%), followed by long-distance drivers (39.5%). By marital status, divorcees/widows had the highest proportion of co-infection (53%), followed by the unmarried (32.5%) and the married (21.6%). The authors confirmed the high prevalence of HBV co-infection in HIV-infected patients compared with the non-HIV-infected population; there is therefore a need to screen all HIV-infected patients for HBV infection.
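Comparisons such as the 26.5% versus 10.4% prevalence above are typically assessed with a chi-square test on the 2x2 table of positives and negatives in each group. The sketch below is illustrative only: the HIV-infected counts (53 of 200) come from the text, but the excerpt does not report the size of the non-HIV comparison group, so the value of 500 used here is a hypothetical assumption; with these assumed counts the test gives a p-value well below 0.001, consistent with the reported result.

```python
# Illustrative sketch of a two-group prevalence comparison using a chi-square test.
# 53/200 HIV-infected (from the text); the non-HIV group size of 500 is HYPOTHETICAL,
# since only the 10.4% rate, not the denominator, is reported in the excerpt.
from scipy.stats import chi2_contingency

hiv_pos, hiv_n = 53, 200          # HBsAg-positive / total, HIV-infected group (from text)
ctrl_n = 500                      # assumed size of non-HIV-infected group (hypothetical)
ctrl_pos = round(0.104 * ctrl_n)  # 10.4% prevalence reported in the text -> 52

table = [
    [hiv_pos, hiv_n - hiv_pos],    # HIV-infected: positive, negative
    [ctrl_pos, ctrl_n - ctrl_pos], # non-HIV-infected: positive, negative
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
```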
According to Taiwo et al. [40], among patients at Lagos State University Teaching Hospital (LASUTH), dual presence of HBsAg and anti-HCV was observed in 4 (3.9%) of HIV-infected patients, while 29 (28.4%) and 15 (14.7%) were repeatedly reactive for HBsAg and anti-HCV respectively. HIV-negative blood donor controls had HBsAg and anti-HCV prevalences of 6.0% (22) and 0.8% (3) respectively. The prevalence of hepatitis co-infection was higher among the male study patients, 16 (50%), than the females, 32 (45.7%). Salawu et al. [41] studied the prevalence and trends of HBsAg, anti-HCV, anti-HIV and VDRL in blood donors over three and a half years in a tertiary health care facility in Ile-Ife, Nigeria. The screening records of all blood donors from January 2006 to June 2009 were evaluated with respect to the screening outcome for HBsAg, anti-HIV, anti-HCV and VDRL. Of the 14,500 donors bled, 7.50% were positive for HBsAg, 0.96% for anti-HIV, 0.86% for anti-HCV and 2.61% for VDRL. There was a gradual decline in HBsAg prevalence from 9.20% in 2006 to 8.37% in 2007 and 6.25% in 2008, with a rise to 6.32% in the first half of 2009. Similarly, HIV prevalence declined from 1.44% in 2006 to 0.94% in 2007 and 0.66% in 2008 but rose to 0.96% in the first half of 2009. HCV prevalence fluctuated throughout the period under study. The prevalence of syphilis declined from 2.93% in 2008 to 1.92% in 2009.
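Yearly trend figures of the kind reported by Salawu et al. are obtained by dividing the number of reactive donors by the number of donors bled in each period. The sketch below shows one way such a tabulation might be done; the per-year donor and positive counts are hypothetical placeholders chosen only so that the computed percentages roughly match the reported HBsAg trend, since the source excerpt gives only the overall total of 14,500 donors and the yearly percentages.

```python
# A minimal sketch of tabulating yearly marker prevalence from donor screening records,
# in the spirit of the Salawu et al. trend analysis. Per-year counts are HYPOTHETICAL;
# only the ~14,500 total donors and the reported percentages appear in the text.
import pandas as pd

records = pd.DataFrame({
    "year":           [2006, 2007, 2008, 2009],
    "donors":         [4000, 4200, 4300, 2000],  # assumed yearly totals (sum = 14,500)
    "hbsag_positive": [368,  352,  269,  126],   # assumed positive counts for illustration
})

records["hbsag_prevalence_%"] = (
    100 * records["hbsag_positive"] / records["donors"]
).round(2)

print(records[["year", "hbsag_prevalence_%"]])
```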
Bola et al. (2016) evaluated the sero-prevalence of HCV in HIV sero-positive children in Lagos, Nigeria. A total of 132 HIV sero-positive children aged 1-15 years were serologically assayed for HCV. Of the 132 HIV sero-positive samples, 6 were positive for HCV, a prevalence of 4.54%. Zero prevalence was recorded in the 1-3 years age group, while a sero-prevalence of 20% was found in the 12-15 years age group.
Similarly, Hamza et al. (2013) evaluated the prevalence of HBV among HIV-infected patients in Kano State and found that 54 of 440 were HBV-HIV co-infected; another study [17] of HIV-HBV co-infection in Kano State found that 211 of 300 were co-infected. Udeze et al. (2015) evaluated the prevalence of hepatitis B and C infections among HIV-infected patients accessing healthcare at the HIV and AIDS section of the University of Ilorin Teaching Hospital, Ilorin, Nigeria. Of the 356 HIV-infected participants, 114 (32.0%) and 14 (3.9%) were positive for HBsAg and anti-HCV antibody respectively.
Prevalence of hepatitis B virus among healthy individuals
James et al. (2011) carried out a study to assess the seroprevalence of hepatitis B surface antigen (HBsAg) and associated risk factors among students of a secondary school in Jagindi Tasha, Kaduna State, Nigeria. Of the one hundred and ninety (190) apparently healthy students screened for HBsAg, 35 (18.4%) were sero-positive. Subjects aged 13-15 years recorded 6.8% positivity, and male subjects had 25.5% positivity compared with 10.9% for females. Exposure to risk factors such as blood transfusion was 32.0% among male subjects compared with 30.0% among females.
Moses et al. (2010) evaluated the prevalence of hepatitis B virus infection in apparently healthy urban Nigerians. Of the 1,891 participants, 957 (50.6%) were males and 934 (49.4%) were females. Overall, 114 (6.0%) were positive, of whom 71 (7.4%) were males and 43 (4.6%) females. Those aged 21-30 years had the highest infection rate, and males were more likely to be infected with the virus than females. According to Gambo et al. [42], among 182 Fulani nomads in Toro, North-Eastern Nigeria, the gender-specific seroprevalence of HBsAg was in a ratio of about 2:1, male to female. The infection rate was higher in those aged 25-29 years (8.2%), followed by the 30-37 years age group (6.0%).
According to Olokoba et al. (2009), five hundred and ninety-five consecutively recruited voluntary blood donors in Yola, Nigeria, were screened for hepatitis B and hepatitis C virus infections. Only 14 donors (all male) each (2.4%) were positive for HBsAg and anti-HCV, and the authors concluded that the seroprevalence of hepatitis B and C virus infection is low among voluntary blood donors in Yola, Nigeria. Another study [43] evaluated the prevalence of hepatitis B surface antigen among newly admitted students of the University of Jos, Nigeria. Of the 300 newly admitted students screened, 50 (16.7%) were seropositive for HBsAg. The prevalence of HBsAg was higher in males, 34 (11.33%), than in females, 16 (5.33%). Age-specific prevalence was significantly higher in the 25-29 years bracket, with 16 (28.57%), while the lowest was found in the 15-19 years bracket, with 12 (17.39%).
In a study conducted by Ndako et al. [44], a total of 188 health personnel, comprising nurses, doctors, medical laboratory scientists, technicians/assistants, pharmacists and ward assistants in Uyo Metropolis, were screened for HBV surface antigen (HBsAg). Of the one hundred and eighty-eight (188) respondents screened, thirty-two (32), representing 17.0%, were seropositive; female subjects recorded 17.3% prevalence compared with 16.7% for male subjects. Frank et al. (2004) carried out an epidemiological study of HBV infection among 124 unvaccinated Dutch missionaries and family members who lived in a rural area of Nigeria. Antibodies to hepatitis B core antigen were found in 5 (9.8%) of 51 adults (incidence rate, 1.7 per 1000 person-months at risk [PMAR]) and 9 (12.3%) of 73 children (incidence rate, 2.8 per 1000 PMAR).
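Unlike the simple prevalence figures elsewhere in this review, the Frank et al. rates are incidence rates: new cases divided by the follow-up time accumulated by the cohort, scaled to 1000 person-months at risk. The sketch below illustrates that calculation; the case counts are from the text, but the person-month denominators are assumed values back-calculated only so that the example roughly reproduces the reported 1.7 and 2.8 per 1000 PMAR.

```python
# Minimal sketch of the incidence-rate calculation used by Frank et al.:
# cases per 1000 person-months at risk (PMAR). Case counts (5 adults, 9 children)
# are from the text; the person-month totals are ASSUMED for illustration.

def incidence_per_1000_pmar(cases: int, person_months: float) -> float:
    """Incidence rate per 1000 person-months at risk."""
    return round(1000.0 * cases / person_months, 1)

print(incidence_per_1000_pmar(cases=5, person_months=2940))   # adults   -> ~1.7
print(incidence_per_1000_pmar(cases=9, person_months=3210))   # children -> ~2.8
```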
Prevention and Treatment of Hepatitis B
Even though HBV has become a major source of health concern worldwide, we should also be reminded of the good news that it is the only STD that can be prevented by vaccination (CDC, 2005). The global prevention of HBV has become one of the top priorities of major political actors and decision-makers in recent years. The disease can be prevented with a safe and effective vaccine, available since 1982, through the funding and implementation of hepatitis B immunization programs. Before the advent of the vaccine, measures for HBV prevention were geared towards avoiding unsafe blood exposure or blocking transmission. Unsafe blood transfusion has been a major driver of HBV transmission globally [45].
The enactment of laws governing the donation and management of blood in blood banks across the world has aggressively curtailed this channel of HBV transmission. Notwithstanding, current research shows that blood transfusion is regaining its position as one of the major risk factors for HBV transmission globally, a finding attributed to the presence of occult HBV infection (OHBVI) among blood donors [46]. It is also worth mentioning that the global adoption of auto-disposable syringes (ADS) has considerably reduced the incidence of HBV infections due to unsafe injections. Also, despite the extensive use of invasive medical procedures, iatrogenic HBV infections are no longer frequent. There is, however, speculation that dental care procedures, which can cause oral mucous membrane injuries, could become a major route of HBV transmission if preventive steps are not taken [47].
HBV has no permanent cure; therefore, the surest antidote to the global epidemic is prevention. There is no universal agreement on drugs for the treatment of HBV, although two therapeutic agents, interferon alpha (IFN-α) and lamivudine, are currently used in many countries to treat the disease. Interferon-alpha is a potent cytokine with antiviral and immunomodulating actions produced in response to viral infection [48]. Treatment of the disease is therefore aimed at suppressing viral replication, reducing the risk of progression to advanced liver disease or liver inflammation, and preventing complications such as liver failure or liver cancer [49-55]. Chronic hepatitis B is therefore managed rather than cured. General management strategies for HBV recommended by medical experts include:
1. Avoidance of:
• Heavy alcohol consumption.
• Unprotected sexual intercourse with partners who are not vaccinated.
• Sharing of needles or other items that potentially contain blood such as shavers or toothbrushes.
• Donation of blood or organs.
2. Screening of family members and sexual partners for HBV infection and vaccination of those who are sero-negative [56-65].
3. Patient education and long-term follow-up, with regular testing of liver biochemistry and surveillance for hepatocellular carcinoma in high-risk groups [20, 66-70].
Conclusion
In conclusion, this review sheds light on many important aspects of HBV epidemiology in Nigeria: the prevalence of hepatitis B virus is high, varies with geographical region and gender, and is higher in pregnant women. Reduction in hepatitis B virus infection could be achieved through public enlightenment campaigns and mass immunization of children and adults at risk, while antiviral drugs and immunostimulatory therapy should be provided for those already infected.
To read more about this article: https://irispublishers.com/aphe/fulltext/prevalence-of-hepatitis-b-virus-in-nigeria.ID.000501.php