Geometric Simulation for 3D-Printed Soft Robots - Juniper Publishers
Introduction
Robots fabricated from soft materials provide higher flexibility and thus better safety when interacting with low-stiffness natural objects such as food and human beings. With the growth of three-dimensional (3D) printing, it is even possible to directly fabricate soft robots [1,2] with complex structures and multiple materials, realizing highly dexterous tasks such as human-interactive grasping and confined-area detection [3]. However, with such increased degrees of freedom (DoF), the design of soft robots becomes a very difficult task, which can be made tractable by integrating simulation into the design phase. However, the shape deformation arises from many different and complex factors, including the manufacturing process, material properties, and actuation. Especially given the limited understanding of the layer-based additive manufacturing (AM) process, it is challenging to formulate a complete mathematical model for the simulation.
SOFA [4] is one of the most widely used frameworks for physical simulation. It has also been applied in the simulation of soft robots, where it supports interactive deformation [5]. However, it may suffer from problems of numerical accuracy, particularly when there is large deformation. Unfortunately, one benefit of soft robots is precisely their capability of adapting to highly curved contacts by large deformation, which needs to be precisely simulated for many applications. There is another type of simulation methodology that simplifies the simulation model of deformation to a geometric optimization problem [6]. It was originally developed in the computer graphics area for visualization, but it has been proved to work superbly in physical simulation, such as self-transforming structures [7], and its computational efficiency is remarkable [8]. This geometric simulation is particularly suitable for soft robots [9] (Figure 1), because the actuation of soft robots is commonly defined by geometric variations (e.g., cable shortening and pneumatic expansion), and it is actually indirect to first convert these variations into forces and then apply them in a conventional deformation simulation. It has been shown that the geometric simulation gives better convergence and accuracy than the conventional methods. Therefore, the aim of this review is to share this technique with a broader audience in the robotics community, and to discuss the potential, capabilities, and future work of this technology.
Geometric Simulation
The common way of Finite Element Analysis (FEA) is to apply Hooke's law to each element and then assemble the equations to compute the deformation under the applied force:

$$F = K\Phi \qquad (1)$$

where $F$ is the global nodal force vector, $K$ is the global stiffness matrix, and $\Phi$ is the global nodal displacement vector. In the geometric simulation, the formulation is developed by shape projection on the elements in terms of point positions:

$$\min_{\mathbf{V}}\;\sum_{i=1}^{e} w_i\,\| C_i \mathbf{V} - P_i \|_F^2 \qquad (2)$$
where $\mathbf{V} \in \mathbb{R}^{n \times 3}$ stacks all the point positions of the $n$ vertices, $C_i \in \mathbb{R}^{e \times n}$ is the centering matrix for the $i$-th element among the $e$ elements, $P_i \in \mathbb{R}^{e \times 3}$ contains the variables defining the shape projection for the element, and $w_i$ is the weight of the element, which is commonly set to its volume. This minimization can be solved by setting the derivative to zero, which yields a sparse symmetric positive definite system:

$$\Big(\sum_i w_i\, C_i^{T} C_i\Big)\,\mathbf{V} \;=\; \sum_i w_i\, C_i^{T} P_i \qquad (3)$$
This geometric optimization is formulated to minimize the elastic energy with reference to shape variations, similar to the physical phenomenon during deformation. Comparing Eq. (3) with Eq. (1), they have the same form, with

$$K = \sum_i w_i\, C_i^{T} C_i, \qquad F = \sum_i w_i\, C_i^{T} P_i, \qquad \Phi = \mathbf{V}.$$
Therefore, the geometric simulation actually has the same benefits as FEA, but it should be noted that the force vector $F$ here is defined purely by the shape projections. As a result, this is a direct approach that takes the geometric actuation as input and computes the deformed shape of soft robots by numerical optimization using a geometry-based algorithm. To complete the formulation, the shape projections should be carefully defined, both to model the different actuations in the simulation and to model the material properties geometrically. In the state-of-the-art work [9], the geometric constraints of actuations are modeled as a type of element, e.g., aligning a cable with the edges of elements and shortening those edges, or scaling the size of elements to represent volume expansion in pneumatic actuation. In this way, the actuation can be integrated directly into the optimization without additional computational burden. To model the material properties geometrically in this framework, a calibration step is performed to learn the relationship between the material properties and the shape parameters, between hard and soft assignments. Whether an element should be rigid or preserve its volume is determined by the shape parameters and modeled by the shape projection. It has been shown that the calibration method can be used to simulate the deformation of objects with two materials. Unlike constrained nonlinear optimization, the geometric optimization can converge in a few iterations, thanks to the shape projection operator.
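To make the local-global structure of this optimization concrete, the following is a minimal Python sketch of such a solver (an illustration, not the authors' implementation): the element matrices C_i, the weights w_i, and the per-element projection routine `project` are assumed inputs, with `project` encoding the actuation and material constraints (e.g., shortened cable edges, expanded pneumatic cells).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def geometric_simulation(V0, C_list, w_list, project, n_iter=20):
    """Local-global solver for min_V sum_i w_i ||C_i V - P_i||_F^2 (a sketch).

    V0      : (n, 3) array of initial vertex positions
    C_list  : per-element sparse centering/selection matrices C_i
    w_list  : per-element weights w_i (e.g., element volumes)
    project : callable mapping C_i V to the projected element shape P_i
    """
    # The global-step matrix K = sum_i w_i C_i^T C_i is constant, so it is
    # assembled and prefactorized only once (the sparse SPD system of Eq. (3)).
    K = sum(w * (C.T @ C) for C, w in zip(C_list, w_list))
    solve = spla.factorized(sp.csc_matrix(K))
    V = V0.copy()
    for _ in range(n_iter):
        # Local step: project each element onto its admissible shape set; this
        # is where actuation (cable shortening, pneumatic expansion) and
        # material behavior (rigidity, volume preservation) enter.
        P_list = [project(C @ V) for C in C_list]
        # Global step: solve the prefactorized SPD system per coordinate.
        rhs = sum(w * (C.T @ P) for C, w, P in zip(C_list, w_list, P_list))
        V = np.column_stack([solve(rhs[:, k]) for k in range(3)])
    return V
```

Since K does not change between iterations, the factorization is reused in every global step, which is one source of the fast convergence noted above.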
Light-Weight Secure IoT Key Generator and Management - Juniper Publishers
Abstract
Security is a critical element of IoT deployment that affects the adoption rate of IoT applications. This paper presents a Light-Weight Secure IoT Key Generator and Management Solution (LKGM) for industry automation and applications. Our solution uses minimal computing and memory resources, can be installed on half-credit-card-size embedded systems, and enhances the security of end-to-end communications for IoT nodes. A frequently changed, randomly generated passphrase is used to authenticate each IoT node, which is embedded with an encrypted unique authentication key. Field test results are presented for an advanced manufacturing application that is activated only when two authenticated IoT nodes are within the vicinity.
Keywords: Authentication; Authority; Secure key; IoT; Security; Industry automation
Introduction
The Internet of Things (IoT) is a network of physical objects with unique identifiers capable of producing and transmitting data across a network seamlessly. An IoT system refers to a loosely coupled, decentralized system of devices augmented with sensing, processing, and network capabilities [1,2]. IoT is projected to be one of the fastest growing technology segments in the next 3 to 5 years [3]. IoT applications are being developed and deployed at an exponentially increasing rate in many smart-city initiatives around the world. The Gartner Group has estimated that there will be 25 billion connected IoT devices by 2020 and that IoT services will constitute a total spending of $263 billion. Unfortunately, this growth in connected devices brings increased security risks [4]. As indicated by Frost & Sullivan [5], Miorandi et al., and Weber [6,7], security is the major hindrance to the wide-scale adoption of IoT. In addition, the increasing use of multi-vendor IoT nodes, which often have only minimal security protection, results in more complex security scenarios and threats beyond those of the current Internet. Constant sharing of information between "things" and users can occur without proper authentication and authorization. Currently, there are no trustworthy platforms that provide access control and personalized security policy based on users' needs and contexts across different types of "things". The "things" in any IoT network are often unattended and are therefore vulnerable to attacks. Moreover, most IoT communications are wireless, which makes eavesdropping easy [6,8]. The future widespread adoption of IoT will extend information security risks far more widely than the Internet has to date [9].
In an ad-hoc IoT network, where IoT nodes are localized and self-organized, network infrastructure is not required. Securing the IoT nodes that operate in such ad-hoc peer-to-peer networks is increasingly becoming an important and critical challenge as many applications in such networks become commercially viable. Since an ad-hoc IoT network has a frequently changing network topology, and the IoT nodes have limited processor power, memory size, and battery power, a centralized security authentication server/node is impractical to implement.
Methods
In our applied research work, "KeyThings" was developed as part of the project titled "Collaborative Cross-Layer Secure IoT Gateways", funded by the Singapore NRF-TRD. Our solution consists of two main systems, namely the Security Key Generation System (SKG) and the Security Key Management System (SKM). The objective of our project is to allow an IoT application (e.g. a web service) to be activated only when a pre-determined number of authenticated IoT nodes are within the vicinity. This enhances the security of the IoT application by authenticating the hardware (i.e. the IoT nodes) instead of relying only on the usual usernames and passwords. The authentication process runs in the system's background without the need for human intervention, which is critical in some operational environments (e.g. manufacturing, production, remote sites) where not all staff are given access to the sensors' readings for security reasons. The staff are categorized into "non-authorized", "operator" and "supervisor".
Below are the features of our Solution
a. "Non-authorized" personnel who are not issued an authenticated IoT node will not have access to the sensors' readings.
b. An authorized "operator" who has an authenticated IoT node is able to view the sensors' readings only when the "operator" is in the vicinity.
c. An authorized "supervisor" with an authenticated IoT node carrying higher access rights can view the sensors' readings and the summary report. If the "supervisor" leaves the vicinity, the summary report is no longer available.
d. All authentications are done in the solution's background without the need for human intervention.
Solution Setup
Equipment (Figure 1)
A. The setup consists of the following equipment:
a. Authentication Server
b. Client device 1
c. Client device 2
d. Application Server
e. Tablet
Authentication server (KeyThings-Server): The authentication server is the "brain" of the security key management. It has the following responsibilities:
A. Access point: Serves as the access point to the entire system.
B. Generate random passphrase periodically
i. If there is no authenticated device, the passphrase will remain the same.
ii. If there are one or more authenticated devices, a new random passphrase will be generated at the end of each time interval (after every 5 MQTT broadcasts).
C. MQTT Server: It will broadcast the generated passphrase via MQTT to all subscribed KeyThings-Clients.
i. Once every 2 seconds.
ii. MQTT topic: authentication/challenge
D. Web Server via REST API:
i. For KeyThings-Clients to submit their encrypted passphrase.
ii. For the application server to query the number of authenticated devices.
E. Authentication: The server stores the encrypted credentials and MD5 hashes of the KeyThings-Clients that were generated by the Security Key Generation System.
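As an illustration of the passphrase and broadcast responsibilities above, a minimal server-side loop might look like the following sketch (using the paho-mqtt client; the broker address, the rotation bookkeeping, and the REST side are simplifying assumptions, not the published implementation):

```python
import secrets
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost")              # assumed broker on the KeyThings-Server

passphrase = secrets.token_urlsafe(16)   # current random challenge passphrase
broadcasts = 0
authenticated_devices = set()            # maintained by the REST handler (not shown)

while True:
    # Broadcast the current passphrase on the named topic every 2 seconds.
    client.publish("authentication/challenge", passphrase)
    broadcasts += 1
    # Rotate the passphrase after every 5 broadcasts, but only while
    # at least one device is authenticated (rules i and ii above).
    if broadcasts >= 5 and authenticated_devices:
        passphrase = secrets.token_urlsafe(16)
        broadcasts = 0
    time.sleep(2)
```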
Client devices (KeyThings-Client): Each client device contains the unique security key that is used for authentication to gain access to different web services. The key must be generated by the Security Key Generation System. The device has the following responsibilities (a client-side sketch follows the list below):
A) MQTT client. Registers and listens to the broadcasted passphrase.
B) Encryption. Encrypts the passphrase that was received via MQTT.
a. If the received passphrase is the same as the previous passphrase, the device will just ignore the passphrase and do nothing.
b. If the received passphrase is different from the previous passphrase, then the passphrase will be encrypted.
C) HTTP Request / Response. Sends the encrypted passphrase to the authentication server (KeyThings-Server) once encryption has been completed.
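The client-side counterpart of responsibilities A)-C) might look like the following sketch (the broker address, REST endpoint, and Fernet-based key handling are illustrative assumptions, not the published implementation):

```python
import paho.mqtt.client as mqtt
import requests
from cryptography.fernet import Fernet

BROKER = "192.168.0.1"                               # assumed KeyThings-Server address
SUBMIT_URL = "http://192.168.0.1/api/authenticate"   # assumed REST endpoint

cipher = Fernet(open("device.key", "rb").read())     # unique per-device secret key
last_passphrase = None

def on_message(client, userdata, msg):
    global last_passphrase
    passphrase = msg.payload.decode()
    if passphrase == last_passphrase:
        return                                        # rule a: unchanged, do nothing
    last_passphrase = passphrase
    token = cipher.encrypt(passphrase.encode())       # rule b: encrypt new passphrase
    requests.post(SUBMIT_URL, data={"token": token})  # rule C: submit to the server

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe("authentication/challenge")          # topic named in the text
client.loop_forever()
```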
Application server: The application server hosts the production webpage (i.e. the machine readings and summary report). It currently runs on a Raspberry Pi, but it can be hosted in any environment (e.g. Windows or Linux) that has network connectivity to the access point. The application server has the following responsibility:
a) HTTP Request / Response: Hosts the webpage that can be accessed via the tablet.
Tablet: The tablet is used to view the webpage that contains the manufacturing data (machine readings and summary report) from the application server.
Result
Figure 2 shows what is displayed when different numbers of devices have been authenticated.
Discussion
The test was conducted successfully, with results indicating that a light-weight security key generation and authentication method can be easily implemented in a distributed manner for a self-organizing network to enhance IoT node and service-level security in an industry automation environment. The method and the solution can be applied to provide features such as multi-level security for different stakeholders in an advanced manufacturing environment, multi-factor security keys, user-definable security-based services and policies, etc. The solution can easily be scaled and adapted to suit various industry needs and expectations in enhancing the security of IoT nodes, sensors, PLC controllers, robots, etc. to meet business needs.
Conclusion
In this paper, a Light-Weight Secure IoT Key Generator and Management Solution (LKGM) for industry automation and applications, which enhances the security of peer-to-peer communications among IoT nodes, is presented. The LKGM is integrated into half-credit-card-size embedded systems. Our experimental results showed that the solution enhances secure peer-to-peer communications among the IoT nodes. Field tests were conducted successfully for a manufacturing application that uses web services.
For More Open Access Journals Please Click on: Juniper Publishers
For More Articles Please Visit: Robotics & Automation Engineering Journal
Comparative Analysis of the Use of the Pore Pressure and Humidity When Assessing the Influence of Soils in Transport Construction | Juniper Publishers
History of Civil Engineering
Juniper Publishers-Open Access Journal of Civil Engineering Research
Authored by Kochetkov AV
Short Communication
There is now a question of the reasonable applicability of pore pressure and humidity indicators when assessing the influence of soil properties in the various environments and conditions of transport construction. Such an analysis can be carried out within the framework of the molecular kinetic theory of gases, taking into account photon interaction.
Real gases are not described by the Clapeyron–Mendeleev equation for ideal gases. Therefore, driven by the needs of practice, many attempts have been made to create an equation of state for real gases.
The most well-known equations for real gases are given in Table 1.
Some equations are refinements of the van der Waals equation (Dieterici and Berthelot), while the Beattie–Bridgeman and Redlich–Kwong equations are empirical, without physical justification. Indeed, Redlich writes in his article that his equation has no theoretical justification but is, in fact, a successful empirical modification of previously known equations [1]. The large number of equations is due to the fact that no single equation describes the behavior of real gases under all possible conditions (temperatures and pressures). Each equation has regions in which it best describes the state of the gas, while under other conditions the same equation shows large deviations from the experimental data. In practice, only the van der Waals equation is used for direct calculations, owing to its simplicity. For the other equations, either charts or tables of calculated values are typically used.
The significance of the van der Waals equation is not only that it is the simplest, but also that it has a clear and simple physical meaning. "Despite the fact that the van der Waals equation is approximate, it fits the properties of real substances sufficiently well, so the basic provisions of the van der Waals theory remain in force to the present time; modern theories have brought only certain clarifications and additions" [2, p. 59]. "It was indicated that the modern theory of the equation of state of real gases is based on the fundamental provisions of the van der Waals theory and develops these provisions further; having the powerful mathematical apparatus of statistical mechanics at its disposal, it gains the ability to produce calculations that are approximate but quite accurate" [2, p. 60].
The citation clearly indicates that statistical mechanics is used for accurate calculations based on physical models.
The fact is that thermodynamics is a phenomenological science. It is possible to obtain experimental data on the state of gases at different thermodynamic parameters and to fit the most suitable thermodynamic equation even without any idea of the internal mechanisms of the processes occurring in the gases. This is exactly what the authors of the gas laws did: Lavoisier, Boyle, Mariotte, etc.
But if we try to delve into the essence of the processes occurring in gases, then a mathematical apparatus taking into account the interaction of the molecules and atoms that make up the gases inevitably becomes necessary. This is the apparatus of statistical physics. But statistical physics has problems of its own. "The exact theoretical calculation of the statistical sum of gases or liquids with an arbitrary Hamiltonian (2.6) is a problem that lies far beyond the capabilities of modern statistical physics" [2]. Although "it is possible to make a number of reasonable and sufficiently good approximations that allow us to estimate the statistical sum (2.1) and the configuration integral (2.8) for real gases consisting of valence-saturated molecules" [2], the task is still far from complete. A reasonable choice of initial parameters plays an important role in the difficulties faced by researchers of real gases. We will carry out a methodical analysis of the initial parameters used in the theory of gases. Currently, the following thermodynamic parameters are used in the scientific and educational literature: P (pressure), V (volume) and T (temperature). While the temperature and volume are not in doubt, the pressure, as a thermodynamic parameter, raises certain questions. Consider the compressibility factor (a measure of the nonideality of gases). It is believed that "the most convenient measure of nonideality is the compressibility factor Z = pVm/RT, since for an ideal gas Z = 1 under any conditions" [2]. For example, it is proposed that "the thermal equation of state of a real gas can be represented in the form pv = zRT, where z is the compressibility factor, which is a complex function of temperature and density (or pressure)" [3].
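For illustration, the following sketch computes the compressibility factor z = pv/(RT) of helium from the van der Waals equation (the van der Waals constants for helium are taken from standard tables; this is an illustration of the definition, not a reproduction of the cited data):

```python
from scipy.optimize import brentq

R = 8.314e-2                     # gas constant, L*bar/(mol*K)
A_VDW, B_VDW = 0.0346, 0.0238    # van der Waals a, b for helium (L^2*bar/mol^2, L/mol)

def z_vdw(p_bar: float, t_k: float) -> float:
    """Compressibility factor z = p*v/(R*T), with the molar volume v solved
    from the van der Waals equation (p + a/v^2)(v - b) = R*T."""
    f = lambda v: (p_bar + A_VDW / v**2) * (v - B_VDW) - R * t_k
    v = brentq(f, B_VDW * 1.001, 10.0 * R * t_k / p_bar)   # bracket the root
    return p_bar * v / (R * t_k)

for p in (1.0, 100.0, 500.0):
    print(f"p = {p:6.1f} bar -> z = {z_vdw(p, 300.0):.3f}")  # z grows with pressure
```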
But as a parameter for determining the thermodynamic state of a gas system, compressibility is not very suitable, because, firstly, it has a complex dependence on pressure and temperature. Any explanation of why and how the compressibility of gases is determined depends on the adopted model of the structure of gases. The form of the compressibility function for all real gases is given in [3] (Figure 1), where "for generality, the reduced pressure π = p/P_K and the reduced temperature τ = T/T_K are used as parameters, where P_K and T_K are the parameters of the substance at the critical point. Since for an ideal gas z = 1 at any parameters, this graph clearly represents the difference between the specific volume (density) of the real and ideal gases at the same parameters" [3] (Figure 2).
Secondly, the compressibility of gases cannot be measured directly during the working cycle of a thermodynamic system, but only in specially conducted experiments. Thirdly, the compressibility of gases is, in fact, not a single curve but a family of curves: at different temperatures and the same pressure, or at the same temperature but different pressures, the compressibility of the gas is different. In practical work, the compressibility of gases is not measured but calculated according to the appropriate formulas, following officially recognized methods. Therefore, the compressibility of gases may well characterize the nonideality of gases, but it cannot be accepted as an initial parameter in the theory of real gases. Let us see what else can serve as a replacement for pressure as the initial parameter. For this it is necessary to pay attention to the parameters that are used in the statistical theory of gases. The literature analysis shows that all statistical models of real gases are constructed using a parameter such as concentration. Concentration is the number of atoms (molecules) per unit volume.
This is not surprising, since it is the concentration that determines the average distances between the gas molecules, and hence the long-range potential forces between the molecules, for example according to the theory of van der Waals or any other. Then, after establishing the laws of behavior of the statistical system as a function of concentration, one passes to the usual thermodynamic characteristics: P, V, etc. The application of pressure as a thermodynamic parameter is perfectly justified in the theory of ideal gases, in which atoms interact only through perfectly elastic collisions during chaotic thermal motion. According to the theory of ideal gases, the relationship between pressure and gas density is very simple [4]:
P = NkT, (1)
where P is the pressure, N is the concentration (1/m³), k is the Boltzmann constant, and T is the temperature.
At a constant temperature, the dependence between concentration and pressure is linear:
P = const · N. (2)
The concentration N is equal to the number of molecules per unit volume. Concentration is related to density by a simple ratio:
ρ = N·m, (3)
where ρ is the density, N is the concentration, and m is the mass of a molecule.
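A short numerical illustration of relations (1)-(3) for helium (standard physical constants; the numbers are independent of the measured data discussed below):

```python
# Ideal-gas density of helium: rho = N*m = P*m/(k*T), combining Eqs. (1) and (3).
K_B = 1.380649e-23     # Boltzmann constant, J/K
M_HE = 6.6465e-27      # mass of one helium atom, kg

def ideal_density(p_pa: float, t_k: float) -> float:
    n = p_pa / (K_B * t_k)      # concentration N in 1/m^3, from P = N*k*T
    return n * M_HE             # density rho = N*m, kg/m^3

for p_mpa in (0.1, 1.0, 10.0, 100.0):
    rho = ideal_density(p_mpa * 1e6, 300.0)
    print(f"P = {p_mpa:5.1f} MPa -> rho = {rho:8.3f} kg/m^3 (ideal gas, 300 K)")
```

For a real gas the measured density deviates increasingly from these values as the pressure grows, which is exactly the nonlinearity discussed next.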
In other words, in the theory of ideal gases the gas pressure is linearly proportional to the gas density. For real gases this is never the case. The theory of real gases takes into account interaction forces of a potential nature. "Real gases differ from their model, ideal gases, in that their molecules are of finite size and experience forces of attraction (at considerable distances between molecules) and repulsion (when molecules approach each other)" [5]. As a result, the dependence of pressure on the density of a real gas is nonlinear, and a simple replacement of concentration by pressure, as is quietly done in the theory of ideal gases, will not do.
The tradition of using pressure as a gas parameter dates back to the time when science did not yet know that gases consist of atoms [5, p. 36-65], so scientists could not operate with the concept of the concentration of atoms in gases. We do not consider it advisable to continue this outdated practice. Moreover, as mentioned above, the theory of real gases refers directly to the concentration, but in the final equations it passes to the pressure "the old-fashioned way".
Figure 3 presents graphs of helium density as a function of pressure. The bold line is the theoretical line based on the ideal gas model [6].
From Figure 3 it is clearly seen that the graph for helium not only fails to coincide with the theoretical one, but the dependence of density on pressure is not linear. We chose helium not only because the characteristics of this gas are well studied, but primarily because helium is the most chemically inert gas. Due to its inertness, it is not inclined to form compounds, molecules, or other aggregations; that is, in its chemical properties it is closest to an ideal gas. When discussing compressibility as an initial parameter, we noted that its main disadvantage as a parameter is that it comes not from direct measurements but from calculated values. Unlike compressibility, the density of gases can be measured directly, both in stationary gases (in tanks) and in pipelines. There are several types of density meters for both liquids and gases. Although densitometers are more expensive than manometers, the gain in practical applications can be tangible.
Thus, based on modern theories of real gases, it seems more logical to determine the properties of gases depending not on pressure but on density.
1. First, because all models of statistical physics are based on the concept of concentration, not on the concept of pressure.
2. Second, because it is the density that is closest to the concept of concentration, and the two are easily converted into one another.
3. Third, even if some characteristic of the gas (heat capacity, thermal conductivity, etc.) depends linearly on the density, in accordance with the models of the statistical theory of gases, expressing the values of these properties as functions of pressure will introduce an additional nonlinearity through the dependence of density on pressure, especially if that dependence is itself nonlinear.
As an example, consider the graphs of the dynamic viscosity of helium. Figure 4 presents graphs of helium's dynamic viscosity versus pressure in the temperature range from 100 K to 1000 K. For comparison, graphs of dynamic viscosity versus gas density are also presented [7] (Figure 5).
The difference between representations of the same gas property as functions of different parameters is clearly seen from the graphs. The first thing to note is that the graphs of dynamic viscosity versus density look simpler than those of dynamic viscosity versus pressure, precisely because of the nonlinear dependence of concentration on pressure. In particular, the viscosity-versus-pressure curves change their direction of variation, and at low temperatures so quickly that the lines even intersect. Moreover, at low temperatures (the lower lines in Figure 4) the curves bend upwards under high pressure, at moderate temperatures they are horizontal, and at high temperatures (the top line) they bend slightly, by about 2%, downwards.
The graphs versus density, by contrast, are almost parallel throughout and do not change their behavior at all. But the fact that the graphs versus density (concentration) look simpler is not the most important point. The main point is the epistemological value of such a transition to another parameter [5].
Summary
1. Another piece of evidence of the influence of thermal photons on the behavior of gases under different conditions is provided by the experimental data on the compressibility of gases under different conditions.
2. Modern theories of real gases are unable to explain the behavior of the compressibility function, neither its change with density nor, especially, its behavior at different temperatures, because according to modern theories the repulsion of molecules should be observed only when the distances between molecules are smaller than the size of the molecules themselves, and the compressibility of gases should not depend on temperature at all.
3. The hypothesis of a significant influence of thermal photons on the mechanical properties of gases can explain the behavior of the compressibility factor of gases.
4. With an increase in the density of the gas, the compressibility factor increases because, together with an increase in the density of gases, the number of photons having a mechanical effect on the gas molecules also increases.
5. As the temperature increases, the energy of the thermal photons increases, and so does the compressibility factor (the resistance of the gas to compression), because more energetic photons exert a stronger mechanical effect (a stronger push) on the gas molecules. That is why the compressibility factor increases in the temperature range from 10 to 150 K.
6. At temperatures above 150 K, the number of thermal photons that exert a mechanical effect on the gas molecules decreases, since the outward radiation of the gas increases, raising the number of photons leaving the gas volume.
7. Reducing the number of photons in the gas volume reduces the internal pressure of the gas and, accordingly, the compressibility factor decreases.
8. The moisture index (corresponding to concentration or density) used in the measurement of soil properties reflects changes in those properties more accurately and reliably (with a larger proportion of explained dispersion) than the pore pressure does.
9. Accordingly, there is no reason to move to foreign standards based on pressure indicators in soils and other environments of transport construction.
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Civil Engineering Research please click on: https://juniperpublishers.com/cerj/
To read the full text, please click on: https://juniperpublishers.com/cerj/CERJ.MS.ID.555704.php
The Review of Reliability Factors Related to Industrial Robots - Juniper Publishers
Abstract
Although the problem of industrial robot reliability is closely related to machine reliability, which is well known and described in the literature, it is also more complex and connected with safety requirements and specific robot-related problems (near-failure situations, human errors, software failures, calibration, singularity, etc.). Compared to the first robot generation, modern robots are more advanced, functional and reliable. Some robot producers declare very long robot working times without failures, but there are fewer publications about real robot reliability and the failures that occur. Some surveys show that not every robot user monitors and collects data about robot failures. Practice shows that the most unreliable components are in the robot's equipment, including grippers, tools, sensors and wiring, which are often custom made for different purposes. The lifecycle of a typical industrial robot is about 10-15 years, because the key mechanical components (e.g. drives, gears, bearings) wear out. The key factor is periodical maintenance following the manufacturer's recommendations. After that time, a refurbishment of the robot is possible and it can work further, but newer and better robots of the modern generation are also available.
Keywords: Industrial robot; Reliability; Failures; Availability; Maintenance; Safety; MTTF; MTBF; MTTR; DTDTRF
Introduction
Nowadays, one can observe the increasing use of automation and robotization, which replaces human labor. New applications of industrial robots are widely used, especially for repetitive and high-precision tasks or monotonous activities demanding physical exertion (e.g. welding, handling). Industrial robots have mobility similar to human arms and can perform various complex actions like a human, but they do not get tired and bored. In addition, they have much greater reliability than human operators. The problem of industrial robot reliability is similar to machine reliability and is well known and described in the literature but, because of the complexity of robotic systems, it is also much more complex and is connected with safety requirements and specific robot-related problems (near-failure situations, hardware failures, software failures, singularity, human errors, etc.). Safety is very important, because there have been many accidents at work with robots involved, and some of them were deadly. Accidents were caused rather more often by human errors than by failures of the robots.
Research on robot reliability was started in 1974 by Engelberger, with a publication summarizing three million hours of work of the first industrial robots, the Unimates [1]. A very comprehensive discussion of the topic is presented by Dhillon in a book that covers the problems of robot reliability and safety, including mathematical modelling of robot reliability, with examples [2]. An analysis of publications on robot reliability up to 2002 is available in Dhillon et al. [3], and some of the important newer publications on robot reliability and associated areas are listed in the book [4]. The modern approach to reliability and safety of robotic systems is presented in the book that includes robot reliability analysis methods and models for performing robot reliability studies and robot maintenance [5]. Reliability is strongly connected with safety and productivity; therefore, other research includes design methods for a safe cyber-physical industrial robotic manipulator, safety-function design for the control system, and simulation methods for human- and robot-related performance and reliability [6,7]. There are fewer publications about real robot reliability and the failures that occur [8]. A survey shows that only about 50 percent of robot users monitor and collect data about robot failures.
A failure analysis of approximately 200 mature robots in automated production lines, collected from automotive applications in the UK from 1999, is presented in the article, including a Pareto analysis of major failure modes. However, the presented data did not reveal sufficiently fine detail of the failure history to extract good estimates of the robot failure rate [9,10].
In the article by Sakai et al. [11], the results of research on robot reliability at a Toyota factory are presented. The defects of 300 industrial robots in a car assembly line were analyzed, and a great improvement in reliability was achieved. The authors consider as significant the activities driven by robot users who are involved in the management of the production line. Nowadays, robot manufacturers declare very high reliability for their robots [12]. The best reliability is achieved by robots with DELTA and SCARA configurations, which is connected with the lower number of links and joints compared to other articulated robots. Because each additional serially connected link increases the unreliability factors, some components are connected in parallel, especially in the Safety Related Part of the Control System (SRP/CS), which has a doubled number of some elements, for example emergency stops. Robots are designed in such a way that any single, reasonably foreseeable failure will not lead to hazardous robot motion [13]. Modern industrial robots are designed as universal manipulating machines that can carry different sorts of tools and equipment for specific types of work. However, the robot's equipment is often custom made and may turn out to be unreliable; therefore, the whole robotic system requires periodic maintenance following the manufacturer's recommendations [14,15]. Collaborative applications involve human operators and robots in cooperative tasks; therefore, safety plays a key role. Safety can be transposed in terms of functional safety, addressing functional reliability in the design and implementation of the devices and components that build the robotic system [16].
Robot Reliability
The reliability of objects such as machines or robots is defined as the probability that they will work correctly for a given time under defined working conditions. The general formula for obtaining robot reliability is [2]:

$$R_r(t) = \exp\left[-\int_0^t \lambda_r(t)\,dt\right] \qquad (1)$$

where R_r(t) is the robot reliability at time t and λ_r(t) is the robot failure rate.
In practice, for the description of reliability, in most cases the MTTF (Mean Time to Failure) parameter is used, which is the expected value of an exponentially distributed random variable with the failure rate λ_r; for a constant failure rate, MTTF = 1/λ_r [2].
In real industrial environments, the following formula can be used to estimate the average amount of productive robot time before robot failure [2]:

$$\mathrm{MTTF} = \frac{\mathrm{PHR} - \mathrm{DTDTRF}}{\mathrm{NRF}} \qquad (2)$$

where PHR is the production hours of the robot, NRF is the number of robot failures, DTDTRF is the downtime due to robot failure in hours, and MTTF is the robot mean time to failure.
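A small worked example of Eq. (2), with illustrative shop-floor numbers rather than data from the cited studies:

```python
# Estimating robot MTTF from production records, Eq. (2).
PHR = 10000.0    # production hours of the robot
NRF = 5          # number of robot failures
DTDTRF = 40.0    # downtime due to robot failures, hours

mttf = (PHR - DTDTRF) / NRF
print(f"MTTF = {mttf:.0f} h")   # (10000 - 40) / 5 = 1992 h per failure
```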
In the case of repairable objects, the MTBF (Mean Time Between Failures) and MTTR (Mean Time to Repair) parameters can be used.
The reliability of the robotic system depends on the reliability of its components. The complete robotic workstation includes:
A. Manipulation unit (robot arm),
B. Controller (computer with software),
C. Equipment (gripper, tools),
D. Workstation with workpieces and obstacles in the robot working area,
E. Safety system (barriers, curtains, sensors),
F. Human operator (supervising, set-up, teaching, maintenance).
The robot system consists of several subsystems that are serially connected (as in Figure 1) and has interfaces for communication with the environment or for teaching by the human operator. The robot arm can have a different number of links and joints N. Typical articulated robots have N = 5-6 joints, as in Figure 2, but more auxiliary axes are possible.
For serially connected subsystems, each failure of one component brings the whole system to fail. Considering complex systems consisting of n serially linked objects, each of which has exponential failure times with rates λ_i, i = 1, 2, …, n, the resultant overall failure rate λ_S of the system is the sum of the failure rates of the elements λ_i [2]:

$$\lambda_S = \sum_{i=1}^{n} \lambda_i \qquad (3)$$
Moreover, the inverse of the system MTBF_S is the sum of the inverses of the MTBF_i of the linked objects:

$$\frac{1}{\mathrm{MTBF}_S} = \sum_{i=1}^{n} \frac{1}{\mathrm{MTBF}_i} \qquad (4)$$
There are different types of failures possible:
A. Internal hardware failures (mechanical unit, drive, gear),
B. Internal software failures (control system),
C. External component failures (equipment, sensors, wiring),
D. Human related errors and failures that can be:
a. Dangerous for humans (e.g. unexpected robot movement),
b. Non-dangerous, fail-safe (robot unable to move).
Also possible are near-failure situations and robot-related problems which require the robot to be stopped and human intervention to take place (e.g. recalibration, reprogramming). Because machinery failures may cause severe disturbances in production processes, the availability of the means of production plays an important role in ensuring the flow of production. Inherent availability can be calculated with the following formula [2]:

$$A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}} \qquad (5)$$
For example, the availability of the Unimate robots was about 98% over a 10-year period, with MTBF = 500 h and MTTR = 8 h [2].
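As a small worked illustration of Eqs. (3)-(5), the following sketch combines the failure rates of serially connected subsystems and reproduces the Unimate availability figure quoted above (the subsystem MTBF values are illustrative assumptions):

```python
def series_failure_rate(rates):
    """Overall failure rate of serially linked objects: lambda_S = sum(lambda_i)."""
    return sum(rates)

def inherent_availability(mtbf_h, mttr_h):
    """Inherent availability A = MTBF / (MTBF + MTTR), Eq. (5)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Illustrative subsystem MTBFs in hours (arm, controller, gripper, sensors).
mtbfs = [20000.0, 30000.0, 8000.0, 12000.0]
lam_s = series_failure_rate([1.0 / m for m in mtbfs])
print(f"System MTBF = {1.0 / lam_s:.0f} h")       # Eq. (4): 1/MTBF_S = sum(1/MTBF_i)

# Unimate example from the text: MTBF = 500 h, MTTR = 8 h -> approx. 0.98.
print(f"Availability = {inherent_availability(500.0, 8.0):.3f}")
```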
The reliability of the first robot generation follows the typical bathtub curve (as in Figure 3), with a high rate of early "infant mortality" failures, a second part with a constant failure rate, known as random failures, and a third part with an increasing failure rate, known as wear-out failures (which can be described with the Weibull distribution).
Therefore, the standard [17] was provided in order to minimize the testing requirements that qualify a newly manufactured (or newly rebuilt) industrial robot to be placed into use without additional testing. The purpose of this standard is to provide assurance, through testing, that infant-mortality failures in industrial robots have been detected and corrected by the manufacturer at their facility prior to shipment to a user. Thanks to this standard, the next robot generation achieved better reliability, without early failures, with an MTBF of about 8000 hours [16]. In the article by Sakai & Amasaka [11], the results of research on robot reliability at Toyota are presented; a great improvement was achieved, with an increase of the MTBF to about 30000 hours.
Nowadays, robot manufacturers declare an average MTBF of 50,000-60,000 hours or 20-100 million work cycles [12]. The best reliability is achieved by robots with SCARA and DELTA configurations, which is connected with the lower number of links and joints compared to other articulated robots. Some interesting conclusions from a survey about industrial robots conducted in Canada in 2000 are as follows [9]:
A. Over 50 percent of the companies keep records of the robot reliability and safety data,
B. In robotic systems, major sources of failure were software failure, human error and circuit board troubles from the users’ point of view,
C. Average production hours for the robots in the Canadian industries were less than 5,000 hours per year,
D. The most common range of the experienced MTBF was 500–1000h (from the range 500-3000h)
E. Most of the companies need about 1–4h for the MTTR of their robots (but also in many cases the time was greater than 10h or undefined).
Current industrial practice shows that the most unreliable components are in the robot's equipment, including grippers, tools, sensors and wiring, which are often custom made for different purposes. This equipment can easily be repaired by the robot user's own repair department, but the failure of a critical robot component requires the intervention of the manufacturer's service and can take much more time to repair (counted in days). Therefore, for better performance and reliability of the robotic system, periodic maintenance is recommended.
Robot Maintenance
Three basic types of maintenance for robots used in industry are as follows [4]:
Preventive maintenance
This is basically concerned with servicing robot system components periodically (e.g. daily, yearly, etc.).
Corrective maintenance
This is concerned with repairing the robot system whenever it breaks down.
Predictive maintenance
Nowadays, many robot systems are equipped with sophisticated electronic components and sensors; some of them can be programmed to predict when a failure might happen and to alert the maintenance personnel concerned (e.g. self-diagnostics, singularity detection). Robot maintenance should be performed following the robot manufacturer's recommendations, which are summarized in Table 1 [15]. Preventive maintenance should be carried out before each automatic run, including self-diagnostics of the robot control system, visual inspection of cables and connectors, and checking for oil leakage or abnormal signals such as noise or vibrations. The replacement of the battery that powers the robot's positional memory is needed yearly; if the memory is lost, remastering (recalibration, synchronization) is needed. Replenishing the robot with grease every recommended period is needed to prevent the mechanical components (such as gears) from wearing out. Special greases are used for robots (e.g. Moly White RE No. 00), or greases dedicated to specific applications, such as for the food industry. Every 3-5 years a full technical review (overhaul) with replacement of filters, fans, connectors, seals, etc. is recommended.
Performing daily inspection, periodic inspection and maintenance can keep the performance of robots in a stable state for a long period. The lifecycle of a typical robot is about 10-15 years, because the wear of key mechanical components (drives, gears, bearings, brakes) causes backlash and positional inaccuracy. After that time a refurbishment of the robot is possible, and it can work further for a long time. Refurbished robots are also called remanufactured, reconditioned or rebuilt robots.
Conclusion
Nowadays, modern industrial robots have achieved high reliability and functionality; therefore, they are widely used. This is confirmed by the more than one and a half million robots working worldwide. According to probability theory, in such a large robot population the failures of some robots are almost inevitable. The failures are random, and we cannot predict exactly where and when they will take place. Therefore, robot users should be prepared and should undertake appropriate maintenance procedures. This is important because industrial robots can greatly increase the productivity of manufacturing systems compared to human labor, but every robot failure can cause severe disturbances in the production flow; therefore, periodic maintenance is required in order to prevent robot failures. High reliability is also important for the next generation of collaborative robots, which should work close to human workers and whose safety must be guaranteed without barriers. Also, some sorts of service robots, which should help nonprofessional people (e.g. in the health care of disabled people), must have high reliability and safety. There have already been some accidents at work with robots involved; therefore, the next generation of intelligent robots should be reliable enough to respect Asimov's laws and not hurt people, even if people make errors and give wrong orders.
For More Open Access Journals Please Click on: Juniper Publishers
For More Articles Please Visit: Robotics & Automation Engineering Journal
On the Alternatives of Lyapunov's Direct Method in Adaptive Control Design - Juniper Publishers
Abstract
The prevailing methodology in designing adaptive controllers for strongly nonlinear systems is based on the PhD thesis Lyapunov defended in 1892 to study the stability of motion of systems for whose equations of motion no closed-form analytical solutions exist. The adaptive robot controllers developed in the nineties of the 20th century guarantee global (often asymptotic) stability of the controlled system by using his ingenious Direct Method, which introduces a Lyapunov function for whose behavior relatively simple numerical limitations have to be proved. Though typical Lyapunov function candidates are available for various problem classes, the application of this method requires far more knowledge than the implementation of some algorithm. Besides requiring creative designer abilities, it often requires too much, because it works with sufficient conditions instead of necessary and sufficient ones. To evade these difficulties, based on the firm mathematical background of constructing convergent iterative sequences by contractive maps in Banach spaces, an alternative to Lyapunov's technique was introduced for digital controllers in 2008, in such a way that during one control cycle only one step of the required iteration was done. Besides its simplicity, the main advantage of this approach is the possible evasion of the complete state estimation that is normally required in Lyapunov function-based design. Though the convergence of the control sequence can be guaranteed only within a bounded basin, this approach seems to have considerable advantages. In this paper the current state of the art of this approach is briefly summarized.
Keywords: Adaptive control; Lyapunov function; Banach space; Fixed point iteration
Abbreviations: AC: Adaptive Control; AFC: Acceleration Feedback Controller; AID: Adaptive Inverse Dynamics Controller; CTC: Computed Torque Control; FPI: Fixed Point Iteration; MRAC: Model Reference Adaptive Control; OC: Optimal Control; PID: Proportional, Integral, Derivative; RARC: Resolved Acceleration Rate Control; RHC: Receding Horizon Controller; SLAC: Slotine-Li Adaptive Controller
Introduction
There is a wide class of model-based control approaches in which the available approximate dynamic model of the system to be controlled is "directly used" without "being inserted" into the mathematical framework of "Optimal Control" (OC). A classical example is the "Computed Torque Control" (CTC) for robots [1]. However, in practice we have to cope with the problem of the imprecision (very often incompleteness) of the available system models (in robotics e.g. [1,2], in modeling friction phenomena e.g. [3-7], in life sciences in modeling the glucose-insulin dynamics e.g. [8-11], or in anesthesia control e.g. [12-14]). Modeling such engines as aircraft turbojet motors is a quite complicated task that may need a multiple-model approach [15-18]. A further practical problem is the existence and the consequences of unknown and unpredictable "external disturbances". A possible way of coping with these practical difficulties is designing "Adaptive Controllers" (AC) that somehow are able to observe and correct at least the effects of the modeling imprecisions by "learning". Depending on the available information on the model, various adaptive methods can be elaborated. If we have precise information on the kinematics of a robot and only approximate information on the mass distribution of a robot arm made of rigid links, the exact model parameters can be learned, as in the case of the "Adaptive Inverse Dynamics" (AID) and the "Slotine-Li Adaptive Controller" (SLAC) for robots, which are the direct adaptive extensions of the CTC control. An alternative approach is the adaptive modification of the feedback gains or terms [19]. The "Model Reference Adaptive Control" (MRAC) has a double "intent": a) it has to provide precise trajectory tracking, and b) for an outer, kinematics-based control loop it has to provide the illusion that, instead of the actually controlled system, a so-called "reference system" is under control (e.g. [20-22]).
The traditional approaches in controller design for strongly nonlinear systems are based on the PhD thesis of Lyapunov [23] that later was translated to Western languages (e.g. [24]). (In this context "strong nonlinearity" means that the use of a "linearized system model" in the vicinity of some "working point" is not satisfactory for practical use.) In Lyapunov's "2nd" or "Direct Method" a Lyapunov function has to be constructed for the given particular problem (typical "candidates" are available for typical "problem classes"), and the non-positiveness of the time-derivative of this function has to be proved. Besides the fact that the creation of the Lyapunov function is not the simple application of some algorithm (it is rather some creative art), this method has various drawbacks: a) it works with sufficient conditions instead of necessary and sufficient conditions (i.e. it often requires too much, by guaranteeing conditions that are not really necessary); b) its main emphasis is on the global (asymptotic) stability of the motion of the controlled system, without paying too much attention to the "initial" or "transient" phase of the controlled motion (for instance, in life sciences a "transient" fluctuation can be lethal).
To cope with these difficulties, alternatives to the Lyapunov function-based adaptive design were suggested in [25], in which the primary design intent is keeping the initial "transients" at bay by transforming the task of finding the necessary control signal into iteratively solving a fixed point problem ["Fixed Point Iteration" (FPI)], such that in each digital control step only one step of the appropriate iteration is realized. The mathematical antecedents of this approach were established in the 17th century (e.g. [26-28]), and in 1922 its foundations were extended to quite complicated spaces by Stefan Banach [29,30]. In [25] the novelty was the application of this approach to control problems. In contrast to the "traditional" "Resolved Acceleration Rate Control" (RARC), in which, in the control of a 2nd order physical system, only lower-order derivatives or tracking error integrals are fed back (e.g. [19,31-33]), in this approach the measured "acceleration" signals are also used, as in the "Acceleration Feedback Controllers" (AFC) (e.g. [34-38]).
In general, the most important "weak point" of the FPI-based approach is that it cannot guarantee global stability: the generated iterative control sequences converge to the solution of the control task only within a bounded basin that, in principle, can be left. To avoid this problem, heuristic tuning rules were introduced in [39-41] for one of the small number of adaptive parameters. In [42] essentially the same method was introduced in the design of a novel type of MRAC controllers, the applicability of which was investigated by simulations for the control of various systems (e.g. [43-46]). Observing that in the classical Lyapunov function-based solutions, such as the AID and SLAC controllers, the parameter tuning rule obtained from the Lyapunov function has a simple geometric interpretation that is independent of the Lyapunov function itself, the FPI-based solution was combined with the tuning rule of the original solutions used for learning the "exact dynamic parameters" of the controlled system. Relieved of the burden of necessarily constructing some easily treatable quadratic Lyapunov function, the feedback provided by the FPI-based solution was directly used for parameter tuning. This solution resulted in precise trajectory tracking even in the initial phase of the learning process, in which the available approximate model parameters were still very imprecise [47,48]. In the present paper certain novel results on the further development of the FPI-based approach are summarized.
Discussion and Results
The structure of the FPI-based adaptive control
The block scheme of the FPI-based adaptive controller is given in Figure 1 for a 2nd order dynamical system such as a robot [48]. In this case the 2nd time-derivative q̈ of the generalized (joint) coordinates can be instantaneously set by the control torque or force Q. On this basis, in the kinematic block an arbitrary "desired" joint acceleration can be designed that drives the tracking error q^N(t) − q(t) to 0 if it is realized. In practice this joint acceleration cannot be realized exactly, due to the imprecisions of the dynamic model the CTC controller uses for the calculation of the necessary forces. Therefore, instead of introducing this signal into the Approximate Model to calculate the necessary force, its adaptively deformed version is introduced into it. The necessary deformation is produced iteratively, in the form of a sequence initiated by the desired acceleration itself. During one digital control step one step of the iteration can be realized. If there are no special time-delay effects in the system, the contents of the delay boxes in Figure 1 exactly correspond to the cycle time Δt of the controller. The "chain of operations" resulting in an observed realized response q̈(t) for the deformed input can mathematically be considered, approximately, as a response function, since, though it depends on q and q̇, these arguments vary only slowly in comparison to the deformed input, which can be modified quite quickly. In the Adaptive Deformation Block of Figure 1 a deformation function is used whose fixed point corresponds to the solution of the control task [49]. Since, due to the proportional, integral and derivative error feedback terms, the desired value varies only slowly, the iteration behaves approximately as a sequence x_{n+1} = F(x_n) generated by a fixed map F. Regarding the convergence of this iteration, it has to be taken into account that a Banach space B is a complete, linear, normed metric space. It is a convenient modeling tool that allows the use of simple norm estimations. Its completeness means that each self-convergent (Cauchy) sequence has a limit point within the space. A mapping F: B → B is contractive if ∃ a real number 0 ≤ K < 1 so that ‖F(x₁) − F(x₂)‖ ≤ K‖x₁ − x₂‖. It is easy to show that the sequence generated by a contractive map is a Cauchy sequence: in the norm estimation

$$\|x_{n+L} - x_n\| \le \sum_{j=0}^{L-1}\|x_{n+j+1} - x_{n+j}\| \le K^{n}\,\frac{1-K^{L}}{1-K}\,\|x_1 - x_0\| \qquad (1)$$

valid ∀L, only high-order powers of K occur as n → ∞, so the left-hand side can be made smaller than any ε. Due to the completeness of B, the sequence converges to some limit x* within the space, for which, by the continuity of the contractive map,

$$x^{\star} = F(x^{\star}) \qquad (2)$$

holds, i.e. the limit is a fixed point of F.

Consequently, it is enough to guarantee that the function F(⋅) is contractive, since in this case the sequence converges to the fixed point of this function, provided the function is so constructed that its fixed point is the solution of the control task. A numerical check of this argument is given below.
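A quick numerical illustration of the contraction argument (an arbitrary toy map, not a controller): for F(x) = 0.5·cos(x) the contraction constant is K ≤ 0.5, so successive distances shrink at least geometrically and the iteration settles at the fixed point.

```python
import math

# Toy contractive map F(x) = 0.5*cos(x): |F'(x)| = 0.5*|sin(x)| <= 0.5 < 1.
x, prev_gap = 0.0, None
for n in range(12):
    x_next = 0.5 * math.cos(x)
    gap = abs(x_next - x)                 # |x_{n+1} - x_n|
    if prev_gap is not None:
        # The ratio of successive gaps stays below the contraction constant.
        print(f"n={n:2d}  gap={gap:.3e}  ratio={gap / prev_gap:.3f}")
    x, prev_gap = x_next, gap
print(f"fixed point x* = {x:.6f},  F(x*) = {0.5 * math.cos(x):.6f}")
```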
Construction of the adaptive function
In the original solution in [25], the deformation function (3) was suggested for the special case q ∈ ℝ, with three adaptive parameters K_c, B_c, and A_c.
Indeed, when the iteration reaches the solution of the control task, the function maps it into itself, that is, the solution is a fixed point. To examine convergence in the vicinity of this fixed point, consider the 1st order Taylor series approximation of the function around it, which leads to the linearized approximation (5) of the iteration. On the basis of (5) it is easy to set the adaptive parameters for convergence: by choosing a great parameter K_c and a small A_c it can be achieved that the absolute value of the derivative of the map remains below 1; therefore the mapping is contractive and the sequence converges to the solution. The speed of convergence depends on the setting of A_c, and a too great value can cause the iteration to leave the region of convergence.
For q ∈ ℝⁿ (multiple-variable systems), a different construction (6) was introduced in [50,51], whose convergence properties are more lucid than those of the multiple-variable variant of (3). In it, the expression h(t) = q̈^Des(t) − q̈(t) can be identified as the "response error at time t", and, with the Frobenius norm, h/‖h‖_F corresponds to the unit vector directed along the response error; ζ: ℝ → ℝ is a differentiable contractive map with the attractive fixed point ζ(x*) = x*, and A_c ∈ ℝ is an adaptive control parameter. By using the same argumentation with the 1st order Taylor series approximation, it was shown in [52] that if the real parts of the eigenvalues of the Jacobian of the system's response are all simultaneously positive or negative, an appropriate A_c parameter can be selected that guarantees convergence.
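The following toy sketch illustrates the idea on a hypothetical scalar system (it is not the deformation function of [25] or [50,51]): the controller knows only the desired response and the observed one, and in each cycle shifts the model input by the scaled response error; under the contractivity condition above, the deformed input converges to the value whose realized response equals the desired one.

```python
import numpy as np

def g(x):
    """'Exact' system response to the commanded input x (unknown to the
    controller); the approximate-model error appears as a distorted response."""
    return 0.7 * x + 0.4 * np.tanh(x) + 0.2

def deform(x, desired, A_c=-0.5):
    """One fixed-point step: shift the input by the observed response error,
    scaled by the adaptive parameter A_c (a contractive map could wrap it)."""
    h = g(x) - desired               # response error h
    return x + A_c * h

desired = 1.0                        # desired instantaneous response
x = desired                          # the iteration starts from the desired value
for _ in range(15):                  # one such step is executed per control cycle
    x = deform(x, desired)
print(f"deformed input {x:.4f} -> realized response {g(x):.4f} (target {desired})")
```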
The approach was illustrated via simulations on a nonlinear, spring-type mechanical paradigm, in which Q ∈ ℝ² denotes the control force and q ∈ ℝ² is the array of the generalized coordinates of the controlled system.
The parameters σ₁, σ₂ > 0 “modulate” the springs’ stiffness; the direction of the spring force is calculated by the use of the “signum” function as sign(q₁ − L₀₁). The approximate and exact model parameter values are given in Table 1.
In the Kinematic Block the prescribed “tracking strategy” for the integrated tracking error e_int was of the form (d/dt + Λ)³ e_int(t) = 0, which leads to a PID-type feedback; this choice guarantees the convergence of the error in the simulations. Λ = 6 s⁻¹ was chosen, with ζ(x) = atanh(tanh(x + D)/2), D = 0.3 in (6). The choice Ac = −5×10⁻¹ resulted in good convergence. Figures 2–6 illustrate the effects of using the adaptive deformation. It is evident that the tracking precision was considerably improved without any chattering effect of the kind that is typical in the also simple Sliding Mode / Variable Structure controllers (e.g., [53,54]). Figure 5 reveals that quite different control forces were applied in the non-adaptive and in the adaptive cases.
The essence of the adaptivity is revealed by Figure 6. In the non-adaptive case considerable PID corrections are added to the nominal acceleration q̈^N, therefore the desired acceleration q̈^Des considerably differs from q̈^N; in the lack of adaptive deformation this desired value is identical to the signal fed into the Approximate Model. However, the difference between the desired and the realized 2nd time-derivatives is quite considerable if no adaptive deformation is applied. In contrast to that, in the adaptive case q̈^Des is in the vicinity of q̈^N, because only small PID corrections are needed if the trajectory tracking is precise. This desired value is very close to the realized 2nd time-derivatives, which considerably differ from the adaptively deformed value q̈^Def. That is, quite considerable adaptive deformation was needed for precise trajectory tracking due to the great modeling errors.
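To make the structure of Figure 1 concrete, here is a minimal, 1-DoF Python sketch of one possible realization of the loop; the point-mass models, gains, and parameter values are invented for illustration and do not reproduce the paper's spring example.

```python
import math

# Hedged 1-DoF sketch of the block scheme of Figure 1: in each digital
# control cycle the kinematically prescribed desired acceleration is
# adaptively deformed (one FPI step per cycle), the Approximate Model
# turns the deformed signal into a control force Q, and Q acts on a
# deliberately mismatched Exact Model. Everything here is illustrative.

dt, Lam = 1e-3, 6.0                  # cycle time and tracking parameter
Kc, Bc, Ac = 1.0e4, 1.0, -2.0e-4     # adaptive parameters (tuned for this toy)

m_approx, m_exact = 1.0, 2.3         # mismatched point-mass models
def force_from_approx_model(qdd_def: float) -> float:
    return m_approx * qdd_def        # Approximate Model: Q = m_a * qdd_def

def exact_response(Q: float) -> float:
    return Q / m_exact               # Exact Model: realized qdd = Q / m_e

q, qd, e_int, x_def = 0.0, 0.0, 0.0, 0.0
for k in range(5000):
    t = k * dt
    qN, qNd, qNdd = math.sin(t), math.cos(t), -math.sin(t)   # nominal motion
    e = qN - q
    e_int += e * dt
    # Kinematic Block: (d/dt + Lam)^3 acting on the integrated error
    # yields a PID-type desired acceleration.
    qdd_des = qNdd + 3*Lam*(qNd - qd) + 3*Lam**2*e + Lam**3*e_int
    if k == 0:
        x_def = qdd_des              # iteration initiated by the undeformed value
    # Adaptive Deformation Block: one FPI step per control cycle, using
    # the response observed for the previous deformed input.
    resp = exact_response(force_from_approx_model(x_def))
    x_def = (x_def + Kc) * (1.0 + Bc * math.tanh(Ac * (resp - qdd_des))) - Kc
    # Apply the force computed from the deformed signal to the exact system.
    qdd = exact_response(force_from_approx_model(x_def))
    qd += qdd * dt
    q += qd * dt

print(f"tracking error after {5000*dt:.1f} s: {math.sin(5000*dt) - q:+.3e}")
```

Because the contraction condition involves Kc·Ac times the response gradient, the sign and magnitude of Ac must match the plant; this is the tuning freedom the text above refers to.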
Further Possible Applications and Development
The applicability of the FPI-based adaptive control design methodology was investigated in various potential fields of application. In 2012 in [55] an adaptive emission control of freeway traffic was suggested by the use of the quasistationary solutions of an approximate hydrodynamic traffic model. In [56] an FPI-based adaptive control problem of relative order 4 was investigated. In [57] FPI-based control of the Hodgkin-Huxley Neuron was considered. In [58] the possible regulation of Propofol administration through wavelet-based FPI control in anaesthesia control was investigated.
In [59] the application of the FPI-based control in treating patients suffering from “Type 1 Diabetes Mellitus” was studied. The simplicity of the FPI-based method opened new prospects in the possible design of adaptive optimal controllers. In [60] the contradiction between the various requirements in OC was resolved in the case of underactuated mechanical systems in the following manner: instead of constructing a “cost function contribution” for each state variable the motion of which needed control, consecutive time slots were introduced within which only one of the state variables was controlled with FPI-based adaptation. (The different sections may correspond to control tasks of different relative order.) In [61] it was pointed out that the FPI-based control can be easily combined with the mathematical framework of the “Receding Horizon Controllers” (RHC) (e.g., [62]). (A combination with the Lyapunov function-based adaptive approach would be far less plausible and simple.) In [49] the approach was extended to the control of systems with time-delay. The possibility of fractional order kinematic trajectory tracking prescription in the FPI-based adaptive control was studied, too [63].
In [64] its applicability was investigated in treating angiogenic tumors. In [65,66] further simplification of the adaptive RHC control was considered, in which the reduced gradient algorithm was replaced by an FPI in finding the zero gradient of the “Auxiliary Function” of the problem. In [67] the applicability of the method was experimentally verified in the adaptive control of a pulse-width modulation driven brushless DC motor that did not have satisfactory documentation (FIT0441 Brushless DC Motor with Encoder and Driver) and was braked by external forces, simply by periodically grabbing the rotating shaft with one’s two fingers. The solution was based on a simple Arduino UNO microcontroller with the adaptive function defined in (3) embedded in the motor’s control algorithm. In spite of using 2nd time-derivatives in the feedback, no special noise filtering was applied. The measured and computed data were visualized by a common laptop. As can be seen in Figure 7, the rotational speed was kept almost constant (in spite of the very noisy measurement data), and the adaptive deformation and the control signal were well adapted to the external braking forces, in harmony with the simulation results belonging to the “Illustrative Example” in subsection 2.3.
Figure 7: The experimental setup used for the verification of the FPI-based adaptive control in the case of a pulse-width modulated brushless electric DC motor; the nominal and the realized rotational speed (the average of the whole data set was 59.9383 rpm, the nominal constant value was 60 rpm); the “Desired” and adaptively “Deformed” 2nd time-derivatives of the rotational speed; the control signal (from [67], courtesy of Tamás Faitli).

In [68] the novel adaptive control approach was considered from the side of the Lyapunov function-based technique, and it was found that it can be interpreted as a novel methodology that is able to drive the Lyapunov function near zero and keep it in its vicinity afterwards. On this basis a new MRAC controller design was suggested in [69] that has similarity with the idea of the “Backstepping Controller” [70,71].
Conclusion
The FPI-based adaptive control approach was introduced at Óbuda University with the aim of evading the mathematical difficulties and restrictions, as well as the information need, related to the traditional Lyapunov function-based design. Its main point was the transformation of the control task into a fixed-point problem that was iteratively solved on the firm mathematical basis of Banach’s fixed point theorem. In the center of the new approach, instead of the requirement of global stability, the precise realization of a kinematically (kinetically) prescribed tracking error relaxation was placed as the primary design intent. In contrast to the traditional soft computing approaches, such as fuzzy, neural network and neuro-fuzzy solutions, which normally apply huge structures with an ample number of parameters as universal approximators of continuous multiple variable functions on the basis of Kolmogorov’s approximation theorem (e.g., [72-74]), this approach has only a few independent adaptive parameters that can be easily set, and one of them can be tuned for maintaining the convergence of the control algorithm. It was shown that the simplicity of this approach allows its combination with more “traditional” approaches, such as those learning the exact model parameters of the controlled system, and at various levels of the optimal controllers, such as the RHC control. On the basis of ample simulation investigations it can be stated that the suggested approach has a wide area of potential applications (in the control of mechanical devices, in life sciences, traffic control, etc.) where the presence of essential nonlinearities, the lack of precise and complete system models, and limited possibilities for obtaining information on the controlled system’s state are the main difficulties. It seems to be expedient to invest more effort into experimental investigations.
Acknowledgement
The authors express their gratitude to the Antal Bejczy Center for Intelligent Robotics and the Doctoral School of Applied Informatics and Applied Mathematics for supporting their work.
For More Open Access Journals Please Click on: Juniper Publishers
For More Articles Please Visit: Robotics & Automation Engineering Journal
Beauty, A Social Construct: The Curious Case of Cosmetic Surgeries | Juniper Publishers
Juniper Publishers-Open Access Journal of Dermatology & Cosmetics
Authored by Vandana Roy
Abstract
In this article we deconstruct the social norm of beauty and cosmetic beauty treatment, an issue that is seldom discussed in medical circles and is often lost to popular rhetoric. In doing so, we also reflect on the institutionalized system of social conditioning.
A Historical Perspective
Cosmetic surgery, as with reconstructive surgery, has its roots in plastic surgery (emerging from the Greek word ‘plastikos’, meaning to mold or form). The practice of surgically enhancing or restoring parts of the body goes back more than 4000 years. The oldest accounts of rudimentary surgical procedures are found in Egypt in the third millennium BCE. Ancient Indian texts of 500 BCE outline procedures for amputation and reconstruction. The rise of the Greek city-states and the spread of the Roman Empire are also believed to have led to increasingly sophisticated surgical practices. Throughout the early Middle Ages as well, the practice of facial reconstruction continued. The fifth century witnessed the rise of barbarian tribes and Christianity and the fall of Rome, which prevented further developments in surgical techniques. However, medicine benefited from scientific advancement during the Renaissance, resulting in a higher success rate for surgeries. Reconstructive surgery experienced another period of decline during the 17th century but was revived in the 18th century. The nineteenth century provided impetus to medical progress and a wider variety of complex procedures, including the first recorded instances of aesthetic nose reconstruction and breast augmentation. Advancements continued in the 20th century and carried into the present developments of the 21st century.
Desires and Demands in Contemporary Times
In recent years, the volume of individuals seeking cosmetic procedures has increased tremendously. In 2015, 21 million surgical and nonsurgical cosmetic procedures were performed worldwide. In the United Kingdom specifically, there has been a 300% rise in cosmetic procedures since 2002. The year 2016 witnessed a surge in the number of such treatments, with the United States crossing four million operations. Presently, the top five countries in which the most surgical and nonsurgical procedures are performed are the United States, Brazil, South Korea, India, and Mexico. Such demand can be viewed from different perspectives. On one hand it is a product of scientific progress, growing awareness, economic capacity and easier access; on the other, something along the lines of a self-inflicted pathology. This article dwells on the latter and attempts to address a deep-rooted problem of the social mind.
Lessons from History
History is witness to a number of unhealthy fashion trends, many of which today appear extremely irrational and even cruel. Interestingly, the common thread connecting all of them is the reinforcement of social norms and stereotypes: forms of socialization which lie at the intersection of race, class and gender-based prejudices. To elaborate, hobble skirts and chopines restricted women’s movement and increased their dependence on others. Corsets deformed body structures, damaged organs and led to breathing problems. The Chinese practice of binding women’s feet to limit physical labor was regarded as a sign of wealth. Dyed crinolines and 17th century hairstyles made people vulnerable to poisoning and fire-related injuries. Usage of makeup made of lead and arsenic, eating chalk and ‘bloodletting’ reflected a blatantly racist obsession with white and pale skin. Lower classes faked gingivitis to mimic the tooth decay of the more privileged who had access to sugar. Furthermore, other practices like tooth lacquering, radium hair colors, mercury-ridden hats, usage of belladonna to dilate pupils and even men wearing stiff high collars all furthered societal expectations and notions of class superiority. Till the 1920s, there was rampant usage of side lacers to compress women’s curves. Even today many ethnic tribes continue with practices which inflict bodily deformations. In the urban context as well, trends like high heels, skinny jeans and excessive makeup dominate the fashion discourse. Cosmetic procedures are the latest addition to the kitty.
The Social Dilemma
What is it that leads the ‘intelligent human’ of today to succumb to archaic and regressive notions of beauty? What motivates them to risk aspects of their lives to cater to self-limiting rules of ‘acceptance’? The surprising part is that this anomaly is often placed in the illusory realm of ‘informed consent’. In common parlance, ‘to consent’ implies voluntary agreement to another’s proposal. The word ‘voluntary’ implies ‘doing, giving, or acting of one’s own free will’. However, when the entire socio-cultural set-up and individual attitudes validate certain behaviors, there is very little space left for an alternate narrative, let alone free will.
Pierre Bourdieu once argued that nearly all aspects of taste reflect social class. Since time immemorial, societal standards of beauty have provided stepping stones to social ascent and class mobility. Better ‘looking’ individuals are considered to be healthier, more intellectually skilled and more economically accomplished in their lives. Such an understanding stems from well-entrenched stereotypes in complete disregard of individual merit and fundamental freedoms. An inferiority complex coupled with external pressures and self-imposed demands subconsciously coerces individuals into a vicious cycle of desire or rejection. Active and aggressive media has played a key role in forming societal perceptions of what is attractive and desirable. In addition, lifestyle changes reflect an image-obsessed culture, reeking of deep-rooted insecurities. At the root of a submissive and conformist attitude lies a subconscious mind lacking self-esteem and self-worth. People continue to look for remedies in the wrong places. The only difference is that corsets and bloodletting have given way to surgeries and cosmetic products. The biggest question is, how have ideas otherwise seen as deviant, problematic and inadequate retained control over the minds of millions of individuals?
A Gendered Culture
‘Beauty’ is understood as a process of ongoing work and maintenance, its ‘need’ unfairly tilted towards the fairer sex. History has demonstrated the impact of dangerous beautification practices on women. Contemporary ideals aren’t far from reaching similar outcomes. Today, there is a powerful drive to conform to the pornographic ideal of what women should look like. There has been a growth in the number of adolescents who take to cosmetic surgeries to become more ‘perfect’. In many countries, the growth of the “mommy job” has provoked medical and cultural controversies. Presumably there is an underlying dissatisfaction which surgery does not solve. Furthermore, where does the disability dimension fit in here? What happens to the ‘abnormal’ when the new ‘normal’ itself is skewed? For those with dwarfism and related disorders, new norms become even more burdensome.
The massive pressure to live up to some ideal standard of beauty, particularly for women, reeks of patriarchal remnants of a male-dominated society. This kind of conformity further nurtures objectification and sexualization, reducing women to the level of ‘chattel’ to feed the male gaze. There is also a power struggle at play, where biased standards help maintain the unequal status quo. Today, there is idolization of celebrities, beauty pageants and advertisements by cosmetic companies over sane medical advice. They set parameters of size, color and texture to be followed by the world at large. Moreover, people who deviate from such norms are made to feel stigmatized or ostracized from social spheres. The existence of male-supremacist, ageist, heterosexist, racist, class-biased and, to some extent, eugenicist standards reflects a failure of society as a whole. It is thus high time that we revisit and deconstruct skewed standards of beauty.
Mind Over Matter: Psychological Dimensions
Culturally imposed ideals create immense pressure to conform. Consequently, they have been successful in engendering insecurities via their influence on perception of self and body image. Such perceptions often become distorted and discordant with reality, leading to serious psychological disorders. One such disorder is body dysmorphic disorder (BDD). This is a psychiatric disorder characterized by preoccupation with an imagined defect in physical appearance or a distorted perception of one’s body image. It also has aspects of obsessive-compulsiveness, including repetitive behaviors and referential thinking. Such preoccupation with self-image may lead to clinically significant distress or impairment in social and occupational functioning. With reference to cosmetic surgeries, patients with BDD often possess unrealistic expectations about the aesthetic outcomes of these surgeries and expect them to be a solution to their low self-confidence. Many medical practitioners who perform cosmetic surgery believe themselves to be contributing towards the construction of individual identity as well. The notion that beauty treatments can act in much the same way as psychoanalysis has led countries like Brazil to open the gates of cosmetic procedures to lower-income groups. This happens while the country continues its battles with diseases like tuberculosis and dengue. The philosophy behind such ‘philanthropy’ is that ‘beauty is a right’ and thus should be accessible to all social groups. While on one hand we may applaud such efforts at creating a more ‘egalitarian’ social order, on the other hand it is hard to overlook the self-evident undercurrents of social prejudice and capitalistic propaganda.
Medicalization of Beauty
Traditional notions of beauty embody a kind of hierarchy and repression which alienates individual agency and renders individuals powerless victims. Such is the societal pressure which normalizes cosmetic procedures and downplays their serious health effects. These include adverse effects of cosmetic fillers such as skin necrosis, ecchymosis, granuloma formation, irreversible blindness and anaphylaxis, among others, as well as other dangers like heightened susceptibility to cancer and increased suicide rates. However, patients are often unaware of the risks, which are hidden behind a veil of expectations and reassurances. Furthermore, quackery and inadequate standards, such as lack of infection control, also compound the problems of this under-regulated field.
Role of Stakeholders
At the heart of any successful social transformation lies the power of united will and collective action. Thus, consolidated and sustained effort by all stakeholders is the key to realizing an ecosystem conducive to tackling negative social norms. At the outset, government regulation is needed with respect to cosmetic procedures and the cosmetics industry. These regulations should encompass all private and public avenues and should also work against misleading advertising. Spreading awareness is the key to a better-informed society. The state should fund and run specialized awareness sessions pertaining to psychological problems and aid mechanisms, gender sensitization, as well as sessions aiming at the spiritual and introspective personal development of individuals. NGOs, medical professionals, academicians and members of civil society must come together to eradicate forms of social discrimination which undermine social institutions and individual agency around the world. This would help facilitate discussion, data collection, coalition building, and action that may eventually lead to behavioral changes.
Aesthetic surgery today seems to be passing through an ethical dilemma and an identity crisis. And rightly so for it strives to profit from an ideology that serves only vanity, bereft of real values. Nevertheless, there are exceptional cases where medical-aesthetic inputs have been vital in restoring morale by subverting stigmatization.
The Way Ahead
Beauty is unfair. The ‘attractive’ enjoy powers gained without merit. The perfectionist in humans seeks outward validation of external beauty over inner virtues. Scientific progress and an increase in human expertise to manipulate natural phenomena has paved the way for these desires to become a reality. There is no denying that advances in plastic and reconstructive surgery have revolutionized the treatment of patients suffering from disfiguring congenital abnormalities, burns and skin cancers. However, the increased demand for aesthetic surgery falls short of a collective psychopathology obsessed with appearance. This article expresses trepidation about such forms of social consciousness that first generates dissatisfaction and anxiety and then provides surgery as the solution to a cultural problem.
We have to work towards a social order which embraces people as they are and facilitates free choice, individual liberty and informed decision making. This is particularly pertinent when these decisions work towards framing cultural perceptions and expectations for millions around the world. We should open our hearts to the diversification of beauty and aesthetics. Let our entertainment, fashion, capital and media revolve around a heterogeneity of ideologies and cultures. In the words of Eleanor Roosevelt, “No one can make you feel inferior without your consent”. So, let us all come together and create a better society: a society where principles of justice, equity, good conscience and humanity override primitive and archaic ideologies of naive men; a society where the individual will be truly free and discourse a product of informed thought.
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Dermatology & Cosmetics please click on: https://juniperpublishers.com/jojdc/
To know more about Open Access Journals Publishers
To read more…Fulltext please click on: https://juniperpublishers.com/jojdc/JOJDC.MS.ID.555556.php
Analysis of Depression and Anxiety Among Patients Undergoing Surgery for Breast Cancer | Juniper Publishers
Juniper Publishers-Open Access Journal of Otolaryngology
Authored by Muhammad Ahmad 
Abstract
Introduction: Pakistan is a developing country where up to 70% of women present when breast cancer is in its advanced stage. Advanced breast cancer is cancer that is metastatic. Advanced breast cancer is a life-threatening disease with a poor prognosis profile. Women with a diagnosis of advanced breast cancer engage in a multi-stage cancer treatment cycle often involving surgery, radiation treatment and chemotherapy.
Aims and Objectives: The basic aim of the study is to analyze the sources of distress among patients undergoing surgery for breast cancer in Pakistan.
Material and Methods: This study was conducted at Niazi Teaching Hospital, Sargodha during Dec 2017 to May 2018. Using a purposive sampling strategy, 14 adult female breast cancer patients were selected for this study with variations in their age, educational level, socioeconomic status, and number of exposures to RT. Data were collected through the recording of the face-to-face in-depth interviews, using a semi-structured interview guide.
Results: A total of 14 female breast cancer patients participated in this study. Their age ranged between 20 and 60 years, with an average of 35 years. The majority (79%) of them were married. About 50% of them were illiterate, whereas 43% were matriculate. All of them were Muslims and of Pathan ethnicity. Before RT, all of them had mastectomy of the affected breast, followed by chemotherapy.
Conclusion: It is concluded that a substantial number of adult cancer patients were depressed and had suicidal ideation, causing significant functional impairment.
Introduction
Pakistan is a developing country where up to 70% of women present when breast cancer is in its advanced stage. Advanced breast cancer is cancer that is metastatic. Advanced breast cancer is a life-threatening disease with a poor prognosis profile. Women with a diagnosis of advanced breast cancer engage in a multi-stage cancer treatment cycle often involving surgery, radiation treatment and chemotherapy. These cycles of treatment are not free of side effects. Women must face possible disfigurement, surgical pain, and the side effects of chemotherapy, which can include feelings of anger, frustration, fear, isolation and fatigue, as well as burns from targeted radiotherapy [1]. Globally, breast cancer is the most common cancer among women, and a leading cause of cancer-related deaths in this gender. It accounts for 23% of all cancer cases worldwide. The incidence of breast cancer has been increasing rapidly in the developing countries. Among the Asian countries, Pakistan has the highest prevalence of breast cancer, where one in every nine women is at risk of developing breast cancer [2]. In western countries, breast cancer is prevalent among women aged 60 years and above, whereas in Asian countries, including Pakistan, it occurs during the reproductive age between 30 and 50 years. Hence, women with breast cancer in Pakistan may face more challenges due to household and child-rearing responsibilities, as compared to those living in western countries [3]. According to the American Society for Radiation Oncology, radiotherapy (RT) is a common treatment modality for cancer which is prescribed to about two-thirds of cancer patients, either before or after surgery. In breast-conserving surgery, RT reduces the chances of recurrence as well as the risk of metastasis and death from breast cancer [4,5].
Aims and Objectives
The basic aim of the study is to analyze the sources of distress among patients undergoing surgery for breast cancer in Pakistan.
Material and Methods
This study was conducted at Niazi Teaching Hospital, Sargodha during Dec 2017 to May 2018. Using a purposive sampling strategy, 14 adult female breast cancer patients were selected for this study with variations in their age, educational level, socioeconomic status, and number of exposures to RT. Data were collected through the recording of the face-to-face in-depth interviews, using a semi-structured interview guide.
Statistical Analysis
Student’s t-test was performed to evaluate the differences in roughness between groups P and S. Two-way ANOVA was performed to study the contributions. A chi-square test was used to examine the difference in the distribution of the fracture modes (SPSS 19.0 for Windows, SPSS Inc., USA).
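The tests named here can also be reproduced outside SPSS. The snippet below is a minimal sketch with synthetic placeholder data (the group labels, sample sizes and counts are invented), showing an independent-samples t-test and a chi-square test of a contingency table; a two-way ANOVA would additionally require a package such as statsmodels.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the named tests on synthetic data; all numbers
# are placeholders, not the study's measurements.

rng = np.random.default_rng(0)
group_p = rng.normal(1.2, 0.3, size=20)   # a continuous measurement, group P
group_s = rng.normal(1.5, 0.3, size=20)   # the same measurement, group S

# Student's t-test for a difference in means between the two groups
t_stat, p_val = stats.ttest_ind(group_p, group_s)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test for a difference in the distribution of categorical
# outcomes (rows: groups, columns: outcome categories)
table = np.array([[12, 8],
                  [5, 15]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```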
Results
A total of 14 female breast cancer patients participated in this study. Their age ranged between 20 and 60 years, with an average of 35 years. The majority (79%) of them were married. About 50% of them were illiterate, whereas 43% were matriculate. All of them were Muslims and of Pathan ethnicity. Before RT, all of them had mastectomy of the affected breast, followed by chemotherapy. From the analysis of the interview data, each category and its subcategories are described below with some excerpts from the participants’ narratives (Tables 1 & 2).
Discussion
Breast cancer is the second most prevalent type of cancer and is equally common in developing and developed countries. The treatment expenditure of breast cancer is a burden not only for people diagnosed with cancer but also for their families and society as a whole. According to the American Cancer Society (2010), breast cancer is one of the top three types of cancer that cause the most economic impact ($88 billion) [6]. Though successful treatment options are available to deal with breast cancer, the pain and suffering associated with available treatment modalities is significant. Chronic, persistent pain acts as an additional stressor for a person already suffering from many psychological, social and medical stressors [7].
Research has demonstrated an association between clinically relevant pain and breast cancer surgery in 10-50% of patients. There are pathogenic mechanisms involved in breast cancer, like nerve damage, and certain sensory disturbances (e.g., burning and sensory loss) are part of the side effects of surgical processes [8]. Breast cancer surgery is followed by chronic neuropathic pain syndromes like phantom breast pain (a sensory experience that is present even after removal of the breast and is painful), intercostobrachial neuralgia (pain in the distribution of the intercostobrachial nerve) and neuroma pain (pain in the region of the scar on the breast, chest or arm). Radical mastectomy is the most disfiguring type of breast cancer surgery and involves removal of the breast, the major and minor chest muscles, and lymph nodes [9]. Breast-conserving techniques, another treatment option, were expected to reduce psychiatric morbidity and sexual dysfunction, but none of the studies involving appropriate assessment of psychiatric morbidity showed any advantage of breast-conserving therapy [10].
Conclusion
It is concluded that a substantial number of adult cancer patients were depressed and had suicidal ideation, causing significant functional impairment. This study clearly demonstrated a significant association between pain complaint and depression among adult cancer patients.
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Otolaryngology please click on: https://juniperpublishers.com/gjo/
To know more about Open Access Journals Publishers
To read more…Fulltext please click on: https://juniperpublishers.com/gjo/GJO.MS.ID.556003.php
Use of Molecular Markers (Ssrs) and Public Databases in Vitis Vinifera L. as the Main Case of Efficient Crop Cultivar Identification | Juniper Publishers
Juniper Publishers-Open Access Journal of Horticulture & Arboriculture
Authored by Ornella Calderini 
Keywords
Keywords: Biodiversity; Autochthonal grape varieties; Morphological descriptors; SSR markers; Database
Mini Review
Grape (Vitis vinifera L.) is a main crop providing both fresh produce (table grape) and a major transformed beverage (wine) of ancient origin. It is grown in several countries of the world, producing relevant business value, as 8 million hectares of vineyards are estimated at the global level (http://www.fao.org/faostat). Despite the slowdown of the wine market after the financial crisis, the new century brought about a growth of 75% in volume and a doubling in value in 15 years in the international market [1].
Loss of biodiversity has occurred in grape as in many other crops because of the large cultivation of a limited number of highly productive and widely adapted cultivars, therefore the scientific community supported by different stakeholders has undertaken a large effort to preserve minor vines in different types of collections (at public institutions but also in private farms).
The number of existing grape cultivars is generally estimated at several thousand [2]; however, the Working Group on Vitis of the European Cooperative Programme for Plant Genetic Resources (ECPGR) reports 27,000 accessions of grape held in European collections alone (first meeting, 2003), and in a second report (2012) the group was still seeking to solve the problem of cultivar synonyms and the presence of duplications, in spite of the passport data available for about 35,000 accessions from European countries.
In general, local produce has gained a large appeal with consumers, including local wines. Consumers perceive local foods as a means to improve the sustainability of the system by reducing the carbon and water footprint and by providing new sale opportunities for local wineries, especially of small-medium size. Marketing of traditional varieties also exploits the perception of consumers of being part of biodiversity conservation connected to historical aspects of the region, and it uses adjectives such as “indigenous, rare, neglected, recovered”. Concerning grape, the development of oenotourism has raised awareness in consumers of the so-called terroir and the demand for local wines produced using autochthonal grape varieties. In addition, there is a general trend to drink wines with a lower alcohol and phenolics content, which is typical of older varieties [3].
In view of the large number of vines reported worldwide, a proper identification system is necessary. The aims of plant variety identification are several, including breeding programs, cultivar registration and protection, and subsequently seed (or cutting) production and trade. Cultivar identification is usually addressed via morphological descriptors and (more recently) molecular markers. Discrimination between varieties within a species may be difficult. PBR/PVR registration requires varieties to be morphologically distinct, uniform and stable (DUS) [4]; however, field trials and testing for varietal identification or verification based on morphological criteria may be costly and time-consuming, and environmental factors can influence the expression of morphological traits. Morphological descriptors have been used for many years in grape cultivar identification, and they have been coded by the Organisation Internationale de la Vigne et du Vin (OIV) for more than 600 different traits (2nd edition of the OIV descriptor list for grape varieties and Vitis species). Ampelography is the term used to refer to grape morphological analysis as a science to distinguish grapevines from a phenotypical point of view [5]. In more recent times, software tools such as Superampelo have been developed to aid morphological scoring assisted by image analysis [6]. Interestingly, a morphometric analysis of leaf shape via Elliptical Fourier Descriptors (EFD) and generalized Procrustes analysis, combined with a Genome Wide Association Study (GWAS) of 1,200 USDA cultivars, allowed the discovery of genomic regions associated with leaf morphological traits [7].
Recently, molecular markers based on DNA have become an essential tool for genotyping and therefore for the identification of plant (grape) varieties. Molecular markers have the advantage of being stable and independent of the environmental conditions, compared to morphological descriptors. Among the several types of markers (reviewed recently in [8]), the grape scientific community has chosen Simple Sequence Repeats (SSR) for cultivar identification because of their combination of polymorphism, reproducibility, and codominant nature [9,10]. Two European projects (Genres081/GrapeGen06) allowed the same community to agree on a set of 6 SSR markers (VVS2, VVMD5, VVMD7, VVMD27, VrZag62, VrZag79), added to the OIV register in 2009, which was later extended by 3 more (VVMD25, VVMD28 and VVMD32), as reported in Table 1. Alleles are conveniently expressed as the relative base pair distance to the shortest allele size (n) found within Genres081. It is considered that two different plants having the same profile at the 9 SSR loci represent the same grape genotype. The mentioned SSR set is not able to distinguish among somatic variants, which are common in grape as natural mutations occurring because of the vegetative type of propagation; those showing modifications in major traits can be commercialized as new cultivars and their identification is of economic importance.
The important step undertaken by the grape community was to establish public databases that store the OIV descriptors with a particular focus on the SSR data. Every laboratory can undertake cultivar analysis by using a reference cultivar from the database itself to adjust the lengths of the 9 SSRs; such lengths may in fact vary by a few bases due to experimental error depending on laboratory conditions. Databases accept novel SSR data after proper standardization. A few databases are available, as reported in Table 2. Many thousand genotypes are stored, and by standardization of the analysis it is possible to compare the SSR profile of an unknown vine against the stored profiles and verify its identity; it is worth mentioning that SSR profiles are updated constantly.
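As a concrete illustration of how such a database lookup works, here is a minimal Python sketch; the locus names are the standard nine, but every allele size, cultivar name and the simple integer-offset calibration are invented placeholders, not real reference profiles.

```python
from typing import Dict, List, Tuple

Profile = Dict[str, Tuple[int, int]]   # locus -> sorted pair of allele sizes (bp)

# For brevity only 3 of the 9 standard loci are shown; a real comparison
# would use all of VVS2, VVMD5, VVMD7, VVMD27, VrZag62, VrZag79,
# VVMD25, VVMD28 and VVMD32. All sizes below are made-up placeholders.
reference_db: Dict[str, Profile] = {
    "Cultivar_X": {"VVS2": (133, 143), "VVMD5": (226, 236), "VVMD7": (239, 243)},
    "Cultivar_Y": {"VVS2": (131, 143), "VVMD5": (226, 232), "VVMD7": (239, 253)},
}

def calibrate(profile: Profile, offset: int) -> Profile:
    """Shift all allele sizes by a lab-specific offset estimated from a
    reference cultivar run alongside the unknown (few-bp machine drift)."""
    return {locus: (a + offset, b + offset) for locus, (a, b) in profile.items()}

def identify(unknown: Profile) -> List[str]:
    """Return the database cultivars whose profile matches at every locus."""
    return [name for name, ref in reference_db.items()
            if all(ref[locus] == unknown.get(locus) for locus in ref)]

unknown_vine = calibrate(
    {"VVS2": (132, 142), "VVMD5": (225, 235), "VVMD7": (238, 242)}, offset=1)
print(identify(unknown_vine))   # -> ['Cultivar_X']
```

The calibration step mirrors the practice described above of adjusting measured lengths against a shared reference cultivar before querying the database.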
In Italy the CREA-VE Center at Conegliano Veneto (TV) offers a service of grape identification (http://sito.entecra.it/portale/cra_avviso.php?id=13755&tipo=&lingua=IT) based on an updated core set of 11 SSRs and analysis of its own database of genotypes [11]. The new SSR toolkit can distinguish somatic variants with particular reference to berry colour [12].
Given the large effort of the international scientific grape community to collect, standardize and share several types of cultivar descriptors, we envisage grape as the main crop whose cultivar identification is best coded and approachable by small-medium size laboratories at the molecular level.

[1] Pomarici E (2016) Recent trends in the international wine market and arising research questions. Wine Economics and Policy 5: 1-3.
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Horticulture & Arboriculture please click on: https://juniperpublishers.com/jojha/classification.php
To know more about Open Access Journals Publishers
To read more…Fulltext please click on: https://juniperpublishers.com/jojha/JOJHA.MS.ID.555576.php
Epilepsy and its Association with Depression | Juniper Publishers
Juniper Publishers-Open Access Journal of Toxicology
Authored by Sukaina Rizvi
Abstract
Depressive disorder is a frequent comorbid psychiatric condition in patients with epilepsy. It is more common in patients with temporal lobe epilepsy and frontal lobe seizures. Research has revealed a strong correlation between these two conditions. The early recognition of depressive symptoms in an epileptic patient is a predictor to improve quality of life. Besides treating epilepsy, antiepileptics have a role in treating nonepileptic conditions like mood disorders and pain syndromes. However, it is to be considered that certain antiepileptics decrease seizure threshold and increase seizure frequency.
Introduction
Over the years, a significant amount of research has been conducted showing a relationship between epilepsy and depression. Epilepsy and depression are common conditions and often occur together. Approximately 40-60% of people with epilepsy are affected by depressive symptoms [1]. This is a review article highlighting the strong association between the two entities. The main idea behind this review article is to encourage practitioners to keep a close eye on symptoms of depression in people with epilepsy and to treat them accordingly, which can have a positive impact on their quality of life.
Discussion
What is epilepsy? It is important to understand epilepsy on an individual basis before moving further. Epilepsy, or a “seizure disorder”, is a neurological condition affecting people of all ages. It involves a spectrum of various kinds of seizures, each presenting in a unique way from person to person. The two terms epilepsy and seizure are used interchangeably; however, they differ in the context of frequency of occurrence, as a seizure is a single occurrence and epilepsy is two or more unprovoked seizures. According to the Epilepsy Foundation there are about 3.4 million people in the United States who have epilepsy, and there are 150,000 new cases of epilepsy in the United States each year. It is also evident from a systematic review and meta-analysis by A. K. Ngugi et al. that the median incidence of epilepsy is almost twice as high in low-income countries as in high-income countries [2]. The cause of epilepsy could be familial, secondary to stroke, brain infection or traumatic brain injury, or idiopathic. Diagnosis requires a multidisciplinary approach including clinical presentation along with EEG, CT scan of the head, MRI, neuropsychological testing and blood work. There are some seizures which present with normal findings on EEG. These are called pseudo-seizures and require a detailed evaluation by a psychiatrist.
Epilepsy tends to impact a person on physical and psychological grounds, as the occurrence of a seizure is often uncertain. This can lead to an increased risk of mood disorders, physical trauma, cognitive issues, behavioral disturbances, depression, hospitalizations and mortality [3]. It is evident from a survey in the UK that people with epilepsy tend to suffer from anxiety and sleep disorders more than people without epilepsy [4]. These sleep disturbances and anxiety can significantly affect the quality of life in a negative way, predisposing a person to develop depression.
As the focus of medicine has transitioned to research, we are now able to uncover that depression and epilepsy often coexist. It is approximated that the lifetime prevalence of depression in correlation with epilepsy is about 55% [5]. The exact cause of this association is still debatable, as various mechanisms explain this link. People with depression have sleep deprivation, which can decrease the seizure threshold and increase seizure frequency. Preictal psychiatric symptoms usually consist of a constellation of symptoms preceding a seizure, lasting from minutes to days, including the prodromal symptoms of depressed mood and irritability, which are relieved after the onset of the seizure or, in some cases, a few days after the seizure activity [6]. Interictal depression or dysphoria consists of brief episodes of crying spells, feelings of worthlessness, anhedonia, helplessness and hopelessness, which usually last less than 30 seconds. In addition, interictal depression is also manifested by agitation, psychotic disturbances and impulse control issues, which can ultimately predispose to increased suicidal tendencies [5,7]. It is important to recognize all these phases, as their prompt recognition and immediate treatment can prevent seizure activity and would also improve quality of life.
It is stated that depression affects parts of the limbic system of the brain, which includes the amygdala, a center for emotional/stress responses, and the hippocampus, which has a role in cognition. This results in reduced hippocampal volume and functional or physical alteration of the amygdala. Research publications have demonstrated an increased risk of depression in patients with temporal lobe epilepsy [5].
This is supported by the observation that temporal lobe epilepsy refractory to antiepileptic medications can lead to hippocampal sclerosis [8]. Studies have shown some correlation indicating that people who have hippocampal sclerosis had a history of febrile convulsive seizures in childhood. Also, there is a study on infants with complex febrile seizures validating that sometimes complex prolonged febrile seizures can lead to acute hippocampal injury which later evolves into hippocampal atrophy [9]. This phenomenon could also explain an association between epilepsy and depressive symptoms secondary to reduced hippocampal volume.
Antiepileptics also have a significant role in various psychiatric disorders, where they are primarily used for mood stabilization and for treating anxiety. However, the effects of antiepileptics in terms of their therapeutic benefits and side-effect profile vary from person to person. It is important to consider that studies performed on one group of people on AEDs should not be extrapolated to another group. This is even more significant in patients with epilepsy, where there is considerable variation in response to these drugs. Research has shown that people with epilepsy on antiepileptics are more predisposed to an increased risk of depression compared to other populations [10]. According to Siddhartha, certain antiepileptics are notorious in this respect, including levetiracetam, ethosuximide, phenytoin, topiramate, etc., which may precipitate underlying depression or anxiety. However, it is interesting to note that some AEDs, like lamotrigine, have beneficial antidepressant effects [10,11]. It is stated in publications that each AED acts through unique mechanisms which alter the electrochemical gradient, resulting in positive or negative behavioral changes. These mechanisms include GABAergic modulation, either through stimulating chloride channels or inhibiting GABA uptake, and inhibition of voltage-gated sodium channels [12]. The Landolt hypothesis of forced normalization should also be taken into consideration regarding the behavioral manifestations of AEDs; it states the possibility of depressive symptoms after diminution of epilepsy either through surgery or use of AEDs [5].
There is evidence suggesting that tricyclic antidepressants and MAOIs have a dose-dependent potential to decrease the seizure threshold. Bupropion has also been shown to decrease the seizure threshold at all doses, and there are now reported cases in which bupropion has led to seizure activity even in its extended-release formulation. Alternatively, second-generation antidepressants (SSRIs) like sertraline, paroxetine and escitalopram do not lower the seizure threshold and can be safely used for treating depression in epileptic patients [13,14]. There could also be a strong connection among depression, epilepsy and suicide, as people with MDD may harm themselves by over-ingesting antidepressants, which can be lethal and cause seizures; on the other hand, people with epilepsy can become depressed over time with their illness and attempt suicide.
Conclusion
In the light of the above review, it is concluded that epilepsy and depression share a unique bidirectional relationship, as depression is the most frequent comorbidity in patients with epilepsy. Given their strong correlation, a clinician should use a holistic approach to identify depressive symptoms in epileptic patients. There is also a need to investigate any history of seizure disorder, as there is evidence suggesting hippocampal changes in these patients predisposing to depression in later life. It is imperative for practitioners to obtain a thorough drug history, monitor drug levels, and make the correct choice of antidepressants when treating depression in patients with epilepsy. This also necessitates collaboration between a neurologist and a psychiatrist to manage these conditions.
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Toxicology please click on: https://juniperpublishers.com/oajt/
To know more about Open Access Journals Publishers
To read more…Fulltext please click on: https://juniperpublishers.com/oajt/OAJT.MS.ID.555616.php
Pleomorphic Adenoma of the Tongue Base: A Case Report | Juniper Publishers
Juniper Publishers-Open Access Journal of Head Neck & Spine Surgery
Authored by Mutsukazu Kitano 
Abstract
An 86-year-old woman underwent bronchoscopy after developing aspiration pneumonia. She was found to have a tumor of the tongue base and was referred to our department. Fiberscopy revealed a pendulous mass at the tongue base. On computed tomography, a smooth pendulous mass (2cm × 1.7cm) was seen at the base of the tongue, with no deep invasion. The biopsy report indicated possible mucoepidermoid carcinoma. The risk of surgery was high due to her age and co-morbidities, so the patient and her family did not agree to resection of the tumor. Aspiration pneumonia recurred several times over several months, after which she could not take anything orally and became bedridden for weeks. To improve her quality of life by minimally invasive surgery, the tumor was excised transorally under general anesthesia. The pathological diagnosis was pleomorphic adenoma, and the surgical margins were negative. The patient’s postoperative course was good. Pleomorphic adenoma often arises from the major salivary glands, especially the parotid gland, but pleomorphic adenoma of the tongue base is rare. This case is reported along with a review of the literature.
Keywords: Pleomorphic adenoma; Tongue base; Surgery
Introduction
Pleomorphic adenoma is a common benign tumor in the field of otolaryngology/head and neck surgery. It often arises from the major salivary glands, especially the parotid gland, but also from the minor salivary glands of the oral cavity. Benign and malignant tumors of the minor salivary glands are usually found on the palate, upper lip, gums, cheek, floor of the mouth, pharynx, larynx, and trachea [1]. In contrast, pleomorphic adenoma of the tongue base is rare and only 13 cases have been reported (Table 1). Here we report a patient with pleomorphic adenoma of the tongue base and review the relevant literature.
Case Report
An 86-year-old woman had noted discomfort on swallowing for several years. She developed slight dysphagia and fever six months previously. Aspiration pneumonia was diagnosed by her local physician and she was treated with antibiotics. Although her symptoms resolved within a few days, bronchoscopy revealed a tumor at the tongue base and she was referred to our department. Transnasal fiberscopy demonstrated a pendulous mass at the tongue base (Figure 1). Computed tomography revealed a smooth-surfaced pendulous mass (2cm × 1.7cm) at the tongue base without deep invasion (Figure 2). Biopsy of the tumor gave a diagnosis of possible mucoepidermoid carcinoma. It was considered that this tumor might have caused her dysphagia and aspiration pneumonia. She had a history of diabetes mellitus, schizophrenia, femoral fracture, and dementia.
Surgery was considered to be high risk due to her age and co-morbidities. Because the patient and her family did not agree to resection of the tumor, she was followed up by her local physician. Aspiration pneumonia recurred several times over several months, after which she could not take anything orally and became bedridden for weeks. To improve her quality of life by minimally invasive surgery, the tumor of her tongue base was excised transorally under general anesthesia. Because the working space in the oral cavity and pharynx is limited, we resected the mass using laparoscopic instruments. The postoperative pathological diagnosis was pleomorphic adenoma and the surgical margins were negative (Figure 3). After surgery, she could eat without discomfort on swallowing or recurrence of aspiration pneumonia. The tumor has not recurred after follow-up for seven months (Figure 4).
Discussion
Pleomorphic adenoma was first described by Missen in 1874. About 80% of pleomorphic adenomas arise in the parotid gland, followed by 10% in the submandibular gland and 10% in the minor salivary glands [2]. Tumors of the minor salivary glands usually arise on the palate, upper lip, gums, cheek, floor of the mouth, pharynx, and trachea [1]. The most frequent site for pleomorphic adenomas of minor salivary glands is the palate (50%), followed by the upper lip [3]. In contrast, pleomorphic adenoma rarely arises from the tongue base and only 13 cases have been reported previously (Table 1) [2,4-15].
Surgery is the accepted treatment for pleomorphic adenoma and the tumor was subjected to surgical resection in all of the previous reported cases. Because of its anatomical features, approaching the tongue base for surgery raises several problems. In particular, the site is difficult to view by direct vision and the working space is narrow. The surgical approach depends on the size and location of the tumor, so the surgeon should plan treatment carefully. Various surgical approaches have been used, including the transoral, transhyoid, transpharyngeal, transmandibular, and combined transoral-transcervical approaches. We performed transoral excision to minimize surgical invasion, because the patient was elderly and had a history of schizophrenia and dementia, suggesting that brief hospitalization was required. The tumor was pedunculated and not deeply infiltrative, so we decided that transoral resection was reasonable. Because the working space in the oral cavity and pharynx is very narrow, laparoscopic instruments were used. However, the devices were actually too long for the transoral approach, so a new approach such as robot support is needed for resection of tongue base tumors [16].
For more Open Access Journals in Juniper Publishers please click on: https://juniperpublishers.business.site/
For more articles in Open Access Journal of Head Neck & Spine Surgery please click on: https://juniperpublishers.com/jhnss/
To know more about Open Access Journals Publishers
To read more…Fulltext please click on: https://juniperpublishers.com/jhnss/JHNSS.MS.ID.555614.php
Impact of Acute Kidney Injury on the Survival of Subjects Receiving Noninvasive Ventilation | Juniper Publishers
Juniper Publishers-Open Access Journal of Pulmonary & Respiratory Sciences
Authored by César Cinesi Gómez
Abstract
Objective: The main objective was to determine the presence of acute kidney injury (AKI) and the 90-day survival of subjects with acute respiratory failure (ARF) receiving noninvasive ventilation (NIV) in the Emergency Department (ED).
Method: We performed a prospective observational study. AKI was defined as a rise in the creatinine level such that the value measured in the ED was at least 1.5 times higher than the “basal value” (the most recent measurement within the previous 3 months). Subjects were contacted by telephone at hospital discharge and at 30, 60 and 90 days after the initiation of NIV.
Result: We analyzed 174 cases: 30 (17.3%) subjects with AKI and 144 (82.7%) subjects without AKI. Fifty-three percent of the subjects (16 subjects) with AKI died versus twenty percent (30 subjects) without AKI (RR 3.276; 95% CI: 1.74-6.16; P<.001). Cox regression analysis showed the following to be statistically significant: AKI (HR 2.808; 95% CI: 1.497-5.291; P=.001), mean blood pressure (HR 0.969; 95% CI: 0.926-0.994; P=.044) and age (HR 1.039; 95% CI: 1.007-1.71; P=.015).
Conclusion: The presence of AKI is an independent factor of mortality in subjects with ARF requiring NIV in the ED.
Abbreviations: ARF: Acute Respiratory Failure; ED: Emergency Department; IMV: Invasive Mechanical Ventilation; NIV: Noninvasive Ventilation; COPD: Chronic Obstructive Pulmonary Disease; APE: Acute Pulmonary Edema; AKI: Acute Kidney Injury; AKIN: Acute Kidney Injury Network; IPAP: Inspiratory Positive Airway Pressure
Introduction
The management of acute respiratory failure (ARF) in the Emergency Department (ED) is evolving from classical invasive mechanical ventilation (IMV) with endotracheal intubation to the “more recent” noninvasive ventilation (NIV) [1]. Almost from the beginning of the implementation of the latter technique, EDs were considered fundamental strategic areas, since early initiation of NIV reduces patient mortality [2]. Exacerbations of chronic obstructive pulmonary disease (COPD) and acute pulmonary edema (APE) are the two most frequent indications in the ED [1]. Factors associated with failure of NIV and lower patient survival include a low level of consciousness, high respiratory rate, pH values less than 7.25, high scores on severity scales (APACHE, SOFA) and hemodynamic instability [2,3-7]. When IMV is added to these prognostic factors, a worsening of kidney function is observed, which leads to higher mortality [8-10]. The evaluation of renal function has been consolidated in the concept of acute kidney injury (AKI), which is based on the RIFLE [11] and Acute Kidney Injury Network (AKIN) criteria [12]. These criteria present two fundamental points: the first is a dynamic study of renal function according to changes over time, and the second is a grading of AKI depending on the relative decline in renal function. Both scales adequately determine the prognosis of the patient [14-17]. However, studies on factors associated with NIV do not evaluate in depth the relationship between renal function and the survival of these subjects [2-5,14]. Therefore, the aim of the present study is to determine the survival of subjects receiving NIV in relation to the presence of AKI in the ED.
Methods

We performed a prospective observational study carried out in the ED of the Hospital General Universitario Reina Sofia of Murcia (Spain), which attends a population of 202,000 inhabitants, with 92,297 emergencies attended in 2014. The study began on November 10, 2012 and finished on June 28, 2014. Patient recruitment was dynamic and consecutively included all the subjects attended in the ED. The inclusion criteria were:
a. Aged above 18 years.
b. ARF defined by pO2/FiO2<300.
c. NIV during the ED visit.
d. A diagnosis in ED that were of APE or COPD exacerbation.
The diagnosis of APE was based on the clinical findings of the ED physician together with a chest radiograph compatible with APE. COPD exacerbation was defined as a worsening of the patient’s respiratory symptoms beyond normal day-to-day variation in subjects with known COPD.
A further inclusion criterion was the availability of a serum creatinine value obtained within the previous 3 months. The exclusion criteria were: requirement for a lifesaving or emergency intervention, need for IMV before beginning NIV, and ongoing hemodialysis. The study followed the prevailing laws and regulations and was approved by the Ethical Committee of Clinical Investigation of our hospital. All participants provided informed consent. Confidentiality of all personal data was managed according to the Spanish Organic Law 15/1999, of 13 December, on the protection of personal data. The main objective of the study was to determine mortality at 90 days after the initiation of NIV and the presence of AKI. We determined serum creatinine (mg/dl) levels at admission to the ED. To define the “basal value” we used the most recent serum creatinine measurement available for the patient, with a cutoff of three months. AKI was defined as an increase in the serum creatinine level measured in the ED to at least 1.5 times the “basal value”. Subjects who were discharged were telephoned at 30, 60 and 90 days after the initiation of NIV. The secondary objectives were the mortality rate during hospitalization, admission to the intensive care unit (ICU), mortality at 30 and 60 days after the initiation of NIV, and mortality according to diagnosis (APE, COPD exacerbation). Mortality was also analyzed according to the AKIN criteria [13]. NIV was the administration of continuous positive airway pressure (CPAP) or bilevel positive airway pressure (BiPAP) applied through an interface. All subjects were continuously monitored. The ventilators used for NIV were either the BiPAP model ST or the Trilogy 202 (Respironics; Murrysville, PA). The initial ventilator settings were as follows:
I. BiPAP Mode: Inspiratory Positive Airway Pressure (IPAP) between 10-16 cm H2O and an Expiratory Positive Airway Pressure (EPAP) of 4 cm H2O. Thereafter, the pressure support was increased in steps of 2 cm H2O to achieve a mean expiratory tidal volume of at least 5 ml/kg.
II. CPAP Mode: EPAP of 5 cm H2O, increasing the pressure to 10-15 cm H2O.
IBM SPSS Statistics v21 was used for the statistical analyses. Categorical variables were expressed as absolute values and percentages. Continuous values were expressed as mean, standard deviation and median. The type of distribution was determined using the Kolmogorov-Smirnov test. Differences between categorical variables were evaluated using the chi-square test, and continuous variables were analyzed with the Student’s t test if the distribution was normal or the Mann-Whitney U test otherwise. Relative risks were calculated with their 95% confidence intervals (95% CI). To determine associations between the continuous variables and the different groups, the ANOVA or Kruskal-Wallis test was used. To discriminate the confounding power of the variables, Cox regression analysis was used, performing univariate analysis of the statistically significant variables (regarding both AKI and mortality) and including the subjects with AKI. Kaplan-Meier analysis was performed for survival analysis, and the curves were compared using the log-rank test. A p value < 0.05 was considered significant. (A sketch of this type of analysis pipeline is given below.)
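For readers wishing to reproduce this kind of workflow, the following is a minimal sketch of the AKI flag plus Kaplan-Meier and Cox regression steps, assuming the pandas and lifelines Python packages; the tiny data frame is invented for illustration only and is not the study data, so its output is meaningless clinically.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def has_aki(creatinine_ed, creatinine_basal):
    """AKI per the study definition: ED creatinine >= 1.5 x the basal value."""
    return creatinine_ed >= 1.5 * creatinine_basal

# Invented toy cohort: follow-up days, death indicator, AKI flag, covariates.
df = pd.DataFrame({
    "days":  [90, 12, 90, 45, 90, 30, 60, 90],
    "death": [0, 1, 0, 1, 0, 1, 1, 0],
    "aki":   [0, 1, 0, 1, 0, 0, 1, 1],
    "age":   [71, 83, 78, 79, 74, 70, 77, 69],
    "mbp":   [102, 85, 98, 88, 100, 104, 90, 105],  # mean blood pressure (mmHg)
})

kmf = KaplanMeierFitter()
kmf.fit(df["days"], event_observed=df["death"])      # overall survival curve

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="death")  # HRs for aki, age, mbp
cph.print_summary()                                   # toy output only
```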
Results
Figure 1 abbreviations: ARF: Acute Respiratory Failure; NIV: Noninvasive Ventilation; COPD: Chronic Obstructive Pulmonary Disease; APE: Acute Pulmonary Edema; AKI: Acute Kidney Injury.
A total of 291 cases were included, of which 52 (17.8%) were excluded for lacking serum creatinine determinations in the previous 3 months, 61 (20.9%) presented a diagnosis other than COPD exacerbation or APE, 2 (0.68%) lacked data and 2 (0.68%) were under 18 years of age. Thus, 174 cases (59.7% of the subjects) were finally analyzed (Figure 1). One hundred forty-four subjects (82.7%) did not have acute kidney injury (no-AKI), while 30 subjects (17.2%) developed acute kidney injury (AKI). Table 1 shows the basal characteristics of the subjects, comparing those with AKI with those with no-AKI. Of the 174 subjects studied, 45 (25.9%) died within 90 days of ED discharge. Among subjects with AKI, the 90-day mortality was 53.3% (16 subjects) compared to 20.1% (29 subjects) in those with no-AKI (RR 3.276; 95% CI: 1.74-6.16; P<.001). The global in-hospital mortality was 14.3% (25 subjects): 30.0% with AKI vs 11.1% with no-AKI (RR 2.554; 95% CI: 1.32-4.92; P=.007). Tables 2 & 3 show the data related to mortality. Urea, serum creatinine, creatinine clearance and previous serum creatinine values showed no statistically significant relationship with 90-day mortality, with P values of .133, .269, .118 and .527, respectively. However, the difference in serum creatinine levels (1.14±0.89 vs. 0.28±0.51 mg/dl; P=.024) was found to be related to mortality at 90 days. Regarding in-hospital mortality, a statistically significant difference was found for serum creatinine (1.76±0.93 vs. 1.37±0.68 mg/dl; P=.014) and the difference in serum creatinine levels (0.57±0.91 vs. 0.21±0.35 mg/dl; P=.039).
* Continuous values are presented as mean ± SD (median).
† p-value for the contrast between both groups.
‡ Any type of domiciliary noninvasive ventilator.
§ Inspiratory oxygen fraction.
‖ Creatinine obtained from analyses in the last three months.
¶ The lowest line shows the percentages within the group.
AKI: Acute Kidney Injury.
NIV: Noninvasive Ventilation.
COPD: Chronic Obstructive Pulmonary Disease.
SOFA: Sequential Organ Failure Assessment.
* AKI: Acute Kidney Injury.
† p-value for the contrast between both groups.
‡ ICU: Intensive Care Unit
§ The number of events shown / the total number within the diagnosis.
‖ COPD: Chronic Obstructive Pulmonary Disease.
With regard to the remaining variables studied, none of the categorical variables showed a statistically significant relationship with 90-day or in-hospital mortality. In contrast, the following continuous variables were statistically significant: age (75.27±11.4 vs 80.40±10.4 years; P=.004), mean blood pressure (100.7±22 vs 91.3±21.4 mmHg; P=.010), proBNP (4627.0±6299.5 vs 7244.1±9264.3 pg/ml; P=.040), procalcitonin (0.48±4.4 vs 0.59±1.5 ng/dl; P=.001) and SOFA score (3.5±1.1 vs 4.6±1.7; P=.023). No relationship was observed between the presence of AKI and the probability of admission to the intensive care unit (ICU), with ICU admission in 8.3% of subjects with no-AKI versus 6.7% of subjects with AKI (RR 0.816; 95% CI: 0.21-3.07; P=.760) (Table 2). The Kaplan-Meier curves (Figure 2) showed greater mortality in subjects with AKI compared with no-AKI (P<.001). Cox regression analysis showed statistical significance for the presence of AKI (HR 2.808; 95% CI: 1.49-5.29; P=.001), mean blood pressure (HR 0.969; 95% CI: 0.926-0.994; P=.015) and age (HR 1.039; 95% CI: 1.007-1.071; P=.015) (Table 4).
*P < .001. AKI: Acute Kidney Injury.
*HR: Hazard Ratio.
CI: Confidence interval.
AKI: Acute Kidney Injury.
Discussion
This is the first prospective study to evaluate the association between worsening renal function and mortality in subjects with severe ARF. The severity of both hypoxemic and hypercapnic ARF is related to the need for NIV. The results of the present study indicate lower survival among subjects with worsening renal function compared to basal values, and the greater the renal failure, the worse the survival. The strength of our study lies in the use of the initial serum creatinine value compared with a reasonably recent basal determination. Therefore, the prognostic value of renal function may be determined at the time of initiating NIV. In the present study we demonstrate that the presence of renal failure triples the probability of death. The importance of this study is that it suggests that AKI has a previously unappreciated place among the prognostic factors of subjects receiving NIV.
The classical study on prognostic factors by Confalonieri et al. [8] did not include either the presence of renal failure or measurements of renal function such as serum creatinine or urea. Indeed, serum bicarbonate values reflecting acid-base equilibrium are often not included [1,3,7-9]. That study even proposed a prognostic scale including the APACHE score, with greater mortality observed for values >29 on the APACHE scale. In our trial, we used the SOFA scale to predict mortality. As expected from the reference literature, mortality was higher in subjects with a high SOFA score. However, in the multivariate analysis the SOFA scale dropped out of the model in the presence of AKI. Since the APACHE, SAPS II and SOFA scales include kidney function, it is likely that worsening kidney function is an independent and very important variable in the mortality of subjects undergoing NIV. A recent study by Pacilli et al. [3] reported 18.2% of moderate or severe renal failure in COPD subjects with hypercapnic ARF requiring NIV. This value is close to that observed in our study, in which the mean age of the subjects was also over 75 years.
However, although that study defined the success of NIV as discharge to a hospital ward from the respiratory ICU, the authors observed 28.6% vs. 14.9% of moderate or severe renal failure in subjects with technique failure. Although only borderline statistically significant (P=.069), this result is similar to ours. The present study has the advantage of being prospective and having renal function as its principal objective. However, on analyzing the relationship between serum creatinine levels and mortality, again no differences were found, except for in-hospital mortality. This corroborates the argument that acute worsening of renal function is a fundamental prognostic factor, stronger than a single point measurement of renal function [14,18-19]. Therefore, in cases in which basal serum creatinine levels are not available, the use of serum creatinine can help the emergency physician come to a decision. With respect to IMV, the study by Nim et al. [19] reported that subjects with an increase in serum creatinine levels above 0.3 mg/dl within 24 hours and basal serum creatinine levels ≤1.4 mg/dl carried an in-hospital mortality of 56%. The mortality in the group without an increase in basal serum creatinine was 36%. Although the mortality rates in our study were not as high, our results support those of this group, since the mortality rate tripled in our study (30% in the group with AKI and 11% in subjects without). The “low” mortality rate in our study is probably due to its having been performed in subjects with NIV, because IMV behaves as an independent factor of mortality in subjects with AKI [10]. This indicates that kidney function is a determinant factor in the prognosis of subjects with ARF undergoing mechanical ventilation, both invasive and noninvasive. Our trial was not designed to monitor creatinine levels after the initiation of NIV. Hence, the weight of NIV as a factor in the development of AKI is currently unknown, and further research is needed. As mentioned previously, bicarbonate levels are often not included as a prognostic factor of NIV, as they have low predictive power [5].
However, studies describing a relationship have observed that high levels of bicarbonate (greater than 25 mmol/L) carry a better prognosis [3,4]. We did not find a relationship between serum bicarbonate levels and mortality, but we did observe lower levels in the presence of AKI. Traditionally, pH has been regarded as a very important prognostic factor: the lower the pH, the worse the prognosis [6,7]. However, in our trial, as in others [1], no relationship between pH and prognosis was observed. Similarly, the same can be said for pCO2. The classical hypothesis was that a higher level of pCO2 led to a lower pH and, therefore, a worse prognosis [7-8]. Our study likewise did not confirm this. Hence, there must be another important factor that influences both the pH and the prognosis. One explanation for the behavior of pH, pCO2 and HCO3- may be that subjects with mainly hypercapnic ARF require an increase in serum bicarbonate to compensate for the associated acidosis. If these subjects present AKI, they are not able to increase bicarbonate levels and consequently their pH levels are lower. Since AKI carries a higher mortality, this would explain the association between higher mortality, low pH levels and not-high serum bicarbonate levels described in previous studies [1-6].
The main limitation of the present study is the modified use of the AKIN criteria. These criteria are based on changes in serum creatinine or diuresis once the patient has been admitted, whereas we evaluated the changes between serum creatinine levels at admission and within the previous 3 months. These “recent” serum creatinine levels were chosen with the aim of detecting subjects with AKI earlier, since strict use of the AKIN criteria entails a delay during which these time-dependent pathologies may worsen. The second limitation of the study is the 3-month cutoff for creatinine levels. This time point was selected with the aim of having relatively “recent” values for the study; nonetheless, in almost 20% of the cases previous serum creatinine levels were not available. The last limitation is the relatively small number of subjects with AKI, which is even more marked in subjects classified as AKIN 2 and AKIN 3. Therefore, further studies evaluating changes in kidney function in subjects undergoing NIV are essential. Strangely enough, subjects with AKI did not have a higher probability of ICU admission. Once more, kidney function is the forgotten factor in subjects undergoing NIV. This study is just the first step. It is clear that the way we deal with AKI subjects has to change; the best path is yet to be discovered. In conclusion, the presence of AKI measured according to the AKIN criteria is an independent factor of mortality in subjects with ARF requiring NIV in the ED.
Modelling and Simulation of AGVs Using Petri Nets- Juniper Publishers
Introduction
It is a technologically anticipated development that robotics will become widespread in our age and find application in almost all sectors, such as defense, transportation, health, service and manufacturing. This process is still expanding, and the role of robots in everyday life as well as in industrial applications is increasing. One of the important applications of robotic systems in manufacturing environments is automated guided vehicles (AGVs). AGVs are autonomous systems controlled by a central control unit; they operate autonomously without the need for an operator and are used for transporting materials from one point to another [1-3]. An AGV reduces the number of occupational accidents caused by humans because it performs all kinds of transportation operations without human interference in departments such as production, logistics, warehousing and distribution. These systems, which are used to transport all kinds of goods in industrial environments, are among the most suitable systems for reducing costs and increasing productivity. Thanks to high-level safety measures and sensors, AGVs can detect obstacles in their way, slow down and keep a safe distance, so they can work safely in the same environment as people. Because of all these features and their modularity, AGVs are frequently used in modern and flexible manufacturing systems today.
AGVs carrying products/parts/materials between workstations are controlled by their own (embedded) computers/processors and are connected to a central computer. The design and operation of AGVs, which are quite complex and expensive systems, are of great importance for achieving high performance in their use. AGVs, which are finding increasingly more application areas and are capable of carrying out tasks at different scales in various environments, are one of today’s major mobile robotic systems. For this reason, modeling and simulation of the behavior of AGVs in working environments is of great importance, and one of the most powerful tools for behavioral modeling of AGVs is the Petri net. A Petri net is a bipartite mathematical and graphical formalism whose main constructs are places, transitions and arcs. Generally, places represent resources, transitions represent transformations, and tokens represent the holding of conditions, but any concept can be assigned to these constructs depending on the modelling approach. Petri nets can be used to model discrete event dynamical systems (DEDS) [4-8], and an AGV embedded in a flexible manufacturing system is a good candidate for the Petri net method (a minimal sketch of the token-firing semantics is given after this paragraph). This paper summarizes two research studies on the modelling, simulation and analysis of AGV behaviors using Petri nets. One of them is a Petri net model for the intended behavior of a demonstrative AGV transferring packs between stations in a pre-defined manufacturing environment. The second is a Coloured Petri net model of a loop layout automated guided vehicle system embedded in an automated manufacturing system.
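To make the place/transition vocabulary concrete, here is a minimal Python sketch of the standard firing rule; it illustrates the formalism only and is not the Artifex or CPN Tools models used in the studies below. The AGV cycle encoded in it is invented for illustration.

```python
# A transition is enabled when every input place holds at least one token;
# firing removes one token from each input place and adds one to each output.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)     # place -> token count
        self.transitions = {}            # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy AGV cycle: idle -> loaded -> at target -> idle (pack delivered).
net = PetriNet({"agv_idle": 1, "pack_at_station": 1})
net.add_transition("pick", ["agv_idle", "pack_at_station"], ["agv_loaded"])
net.add_transition("travel", ["agv_loaded"], ["agv_at_target"])
net.add_transition("drop", ["agv_at_target"], ["agv_idle", "pack_delivered"])
for t in ["pick", "travel", "drop"]:
    net.fire(t)
print(net.marking)   # AGV back to idle, one pack delivered
```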
Petri Net Based Behavioural Simulation of a Pick Packing AGV
Petri nets are used in our behavioural modelling and simulation study of a demonstrative pick packing AGV as an implementation of a systematic behaviour-based design approach [9]. The AGV is required to transfer packs between four stations in a pre-defined environment. A Petri Net model for the intended behaviour, with three pre-specified missions, was developed based on a rough conceptual structuring of the AGV. Model development and simulation were done successfully using the Artifex modelling and simulation environment [10,11].
The general design requirement in the present case study is the conceptualization of an AGV model to transfer packs between a finite number of stations; the required system is therefore called a “pick packing AGV”. The operational behaviour of the pick packing AGV is based on a scenario described as follows: the AGV travels in a predefined physical environment between a “Start Point” and four stations, transferring packs between these stations. The AGV is expected to accomplish predefined missions without any external interference. A Petri Net model was used to represent and simulate the intended behaviour of the pick packing AGV. The model was developed using the ArtifexTM Graphical Modelling and Simulation Environment, which is a C-based development platform for discrete event simulation. Figure 1 shows the Top-Level Behavioural Model of the AGV in the ArtifexTM environment. The Top-Level Behavioural Model consists of six objects, namely units: the Power Source and Processor Units, Right and Left Wheel Units, Robot Arm Unit and Gripper Unit. All units are connected by interconnecting links that model the communication between these units. The Petri Net structure of each unit is embedded in the top-level model.
Modelling and Analysis of AGV for an Automated Manufacturing System using Coloured Petri Nets
A loop layout automated guided vehicle system embedded in an automated manufacturing system has been modelled and analyzed by applying the Coloured Petri net method. This study focused on developing an automated guided vehicle system serving a flexible manufacturing system comprising six workstations arranged in a loop layout, with AGVs serving the workstations by moving sequentially around the loop. The AGVS has been modelled and analyzed using the Petri net method, specifically the Coloured Petri net class implemented by CPN Tools. Modelling and analysis were extended in this study by developing a lab-scale prototype of an AGV, which was experimentally tested and verified in an automated manufacturing system; the simulation experiment results were validated with this prototype. The results show that an increase in the number of AGVs in an automated manufacturing system increases system throughput, whereas an increase in AGV speed, for a fixed number of AGVs in the system, causes a decrease in throughput. The approach developed in this study can be applied to different system configurations to determine system performance (a toy coloured-token simulation in the same spirit is sketched below).
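The sketch below conveys only the “coloured” idea, namely tokens that carry data, in plain Python rather than CPN Tools; the station names, probabilities and step count are invented, so no throughput conclusions should be drawn from it.

```python
import random
from collections import deque

stations = ["WS1", "WS2", "WS3", "WS4", "WS5", "WS6"]   # loop layout
# Each AGV is a "coloured token": it carries its id, position and load data.
agvs = deque([{"id": i, "at": 0, "load": None} for i in range(2)])

delivered = 0
for step in range(1000):                 # coarse discrete-event loop
    agv = agvs.popleft()
    agv["at"] = (agv["at"] + 1) % len(stations)     # advance around the loop
    if agv["load"] is None and random.random() < 0.5:
        agv["load"] = f"job@{stations[agv['at']]}"  # pick up a job (token data)
    elif agv["load"] is not None and random.random() < 0.5:
        agv["load"] = None                # unload at this station
        delivered += 1
    agvs.append(agv)
print(f"deliveries over the run: {delivered}")
```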
Conclusion
As AGV systems find more application areas in industry, their modelling and simulation become more significant for industrial system design. Petri net tools are practical and highly useful for studying and developing model designs before expensive implementation; they are highly versatile at moderate sophistication and give reasonably reliable output. Physical prototyping after modelling and simulation is highly useful for locating probable problems within the system before full-scale testing. The approach is also highly efficient for educational purposes at graduate level.
Synergy of IoT and AI in Modern Society: The Robotics and Automation Case- Juniper Publishers
Abstract
The Internet of Things (IoT) is a recent revolution of the Internet which is increasingly adopted with great success in the business, industry, healthcare, economic, and other sectors of the modern information society. In particular, IoT supported by artificial intelligence (AI) considerably enhances success in a large repertory of everyday applications, with dominant ones being enterprise, transportation, robotics, industrial, and automation systems applications. Our aim in this article is to provide a global discussion of the main issues concerning the synergy of IoT and AI, including currently running and potential applications of great value for society. Starting with an overview of the IoT and AI fields, the article describes what is meant by the concept of ‘IoT-AI synergy’, illustrates the factors that drive the development of ‘IoT enabled by AI’, and summarizes the concepts of ‘Industrial IoT’ (IIoT), ‘Internet of Robotic Things’ (IoRT), and ‘Industrial Automation IoT’ (IAIoT). Then, a number of case studies are outlined, and, finally, some IoT/AI-aided robotics and industrial automation applications are presented.
Keywords: Artificial intelligence (AI); Internet of things (IoT); Machine learning (ML); Cognitive IoT; Internet of robotic things (IoRT); IoT- aided robotics; Industrial IoT (IIoT); IoT- aided industrial automation; IoT-aided manufacturing
Introduction
As the IBM Institute for Business Value has pointed out, the full potential of the Internet of Things (IoT) can only be realized with the introduction of Artificial Intelligence (AI). Actually, IoT and AI are umbrella terms. The IoT can be described as things/objects in our environment being connected to provide seamless communication and contextual services. IoT involves a tremendous number of connections of things to things and things to humans, and therefore it is more complex and dynamic than the Internet. According to IDC’s Worldwide IoT taxonomy (2015), the IoT market place is estimated to be worth 1.7 trillion US Dollars, with the biggest portion (35%) being hardware followed by services (27%), connectivity (22%), and software (16%).
As originally described by Minsky and McCarthy (the fathers of AI), AI is any task carried out by a program or machine such that, if a human performed the same task, we would say the human had to apply intelligence to accomplish it. Today AI is used ubiquitously in a large variety of applications of the modern information society. Scientifically, AI is distinguished into:
A. Narrow AI, which involves all intelligent systems that can carry out specific tasks without being explicitly programmed how to do so, and
B. General AI which is a flexible form of intelligence that can learn how to perform a variety of different tasks.
Looking at IoT and AI one can easily see what both have in common, viz. data enhanced to information, to knowledge, to intelligence, and finally to decisions for specific purposes across a variety of everyday, enterprise, and industry/automation situations. With the AI synergy, IoT becomes smarter. Today the number of companies that embed AI (e.g., machine learning, intelligent reasoning) into their IoT endeavors is rapidly increasing. These companies see their capabilities grow and their operational efficiency improve, including a big reduction in unplanned down time. This indicates that companies that develop an IoT strategy, evaluate a potential new IoT-based activity, or seek to obtain more value from an existing IoT application will gain many benefits from the incorporation of AI methods and tools in their IoT endeavors.
The purpose of this article is to provide a global conceptual overview of the synergy of AI and IoT with emphasis on its application in robotics and automation. Specifically, the article:
a) Discusses the ontological questions ‘what is IoT’ and ‘what is AI’.
b) Presents fundamental issues about the ‘synergy of IoT and AI’ or ‘IoT enabled by AI’.
c) Outlines the concepts of ‘Industrial Internet of Things’ (IIoT), ‘Internet of Robotic Things’ (IoRT), and ‘Industrial Automation Internet of Things’ (IAIoT).
d) Outlines a number of case studies (home automation, oil-field production, smart robotics, smart manufacturing, and smart factory).
e) Summarizes the field of IoT-aided robotics.
f) Discusses an application of IoT-aided industrial automation.
What is IOT?
The term “Internet of Things” (IoT) is now widely used, but so far there is no unique common definition or understanding of what this term encompasses. The term Internet of Things was first used in 1999 by Kevin Ashton, director of the Auto-ID Center (MIT), working on networked “radio-frequency identification” (RFID) infrastructures [1-5]. He coined the term to reflect his envisioning of a world in which all electronic devices are networked and every object (physical or electronic) is tagged with information pertinent to that object [2]. The Internet of Things, which is sometimes referred to as the Internet of Objects (IoO), is actually a new enhancement of the Internet in which things/objects make themselves recognizable by communicating information about themselves. They can access information about themselves accumulated by other objects and things, or they can be elements of high-level services.
From the many alternative definitions of the term IoT, we select here the definition given by the IEEE (IoT Initiative, 2015) which is divided in two parts [6]:
a. Part 1: Definition for a small environment scenario: “An IoT is a network that connects uniquely identifiable ‘Things’ to the Internet. The ‘Things’ have sensing/actuator and potential programmability capabilities. Through the exploitation of unique identification and sensing, information about the ‘Thing’ can be collected and the state of the ‘Thing’ can be changed from anywhere anytime, by anything” [6].
b. Part 2: Definition for a large environment scenario (where a large number of ‘Things’ can be interconnected to provide complex services and enable the execution of complex processes): “Internet of Things envisions a self-configuring, adaptive, complex network that interconnects ‘things’ to the Internet through the use of standard communication protocols. The interconnected things have physical or virtual representation in the digital world, sensing/actuator capability, a programmability feature, and are uniquely identified. The representation contains information including the thing’s identity, status, location or any other business, social or privately relevant information. The things offer services, with or without human intervention, through the exploitation of a unique identification, data capture and communication, and actuation capability. The service is exploited through the use of intelligent interfaces and is made available anywhere, anytime, and for everything, taking security into consideration” [6].
Actually, IoT is distinguished in three interaction categories [1]:
I. People to people IoT.
II. People to things (objects, machines) IoT.
III. Things/machines to things/machines IoT.
‘Things’ refer in general to everyday objects that are readable, recognizable, locatable, and addressable via information sensing devices, and/or controllable via the Internet, irrespective of the communication means employed (RFID, wireless LAN, WAN, etc.). IoT is interdisciplinary and, according to Atzori et al. [2], falls into the following three paradigms:
i. Internet-oriented (middleware).
ii. Things-oriented (sensors).
iii. Semantic-oriented (knowledge).
It is remarked that IoT is particularly important and useful in application domains that belong to all the above paradigms. Actually, IoT is a new development of the Internet which aims at enabling ‘Things’ to be connected anytime (any context), at any place (anywhere), with anything (any device) and anyone (anybody), using any path or network and any service or business (Figure 1) [1].
Figure 2 gives a schematic representation of IoT connectedness through gateway and cloud.
Because of its characteristics, IoT is very rapidly penetrating almost all areas of our lives. The fundamental characteristics of IoT are [1,7]:
a. Connectivity: Connectivity makes possible network accessibility and compatibility. Anything can be interconnected with the overall IoT communication and information facilities. Compatibility means that ‘things’ have the common ability to generate and consume data.
b. Heterogeneity: IoT components and devices are heterogeneous since they are based on different platforms and networks. They can communicate and interact with other devices and service platforms via a variety of networks.
c. Tremendous scale: The number of IoT things and devices that communicate and interact with each other, and have to be managed, is at least one order of magnitude bigger than that of the present Internet.
d. Dynamic changes: The state and number of components and devices of IoT change dynamically (e.g., alternating connection and disconnection, changing position and speed, etc.).
e. Safety: IoT should be designed for the safety of personal data, physical safety, and human well-being. Securing the end points, the networks, and the data travelling through them implies creating a security paradigm that scales.
f. Small devices: Devices are becoming smaller, cheaper, and more powerful over time. IoT uses small devices built for several tasks and purposes to achieve its accuracy, scalability and versatility.
g. Autonomous agency: IoT gives an environment for augmented human agency, sometimes reaching the point of spontaneous unexpected interventions that are not directly caused by human beings.
h. Pervasiveness/ubiquity: IoT embeds computational capability into everyday objects and makes them communicate effectively and perform desired tasks in a way that minimizes the human need to interact with computers as computers. IoT devices are network-connected and always available. IoT makes computing truly ubiquitous and opens new horizons for the society, the economy, and the individual.
i. Ontological vagueness: Human beings, physical objects, and artifacts may not be clearly distinguished due to the deliberate transformation of entities of one type into entities of another type via tagging, engineering, and absorption into a network of artifacts. Criteria to deal with ambiguous identity and system boundaries should be developed and used.
j. Distributed control: In IoT, control is not centrally exerted; because of the enormous number of nodes, it has a distributed form and exhibits emergent features and behaviors which require proper distributed control.
k. Expressing: This feature enables interactivity with people and the physical world. In all cases, ‘expressing’ helps us to create products/things that interact intelligently with the real world and the environment.
The big challenge of IoT is the security issue that involves the protection of access to equipment (e.g., internet connected home or connected car, etc.) and the protection of customer and company data. It is noted that with customer and company data, a different kind of security is needed. Other security challenges of IoT are depicted in Figure 3. An aspect of IoT that should not be ignored is the fact that IoT devices, data, and networks need to be monitored in real time, otherwise we may not have success with IoT.
What is AI?
The artificial intelligence (AI) field is concerned with intelligent machines, or rather with embedding intelligence in computers; i.e., “artificial intelligence is the science and engineering of making intelligent machines” [8]. Today, AI has become an important element of the computer industry, helping to solve extremely difficult problems of society. AI includes expert systems (ES), which are computer programs that simulate the reasoning and performance of human experts. Alternatively, one can say that an ES is a computer application which solves complex problems that would otherwise require extensive human expertise. To this end, it simulates the human reasoning process by using specific rules or objects representing the human expertise.
Some of the problems that fall within the framework of AI are [8-14]:
a. Game playing.
b. Theorem proving.
c. General problem solving.
d. Natural language understanding.
e. Machine learning.
f. Pattern recognition.
g. Perception and cognition.
h. Symbolic mathematics.
i. Medical diagnostics.
j. Fault diagnosis/restoration of technological systems.
k. AI-based/Expert control.
A map that shows ‘what is AI’ is given in Figure 4. AI builds on mathematics, philosophy, cognitive psychology, and biology. Its methods are distinguished into knowledge-based methods, behavioral methods, and subsymbolic methods, and it has both a scientific and a technological content.
Figure 5 shows the constituents of AI (of course non-exhaustively). The robotics part that really belongs to AI includes all intelligent algorithms that perform path/task planning, local/global navigation, and intelligent/knowledge-based control.
The AI process that is mostly used in IoT is machine learning. It is difficult to define machine learning uniquely, since it ranges from the addition of a single fact or piece of new knowledge to a complex control strategy, or a proper rearrangement of system structure, and so on. A useful class of machine learning is automated learning, which is the capability of an intelligent system to enhance its performance through learning, i.e., by using its previous experience. In other words, intelligent machines can learn to operate and improve by observing, classifying, and correcting their errors just as humans do. Five basic automated learning paradigms are:
A. Concept learning.
B. Inductive learning (learning by examples).
C. Learning by discovery.
D. Connectionist/neural network learning.
E. Learning by analogy.
Below we list three working machine learning systems:
a. IBM Watson: A question-answering software system that answers questions using machine learning.
b. Google cars create models of people on the road using machine learning.
c. Amazon’s “Featured Recommendations” uses machine learning together with prior browsing history.
Full descriptions of AI paradigms and constituents can be found in Artificial Intelligence books [8-15].
In our days, the ability of AI in smart machines is progressing from handling classical repetitive tasks to adaptively carrying out continuously changing tasks. In other words, AI application evolves along three stages, namely:
i. Stage 1: Assisted Intelligence (tasks don’t change, machines learn, tasks are automated).
ii. Stage 2: Augmented Intelligence (changing nature of tasks, humans inform machines, machines inform humans).
iii. Stage 3: Autonomous Intelligence (changing nature of tasks, decisions are automated, machines learn continuously).
Assisted intelligence allows automating repetitive and routine manual and cognitive tasks. Augmented intelligence helps to handle more complex situations and enhance human decision making. Finally, when machines are able to learn enough about the situation and make reliable decisions that humans can trust, they can become autonomous (autonomous intelligence).
Today there are numerous AI tools that can be used in research and applications. These tools can be classified as:
a. AI tools for personal use.
b. AI tools for business use.
c. AI tools for industry specific business.
A list of AI tools for each of the above categories is provided in [16].
Synergy of IoT and AI
Both AI and IoT are now at very mature states and their synergy promises many benefits. IoT, which many industry thinkers consider to be the driver of the Fourth Industrial Revolution, has inspired a variety of technological advances and changes covering a wide range of fields. Many thinkers believe that IoT really needs AI, and in fact that the future of IoT is AI [17]. They anticipate that in the near future most IoT implementations will make visible use of AI techniques and tools (particularly machine learning and reasoning algorithms and software tools). Actually, IoT and AI have been working together in many business and other areas for quite some time. IoT collects data (actually, huge amounts of data) and AI is the proper tool for making sense of huge amounts of data. AI is the engine that performs ‘analysis’, processes the data, and ‘makes decisions’ based on this data. AI enables ‘understanding patterns’ and therefore helps to make more informed decisions. The use of machine learning, along with big data, has opened new opportunities in IoT. One can already see the synergy of these systems at a personal level in devices such as Google Home and Amazon’s Alexa [17]. Collecting data is one thing, but sorting, analyzing, and making decisions on the basis of that data is entirely another. Clearly, to be more useful in IoT, AI should develop more accurate and faster algorithms and tools (Figure 6).
IoT supported by AI can provide the best way for enterprise stores to gain more from their store operations and assure their sustainability in the long run. Using IoT/AI, retailers can, among other things, minimize theft and maximize purchases through cross-selling.
The operations required in AI/IoT data analysis are the following [17] (a streaming-data sketch is given after this list):
a. Preparation of data (define and clean pools of data).
b. Discovery of data (find useful data in the defined pools of data).
c. Visualization of streaming data (deal with streaming data on the fly, discovering and visualizing data in smart and fast ways so as to assure rapid decision making without delay).
d. Time series accuracy of data (keep the confidence level in the collected data high, with high accuracy and integrity of data).
e. Predictive/advance analysis of data (make predictive decisions on the basis of collected data).
f. Real-time geospatial and location (logistical) data (maintain a smooth and controlled flow of data).
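As an illustration of the streaming and predictive-analysis steps above, here is a hedged sketch of a rolling z-score anomaly detector for a single IoT sensor feed; the window size, threshold and readings are invented for demonstration.

```python
from collections import deque
import math

class StreamingDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)   # recent readings only
        self.threshold = threshold

    def update(self, x):
        """Return True if x is anomalous with respect to the recent window."""
        if len(self.window) >= 10:           # need some history first
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            is_anomaly = abs(x - mean) / std > self.threshold
        else:
            is_anomaly = False
        self.window.append(x)
        return is_anomaly

det = StreamingDetector()
readings = [20.1, 20.3, 19.9] * 20 + [35.0]   # hypothetical temperature feed
alerts = [i for i, r in enumerate(readings) if det.update(r)]
print(alerts)   # only the spike at the end is flagged
```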
A discussion of the innovation potentials and pathways merging AI, cyber-physical systems (CPS), and IoT is provided in [18], where a technology forecast is given based on extensive descriptions and developments by field, and also on interaction traits. According to Sudha Jamthe, the junction of IoT and AI constitutes the so-called ‘cognitive IoT’ [19]. In [20] a number of examples are provided that show how AI and IoT can work together. One of them refers to the air conditioning equipment of buildings and examines what happens on a very hot day on which the local utility is experiencing brownouts. In this case the system could be overloaded, and the utility staff would need to spend time and money dealing with angry customers asking for restoration of the service. If the buildings’ thermostats and the utility are connected to an IoT system, the utility staff can see how many air conditioning devices are connected to the system and react by turning everyone’s thermostat up 3 degrees, thus preventing a brownout. A built-in AI system could do the same job automatically, whereas a more sophisticated AI system could proactively turn thermostats up 3 degrees at homes and nonessential businesses, while keeping thermostats stable in hospitals and refrigerated warehouses (a sketch of such a policy follows).
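The following sketch encodes the proactive policy just described; the 3-degree offset and the protected building categories come from the example in [20], while the function names, data shape and load threshold are invented for illustration.

```python
CRITICAL = {"hospital", "refrigerated_warehouse"}   # never adjusted

def adjust_setpoints(buildings, grid_load, capacity):
    """Raise cooling setpoints by 3 degrees at non-critical sites when the
    grid approaches a brownout; keep critical sites untouched."""
    if grid_load / capacity < 0.95:
        return {b["id"]: b["setpoint"] for b in buildings}   # no action needed
    return {
        b["id"]: b["setpoint"] + (0 if b["type"] in CRITICAL else 3)
        for b in buildings
    }

buildings = [
    {"id": "home-17", "type": "residence", "setpoint": 22},
    {"id": "mercy-1", "type": "hospital", "setpoint": 21},
]
print(adjust_setpoints(buildings, grid_load=98.0, capacity=100.0))
# {'home-17': 25, 'mercy-1': 21}
```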
In [21] the dynamics between AI and IoT is examined. It is argued that AI/machine learning for data science is much more than applying statistical predictive algorithms to IoT data. Therefore, it is proposed that there is a need for a new type of engineer, viz. an engineer with knowledge of electronics (IoT), AI/machine learning, robotics, cloud, and data management. It is also argued that data science for IoT is different from traditional data science: data science for IoT involves working with time series methods, such as autoregressive moving average (ARMA) methods and the like (a toy example is given below). In [22], it is explained why IoT, Big Data, and AI are three essential technologies whose synergy will drive the next generation of applications. It is argued that big data fueled by IoT is powerful on its own, and so is AI, but together they are the superpowers of the digital universe. Thinkers in the information field anticipate that the size of the digital universe will double every two years, leading to a 50-fold growth from 2010 to 2050. For meaningful results, AI needs Big Data; actually, AI can resolve the Big Data analytics issue.
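As a toy example of the time-series methods mentioned above, the sketch below fits an ARMA(1,1) model to a synthetic sensor signal, assuming the numpy and statsmodels packages; the signal and model order are arbitrary choices for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# AR(1)-like synthetic sensor signal standing in for an IoT data stream.
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

model = ARIMA(y, order=(1, 0, 1)).fit()   # ARMA(1,1) via ARIMA(p, 0, q)
forecast = model.forecast(steps=5)        # predict the next 5 readings
print(forecast)
```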
IoT data involve the following [17]:
a. Smart city data providing information that helps to predict accidents and crimes.
b. Data helping to optimize productivity across industries via preventive maintenance of devices and machines.
c. Data used in communication of automated driving vehicles.
d. Data creating truly smart homes with connected appliances.
e. Health related data giving doctors real-time insight information from biochips and pacemakers.
Humans are not able to understand and handle large amounts of data of the above types with standard methods. They need to develop new ways to analyze the performance data and information created by huge numbers of smart devices/objects. To get the full benefit of IoT data, the speed and accuracy of big data analysis should be considerably improved (Figure 2). Moreover, the continuous advances of AI cause AI to converge with IoT, to the extent that it is quickly becoming indispensable to IoT solutions. The principal elements of IoT, viz. connectivity, sensor data, and robotics, will ultimately lead to a need for almost all devices to become intelligent. In other words, IoT needs smart devices and machines. As the convergence of AI with IoT continues, the ongoing growth of IoT is being driven by six factors, of which the most powerful is the advent of big data and cloud/fog computing [23] (Figure 7).
The challenges facing AI in IoT include:
A. Privacy/Security/Safety
B. Complexity
C. Compatibility
D. Ethical issues
The IoT applications can be classified in several ways. One of them is the following [24]:
a) Personal and home (the sensor information is collected and used by the individuals who directly own the network).
b) Enterprise (this category includes IoT within work environments, namely offices, companies, organizations, etc.)
c) Utilities (this category includes systems that offer service optimization, such as water network monitoring and smart grid metering).
d) Mobile (this category includes smart urban traffic, smart transportation, smart logistics, etc.).
Another classification of IoT application domains is the following, the contents of which are shown in Figure 8:
i. Home and buildings
ii. Transportation
iii. Health
iv. Logistics
v. Precision agriculture
vi. Smart industry
vii. Smart retail
viii. Smart environment
IoT-AI provides many benefits. For example, a smart hotel using AI-based IoT provides its customers with the following:
a. Smart booking system.
b. Flexibility in room temperature control.
c. Helpful information selection based on customers.
d. Customer history re-synchronization for returning guests.
e. Real-time support to customers on online platform to face their problems.
The benefits of AI-IoT in retail operations are discussed in [25], where the applications of IoT for retailers are outlined, and a number of major companies that offer data-driven personalization and customer service adopting AI and IoT are listed. In summary, the benefits of AI-IoT applications in brick-and-mortar store environments include the following [25]:
1. IoT makes operations more efficient. This is achieved because of the ability of connected devices to track inventory levels in real time.
2. IoT helps retailers to improve the customer store journey by increasing engagement via devices such as smart mirrors.
3. IoT improves efficiency in retailer/supplier relationship. This is facilitated by the partnership of retailers with suppliers who are able to respond promptly and efficiently to frequent orders driven by the retailer’s real-time inventory tracking system.
4. AI enables retailers to provide a personalized and straightforward shopping experience and to scale up the use of customer data. This includes customization of shopping recommendations, and e-commerce and m-commerce portal layout and promotion.
5. AI helps retailers to drive sales and forecast demand. With AI, retailers can maximize the probability of having the right goods in stock, which assures faster fulfillment and leaner inventory operations.
6. AI helps retailers to analyze customer data so as to get a better understanding of customer/consumer behavior, in order to adapt the approach through which the enterprise interacts with shoppers and predicts consumer demand.
7. AI enables retailers to operate ‘chatbots’ that imitate the customer’s interaction with a customer care or sales associate, in order to understand the best way of responding to the customer’s need.
8. AI enables computers to observe, exploit and strategize data, and implement strategy.
Figure 9 shows the three basic stages of forecasting customer demand using AI [25]. From the above it follows that, to get maximum benefit from IoT-AI, retailers should do the following:
A. Act quickly to adapt to competitors that adopt new technologies.
B. Handle data with care and set up proper strategies for handling consumer data.
C. Review relationship with suppliers and redefine the relationship with other supply-chain players.
Big retailers that run IoT/AI-based systems include Amazon Go, Walmart, Carrefour, Catalyst, Smartrac, Rebecca Minkoff Connected Store, and Panasonic (Coresight Research, 2018). The ten top industries that have adopted AI-IoT are:
a. Smart manufacturing
b. Smart retail
c. Smart automobile
d. Smart health
e. Smart transportation
f. Smart education
g. Smart finance
h. Smart entertainment
i. Smart home
j. Smart security/surveillance
Industrial Internet of Things
The incorporation of robotic issues into the wider IoT was called the “Internet of Robotic Things” (IoRT) by ABI Research. IoRT is concerned with machine-to-machine (M2M) communication between robots and devices in an ecosystem where data are employed to drive insights and actionable outcomes. The robot is an intelligent device in the sense that it can monitor events and fuse data from several sources in order to determine and execute the best course of action, e.g., a move through the physical environment and manipulation of objects in this environment in a desired way. Potential applications of IoRT include:
a. Use a robotic device to check whether a car is allowed to use a given parking lot in a corporate parking area.
b. Collaboration of IoRT and humans in a manufacturing unit to make operational and other decisions.
c. Use the concept of IoRT to add more flexibility and adaptability to intelligent transportation systems (ITS).
d. Use of IoRT for elderly assistance and domestic cleaning.
One of the major application areas of IoT is so-called “smart industrial automation”. With the aid of IoT infrastructure, advanced sensor networks, wireless connectivity, and M2M communication, conventional industrial automation is being completely modernized. Most industries (small and large) have already adopted and are using IoT enhancements. IoT-based industrial automation represents the present state of automation, called “industrial automation 4.0” or the “Industrial Automation Internet of Things” (IAIoT). An umbrella term that covers both IoRT and IAIoT is the “Industrial Internet of Things” (IIoT). IIoT also embraces industrial control systems and manufacturing systems. IIoT involves smart connected assets (machines, engines, robots, actuators, power grids, sensor clouds, etc.) that operate as part of a larger system, or system of systems, that comprises the smart manufacturing system. The connected assets can monitor, collect, analyze, exchange, and instantly act on data/information to automatically and intelligently change their performance or their environment. An analysis framework for IIoT devices is provided in [26], which gives a practical classification scheme with reference to IIoT security aspects. IIoT offers a reduced cost structure and increased operational efficiency, accompanied by higher quality of products (fewer failures, more efficient materials sourcing, etc.).
A pictorial illustration of the components that are included in Industry 4.0 and create the so-called “smart factory” is given in Figure 10. The dominant components are IoT/IoRT, cyber-physical systems, and cloud computing. A good interface that can assist engineers is the chatbot, which is easy to use, provides real-time interaction with IoT and robots, has a question-answer structure, and is a perfect interface for AI. A typical IoRT-based robotic manufacturing shop floor is shown in Figure 11 [27].
A discussion of the challenges and technical solutions concerning IoT for industrial automation is provided in [26], including the identification of challenges for long-living IoT-aided industrial automation with enormous complexity. Some of the IIoT challenges considered in [28] are the following:
a. Latency and scalability of data (this issue can be faced through localization of computation).
b. Mixed criticality (this challenge can be managed through system partitioning).
c. Scalable and secure real-time collaboration (this can be achieved through the so-called ‘zero-configuration networking’ method).
d. Fault tolerance (this issue can be managed through networking redundancy or local fault detection near the end devices).
e. Functional safety (This can be addressed by separating the safety related issues from IoT).
Case Studies
In the following, a number of case studies and application examples are listed which give a good picture of the range of IoT/IIoT applications, especially in automation and robotics.
Home automation
An IoT-based monitoring and control system for home automation is described in [29]. This embedded system uses a PIC microcontroller and provides intelligent energy conservation. It can control and automate most home appliances (such as switching lights and fans on/off) through a manageable smartphone-based Android interface. The components are connected to the embedded micro web server through a LAN or WiFi module for accessing, monitoring, and controlling devices and appliances using Android-based smartphone applications. The system also keeps track of the status of the devices (a sketch of such an endpoint is given below).
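To give a flavor of the micro-web-server interface such a system exposes, here is a hypothetical sketch using the Flask framework; the device names, routes and port are assumptions for illustration, not details of the system in [29].

```python
from flask import Flask, jsonify

app = Flask(__name__)
devices = {"light": False, "fan": False}    # appliance -> on/off state

@app.route("/status")
def status():
    return jsonify(devices)                 # phone app polls device states

@app.route("/toggle/<name>", methods=["POST"])
def toggle(name):
    if name not in devices:
        return jsonify(error="unknown device"), 404
    devices[name] = not devices[name]       # flip relay state (stub)
    return jsonify({name: devices[name]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)      # reachable over the home LAN/WiFi
```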
Oil field production
An oil and gas company uses IoT to optimize oilfield production. To this end the company uses sensors to measure oil extraction rates, temperature, well pressure, and other variables for 21,000 wells, with readings taken 90 times per day per variable; about 18,900 data points are collected per day. To convert raw IoT data into business data and tangible benefits, the company employs analytics to realize both the direct and the opportunity costs associated with the analysis of IoT data. The synergy of IoT and industrial analytics resulted in persistent, significant advancements [30]. Two other AI-IoT case studies presented in [30] are the following:
a. A smart municipal water metering system covering all residential and commercial water meters; meters were mounted on 66,000 devices that used to be read and recorded manually.
b. An international truck manufacturer outfitted more than 100,000 trucks with sensors for predictive maintenance. The system schedules repairs automatically when needed and orders the required parts for each repair. More than 10,000 data points are transmitted per day for each truck.
ABB Smart robotics
This multinational power and robotics company adopted IIoT to develop an efficient predictive maintenance system. A large number of connected sensors monitor the maintenance requirements of its robots (across five continents) and trigger repairs before parts break. The company’s collaborative robotics is also based on IoT: its YuMi model can collaborate with humans through Ethernet and industrial protocols (Profibus, DeviceNet, etc.) [31].
Boeing smart manufacturing
This multinational aviation company has strongly deployed IIoT technology to drive efficiency in all of its factories and supply chains, and is continually increasing the number of sensors embedded in its planes. Currently, Boeing is working towards making service offerings a core part of its business, while staying at the top of information providers in aviation [31].
KUKA connected robotics
This company has an IoT policy which extends to entire factories. For instance, as mentioned in [31], Jeep asked KUKA to help build a factory that could produce a car body every 77 seconds. KUKA responded by helping the company to build an IoT-based factory with hundreds of robots connected to a private cloud. In this way more than 800 vehicles can be produced each day.
Fanuc: Smart factory down time minimization
Fanuc, a robot maker, has put much effort into reducing down time in industrial facilities. The company uses sensors within its robotics in tandem with cloud-based analytics. In this way the company is able to predict when the failure of a component, such as a robotic system or piece of process equipment, is about to occur. The outcome of this effort is the so-called “Fanuc Zero Down Time system” [31]. In [30] a total of 30 top real IIoT applications are described. Three of them, besides those described above, are the following:
a. Magna Steyr: Smart automotive manufacturing.
b. Komatsu: Innovation in smart mining and heavy equipment.
c. Shell: Smart oil field Innovator.
IoT-aided robotic applications
The range of applications of IoT-aided robotic systems is very wide, and includes robots used in the manufacturing/automobile industry, health care, military, deep underwater exploration, space exploration, rescue, and security operations. IIoT helps to solve a large variety of industrial problems, from temperature/pressure monitoring, to power consumption monitoring, to electrical grid monitoring, and so on. IoT applications include detection of perimeter intrusions in airports, railway stations, and ship ports. IoT paired with AI (perception, natural language understanding) enables efficient human-robot interaction. Cloud robotics plays a key role in enabling robot functions, e.g., mobility, sensing, manipulation, etc. IoT-based robotic systems also find application in short range communication technology, protocol design, and security assurance in smart pervasive environments. An example of a cloud robot is a driverless (autonomous) car connected to the Internet to access databases of maps and satellite imagery. Using sensor fusion to exploit streaming data from its camera and the global positioning system (GPS), together with 3D sensors, a driverless car can localize its position accurately (within centimeters); a minimal fusion sketch is given below. Figure 12 shows the capabilities of a driverless car that are achieved through proper sensors. This car is also connected to an IoT platform.
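As an illustration of the sensor-fusion principle only (not any particular vehicle’s stack), the sketch below fuses an odometry motion estimate with a noisy GPS-like position fix using a one-dimensional Kalman filter; all noise values and measurements are invented.

```python
def kalman_step(x, P, u, z, q=0.1, r=4.0):
    """One predict/update cycle.
    x, P : position estimate and its variance
    u    : odometry displacement since the last step (motion model input)
    z    : GPS-like position measurement with variance r
    """
    # predict with the motion model (process noise q)
    x_pred = x + u
    P_pred = P + q
    # update with the measurement
    K = P_pred / (P_pred + r)        # Kalman gain: trust in the measurement
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for u, z in [(1.0, 1.3), (1.0, 1.9), (1.0, 3.4)]:
    x, P = kalman_step(x, P, u, z)
    print(f"fused position: {x:.2f} (variance {P:.2f})")
```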
Figure 13 depicts a typical IoT/AI-aided truck with its sensors. The transportation benefits obtained if the vehicles are connected to the IoT and travel on smart roadways are the following:
a. Transportation efficiency (real-time traffic, transit, and parking data are generated for maximum efficiency and minimum congestion).
b. Low operating costs (preventive maintenance driven by operating data and diagnostics improves warranty and service costs).
c. Improved safety (connected vehicles ‘talking to each other’ enable cooperation and assure crash avoidance and safety).
Of course, it should be noted that, as is always the case with IoT-aided applications, any part of a vehicle that talks to the outside world is vulnerable to potential cyber-attack, and special measures should be taken. An important application of IoT/AI-aided robots is home security. An example of these robots is the “AppBot” home security toy robot, a WiFi-controlled robot operated over the Internet (Figure 14).
The robot provides the following capabilities:
a. Live view and remote control.
b. Snapshot and video recording.
c. Motion detection and tracking while communicating with a human.
d. Clear two-way talk.
e. It can be connected to the house router, providing access from anywhere in the world.
f. If intruders appear in the house, the robot can automatically rotate itself to capture them on camera within seconds and send alarm notification messages.
A general comprehensive discussion of IoT-aided robotics applications and implications is presented in [32]. This paper reviews the state of the art, highlights the most important challenges, describes currently available tools, and explains why a joint investigation of IoT-aided robotics problems by research teams with complementary skills is needed. Figure 15 shows an IoT-AI-aided robotics scenario created in [32]. The IoT-aided robotic applications discussed in [32] are the following: healthcare, industrial plants and smart areas, military operations, and rescue operations.
IoT-Aided Industrial Automation
Here, a representative system that generates alarms/alerts and makes intelligent decisions in IoT/AI-aided industrial automation systems is outlined [33]. IIoT enables remote sensing and control of objects across the available network infrastructure. The structure of this system is shown in Figure 16.
The system is equipped with sensors (temperature, pressure, humidity, vibration, intrusion, etc.) to perceive the environment and the objects’ conditions. The analog signals are fed to an Android device, which compares the incoming signals with thresholds set by the system administrator. When an anomalous condition is encountered, special devices (e.g., buzzer, alarm, fan, etc.) are employed to take proper measures, such as sending an alarm/alert to the system administrator. Then, with the aid of AI, the system takes appropriate steps to resolve the problem on the basis of past experience and similar conditions stored in the database. The cloud is an appropriate choice for the database because of its scalability. Cloud computing in industrial IoT provides computing services like storage, servers, networking, software, databases, analytics, etc. Cloud-based storage allows data files to be saved in a remote database rather than on a local storage device, and sharing over the network is much faster than access via other networking approaches. Figure 17 gives a pictorial illustration of cloud computing use in the manufacturing sector.
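As an illustration of the threshold-checking step, the sketch below compares incoming readings against administrator-set limits and emits alerts. The sensor names, limits, and actions are assumptions for illustration, not the actual system of [33].

```python
# Administrator-set limits: sensor -> (min, max). Values are illustrative.
THRESHOLDS = {"temperature": (5.0, 80.0),   # °C
              "pressure":    (0.8, 6.0),    # bar
              "humidity":    (10.0, 70.0)}  # %RH

def check_readings(readings):
    """Compare incoming readings with the administrator-set thresholds and
    return a list of alert messages for any anomalous condition."""
    alerts = []
    for sensor, value in readings.items():
        lo, hi = THRESHOLDS[sensor]
        if not lo <= value <= hi:
            alerts.append(f"ALERT: {sensor} = {value} outside [{lo}, {hi}]")
    return alerts

# One incoming sample; temperature is out of range and triggers an alert.
for message in check_readings({"temperature": 92.3, "pressure": 2.1, "humidity": 55.0}):
    print(message)   # in the real system: sound the buzzer, start the fan, notify the administrator
```

The AI layer described above would sit behind this check, matching each alert against past incidents stored in the cloud database before choosing a corrective action.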
Concluding Remarks
The Internet of Things has already been established as a major multidisciplinary field that promises services of enormous value to society. In particular, its recent integration with AI has already exhibited great success in complex and large-scale real-life applications. The field has reached a very mature state, but scientists and engineers predict that much more advancement will take place in the future, with far-reaching beneficial implications for human life. In this article we have attempted to compile a holistic overview of the IoT field and its synergetic integration with AI in robotic and industrial automation applications. In industrial/robotic automation, IoT enables successful facility management, production flow monitoring, inventory control, logistics, supply chain management, and robotic operation. Although IoT security has received considerable attention from the beginning of the field, the solutions derived and used so far have not proved completely successful. Actually, security and privacy still remain the biggest challenges in IoT and IIoT applications. Another problem which is still largely open is the design of distributed and many-to-many IoT/IIoT. This will require the development of new kinds of interconnectedness, interrelationship, and interdependence, such that IoT/IIoT will offer a collective and collaborative resource to which individuals can contribute at will. A further topic of high value in the implementation and application of IoT/IIoT-AI is the study of ethics/morality, which determines the principles and rules that have to be applied in the field for securing an ethical/moral use of IoT/AI in everyday-life applications [34-40].
Ionic liquid as Functional Dispersant for Nanomaterials in Polymer Matrix | Juniper Publishers
Juniper Publishers-Open Access Journal of Polymer Science
Authored by Jiji Abraham
Abstract
Ionic liquid has been used as a novel dispersant for fillers in polymer matrices. Enhanced interfacial interaction between the reinforcing material and the polymer matrix leads to smart materials. The use of ionic liquid as a modifier is an environmentally friendly method to fabricate nanocomposites with potential applications.
Keywords: Ionic liquid; Polymer matrix; Interfacial interaction; Potential applications
Abbreviations: RTILs: Room-Temperature Ionic Liquids; MWCNT: Multiwalled Carbon Nanotube; GO: Graphene Oxide; BNNS: Boron Nitride Nanosheets; BIIR: Bromobutyl Rubber; ILs: Ionic Liquids; GO-ILs: Ionic Liquid-Modified Graphene Oxide; MEMS/NEMS: Micro/Nanoelectromechanical Systems; ZIL: Zwitterionic Imidazolium-Based Ionic Liquid
Ionic liquid as a Functional Dispersant for Nanomaterials in Polymer matrix
Potential applications as well as the strength and durability of polymers can be enriched by reinforcement with various nanosized fillers. However, the widespread application of nanomaterials as reinforcing agents has been limited due to their processing difficulty and their tendency to form agglomerates. Incorporation of active groups on fillers by emerging chemistry is a good method to overcome the problems associated with filler dispersion. At present, many advanced strategies have been developed to improve the dispersibility and stability of nanofillers in solvents and matrices, which include (a) the covalent attachment of functional groups through chemical reactions and (b) the non-covalent adsorption or wrapping of various functional molecules. Due to the structural alteration and the need for additional solvent during covalent functionalization, non-covalent functionalization is preferred.
Non-covalent functionalization of fillers by Room-Temperature Ionic Liquids (RTILs) has received considerable attention in recent years. RTILs, usually liquid at or near room temperature, are non-volatile, non-flammable and thermally stable. They provide an environmentally benign “green” alternative to organic solvents for chemical synthesis, extractions and biocatalysis [1].
The use of ionic liquid as a novel dispersant for fillers has been developed as an environmentally friendly technology to functionalize them. Commonly reported dispersants are solids, which need an additional solvent to disperse the nanomaterial. In contrast, ionic liquids are fluid at room temperature and are made entirely of ions (an asymmetric cation and a symmetric anion) [2].
Das et al. first reported the use of ionic liquid as a dispersing agent for Multiwalled Carbon Nanotube (MWCNT) [3]. Cation-π interaction between the cationic part of the ionic liquid and the π-conjugated MWCNT surface is the reason behind the dispersion of MWCNT. Since ionic liquid can act as a dispersant, it improves the overall performance of the nanocomposites. Researchers from the same group have tried different ionic liquids to functionalize MWCNT and studied their effect on various properties of the fabricated nanocomposites. These studies showed clear evidence for the enhanced dispersion of MWCNT in the presence of ionic liquid, and for improved cure characteristics, mechanical performance, dielectric characteristics, electrical conductivity, ionic conductivity, thermal stability, thermal conductivity, oxidation resistance, thermo-mechanical properties and processability. Flexible and stable electromagnetic shielding materials can be fabricated with the aid of ionic liquid-modified MWCNT [4,5].
Studies have also been reported on dispersing other fillers like graphene, graphite oxide, graphene oxide, clay, layered double hydroxides, silica, carbon black, etc. with the aid of ionic liquid. On mixing Graphene Oxide (GO) and ionic liquid, the ILs were effectively intercalated into the interlayers of GO, which was found to raise the exfoliation degree of GO. It was found that both the thermal stability and the thermal conductivity of Bromobutyl Rubber (BIIR) nanocomposites could be improved by incorporating Ionic Liquid-modified graphene oxide (GO-ILs) [6]. A tribological study of functionalized graphene-IL nanocomposite ultrathin lubrication films on Si substrates showed promising applications in the lubrication of micro/nanoelectromechanical systems (MEMS/NEMS) [7]. It is possible to control the pore size, electrical conductivity and mechanical robustness of polyurethane nanocomposite foam by incorporating graphene oxide modified with the 1-methyl imidazole chloride ionic liquid [8]. Boron Nitride Nanosheets (BNNS) are exfoliated with the help of ionic liquid by physical adsorption of IL on the BNNS surfaces. Highly thermally conductive and electrically insulating polymer nanocomposites can be prepared using this material [9].
Ionic liquid can be used as an interfacial agent, surfactant or organic modifier for layered double hydroxides in polymer nanocomposites [10]. Ionic liquids have also been used as an environmentally friendly material to improve the processability of polymer nanocomposites containing layered silicates [11]. A Zwitterionic Imidazolium-Based Ionic Liquid (ZIL) was used to modify both cationic and anionic clay minerals. The ZIL was able to penetrate into the interlayer space of the clay and modified the interfacial properties [12]. Studies have also been reported on the role of ionic liquid as an interfacial modifier for silica and as a cure accelerator in polymer nanocomposites [13].
In conclusion, the use of ILs affords not only high-yield, mild, facile exfoliation of various fillers but also non-covalent functionalization of fillers for multifunctional applications. Merging the processing techniques of nanocomposites with ionic liquid for efficient dispersion of nanomaterials facilitates the development of new, high-performance materials.
Methods of Game Theory in Safe Maritime Transport- Juniper Publishers
Introduction
Taking into consideration the form of the quality index, the problems of optimal control of maritime objects may be split into three groups, in which the cost of the process course: (a) is a unique function of the control; (b) depends on the way of control and also on a certain random event with a known statistical description; or (c) is defined by the choice of the control method and by a certain indefinite factor with an unknown statistical description. The last group of problems refers to game control systems, the synthesis of which is performed by using methods of game theory.
Classification of Games
The following types of games can be discriminated:
A. With regard to the number of players: two-person and n-person,
B. With regard to the strategy sets: finite and infinite,
C. With regard to the nature of co-operation: non-coalition, co-operative (through relationships established earlier), and coalition games,
D. With regard to the nature of the prize: zero-sum games (closed, with a saddle point determined by the optimal pure strategies) and games of any sum, e.g., international trade,
E. With regard to the form of the goal function: matrix, non-continuous and convex,
F. With regard to the manner of conducting the game: games in normal form (one-step, static) and games in extensive form, i.e., multi-step games determined by a sequence of moves executed alternately within kinematic and dynamic processes; games in extensive form are split into positional, stochastic and differential games,
G. With regard to the nature of information: with complete and incomplete information,
H. With regard to the kind of opponent: with a thinking opponent and with nature – an environment performing random moves and not interested in the final result of the game.
Differential games in control engineering
Three classes of the control problems may be discriminated which may offer possibilities to use the differential games both for the description and synthesis of the optimal control:
a. control of the object with no information available on the disturbances acting on it. In this case we have only the state equations of the object and a set of admissible controls. The control should then be determined so as to minimize the functional under the condition that the disturbance tends to maximize it; in this case the differential game is solved with a min-max optimality condition (written out in symbols after this list),
b. control of the object encountering a greater number of moving objects with different quality indices and final goals. An example, in this case, is the process of ship control in collision situations when encountering a greater number of moving or non-moving objects (vessels, underwater obstructions, shore line, etc.) – a differential game with many participants,
c. synthesis of multi-layer hierarchical systems. The theory of games, including common interests and the right to the first move, is one of the essential hierarchical languages for steering systems of various natures and for methods of determining optimal control.
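In symbols, the min-max condition of case (a) takes the following standard form, where x is the state, u ∈ U the admissible control, v ∈ V the disturbance, and J the quality functional (the specific form of J depends on the application):

```latex
\dot{x} = f(x, u, v), \qquad
u^{*} = \arg\min_{u \in U} \, \max_{v \in V} \, J(u, v)
```

That is, the optimal control u* guarantees the best attainable value of the quality index under the worst-case behaviour of the disturbance.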
Game control processes in marine navigation
The classical issues of the theory of the decision process in marine navigation include the safe control of a ship. The problem of non-collision strategies in control at sea has been developed by many authors, both within the context of game theory and in steering under uncertainty conditions.
The definition of the problem of avoiding a collision seems quite obvious; however, it is complicated by the uncertainty of information, which may result from external factors (weather conditions, sea state), incomplete knowledge about other objects, and the imprecise nature of the right-of-way recommendations contained in the International Regulations for Preventing Collisions at Sea (COLREG). The problem of determining safe strategies remains an urgent issue as a result of the ever-increasing traffic of vessels in particular water areas. It is also important due to the increasing requirements for the safety of shipping and environmental protection on the one side, and the improving opportunities to use computers supporting the navigator’s duties on the other.
Differential game of maritime object process control
In order to ensure safe navigation, ships are obliged to observe the legal requirements contained in the COLREG Rules. However, these Rules refer exclusively to two ships under good visibility conditions; in the case of restricted visibility the Rules provide only recommendations of a general nature and are unable to consider all necessary conditions of the real process. The real process of ships passing one another occurs under conditions of indefiniteness and conflict, accompanied by imprecise co-operation among the ships in the light of the legal regulations. Consequently, it is reasonable - for ship operational purposes - to describe this process and to develop and examine methods for safe steering of the ship by applying the rules of game theory.
The necessity to consider simultaneously the strategies of the encountered objects and the dynamic properties of the ships as control objects is a good reason for applying a differential game model to the description of these processes.
Types of game object control
Assuming that the dynamic movement of the maritime objects in time occurs under the influence of the appropriate sets of control strategies U0 and Uj, where:
U0 - a set of the own object’s strategies, as possible changes in the value of the course and the speed of the own object,
Uj - a set of the j-th object’s strategies, as possible changes in the value of the course and the speed of the j-th object,
each strategy may take one of three manoeuvre modes:
(0) - denotes course and trajectory stabilisation,
(1) - denotes the execution of the anti-collision manoeuvre in order to minimize the risk of collision, which in practice is achieved by satisfying the inequality Dj,min = min Dj(t) ≥ Ds, where:
Dj,min - the smallest distance of approach of the own ship and the j-th encountered object,
Ds - the safe approach distance in the prevailing conditions, which depends on the visibility conditions at sea, the COLREG Rules and the object dynamics,
Dj - the current distance to the j-th object, taken from the ARPA anti-collision system,
(-1) - refers to the manoeuvring of the object in order to achieve the closest point of approach, for example during the approach of a rescue vessel, the transfer of cargo from ship to ship, or the destruction of an enemy ship (Figure 1). A numerical check of the safe-approach inequality is sketched below.
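The safe-approach condition can be checked numerically. The following is a minimal sketch that computes the closest point of approach between the own ship and one encountered object under a straight-line motion assumption; the positions, velocities, and safe distance Ds used in the example are illustrative values only, not data from any real system.

```python
import numpy as np

def closest_point_of_approach(own_pos, own_vel, obj_pos, obj_vel):
    """Return (Dj_min, t_min): the smallest future distance between the own
    ship and the j-th object, and the time at which it occurs, assuming
    both keep a constant course and speed."""
    dp = np.asarray(obj_pos, float) - np.asarray(own_pos, float)  # relative position
    dv = np.asarray(obj_vel, float) - np.asarray(own_vel, float)  # relative velocity
    speed2 = dv @ dv
    # Time of closest approach; clamped to 0 so past approaches are ignored.
    t_min = 0.0 if speed2 < 1e-12 else max(0.0, -(dp @ dv) / speed2)
    return float(np.linalg.norm(dp + dv * t_min)), t_min

# Own ship heading east at 10 kn; object 1.5 nm to the NE, closing to the SW.
Dj_min, t_min = closest_point_of_approach((0, 0), (10, 0), (1.5, 1.5), (-7, -7))
Ds = 1.0  # assumed safe approach distance [nm]
if Dj_min < Ds:
    print(f"anti-collision manoeuvre required: Dj,min = {Dj_min:.2f} nm at t = {t_min:.2f} h")
```

In an ARPA-like setting this check would be repeated for every tracked object j and for every candidate course/speed manoeuvre of the own ship, retaining only those manoeuvres for which Dj,min ≥ Ds holds for all objects.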
In the adopted notation - where a pair of indices gives the manoeuvre mode of the own ship followed by that of the j-th object - we can discriminate the following types of object control used to achieve a determined goal:
A. basic type of control - stabilization of the course or trajectory: (0)(0),
B. avoidance of a collision by executing:
a. own ship’s manoeuvres: (1)(0),
b. manoeuvres of the j-th ship: (0)(1),
c. co-operative manoeuvres: (1)(1),
C. encounter of the ships: (-1)(-1),
D. situations of a unilateral dynamic game - dangerous situations resulting from a faulty assessment of the approaching process by one of the parties, with the other party’s failure to conduct observation (one ship is equipped with a radar or an anti-collision system, the other with a damaged radar or without this device),
E. chasing situations, which refer to a typical conflicting differential game.
The first case usually represents regular optimal control, the second and third are unilateral games, while the fourth and fifth cases represent conflicting games [1-7].
Conclusion
The application of simplified models of the dynamic game of the process to the synthesis of optimal control allows the determination of the object’s safe trajectory in situations of passing a greater number of encountered objects, as a certain sequence of course and speed manoeuvres. The developed software also takes into consideration the Rules of the Regulations for Preventing Collisions at Sea (COLREG) and the advance time of the manoeuvre approximating the object’s dynamic properties, and evaluates the final deviation of the real trajectory from the assumed value. The considered steering algorithms are, in a certain sense, formal models of the thinking processes of a navigator conducting a maritime object and making manoeuvring decisions. They may be applied in the construction of appropriate training simulators at maritime educational centres and also in various options of the basic module of the ARPA anti-collision system.
Different Effects of Olive Leaf on Purine Metabolizing Enzymes of Human Gastric Tissues in Vitro | Juniper Publishers
Juniper Publishers-Open Access Journal of Cancer Therapy & Oncology
Authored by Hikmet Can Çubukçu
Abstract
Olive leaf (Olea europaea leaf) is a natural food source known to have anticarcinogenic, antiproliferative and anti-inflammatory effects in different types of tissues. Adenosine deaminase, 5’-nucleotidase and xanthine oxidase are enzymes playing a part in purine metabolism, including the salvage pathway. In the present study, we aimed to investigate possible inhibitory effects of aqueous extract of olive leaf on different purine-metabolizing enzyme activities in benign and malign human gastric tissues. Fourteen cancerous and 14 adjacent noncancerous human gastric tissues were surgically removed from patients who underwent surgical operation. Tissues treated and not treated with olive leaf extract were analyzed in vitro for adenosine deaminase, 5’-nucleotidase and xanthine oxidase activities.
Our results showed that aqueous extract of olive leaf significantly inhibited adenosine deaminase activity in cancerous gastric tissue (p=0.000) and 5’-nucleotidase activity in noncancerous gastric tissue (p=0.001). However, no significant differences were found between tissue xanthine oxidase activities. The results indicate that aqueous extract of olive leaf may exhibit anti-cancer activities by inhibiting adenosine deaminase and 5’-nucleotidase in gastric tissues.
Keywords: Olive leaf; Cancer; Adenosine deaminase; 5’-nucleotidase; Xanthine oxidase; Oleuropein; Apigenin; Luteolin; Quercetin; Tyrosol; Hydroxytyrosol; Caffeic acid; Ferulic acid; p-Coumaric acid
Introduction
Cancer is increasingly becoming a worldwide public health problem. 14.1 million new cancer cases and 8.2 million cancer deaths were reported in 2012 worldwide. It is expected that by 2025, 20 million new cancer cases will be diagnosed each year. The most common cancer types are lung, breast, and colorectal cancer, respectively [1]. Gastric cancer is the fourth most common cancer and the second most common cause of cancer deaths worldwide [2]. While radiotherapy and chemotherapy are used to treat these cancers, severe side effects can be seen in some patients. Recently, natural and herbal remedies have attracted attention owing to their reported ability to treat some diseases like cancer. Natural products can be used not only to treat cancer but also to prevent it [3]. Smoking cessation, fruit and vegetable intake, reducing salt intake, and Helicobacter pylori eradication can help prevent gastric cancer [4]. Olea europaea is an evergreen tree which belongs to the Oleaceae family. The plant is cultivated widely in the Mediterranean basin [5]. While the fruits and the oil are consumed for nutrition, the Olea europaea leaf has been used as a folk remedy for centuries.
Studies have shown that olive leaf has antiproliferative, apoptotic, antiatherosclerotic, antioxidant, antidiabetic, anti-HIV and antifungal properties. Olive leaf contains several phenolic compounds like oleuropein, apigenin, luteolin, quercetin, tyrosol, hydroxytyrosol, caffeic acid and ferulic acid. The potential health benefits of olive leaf have mainly been attributed to these bioactive substances [6]. Adenosine deaminase (ADA) is an enzyme involved in purine metabolism which deaminates adenosine and deoxyadenosine to inosine and deoxyinosine respectively. It plays an important role in the differentiation of the lymphoid system. ADA deficiency is related to severe combined immunodeficiency disease (SCID). Therefore, ADA inhibitors are used to treat lymphoproliferative disorders as an immunosuppressive therapy [7].
5’-nucleotidases are important enzymes for maintaining nucleotide pools; they dephosphorylate nucleoside monophosphates to nucleosides and inorganic phosphate. Nucleoside triphosphates are necessary for maintaining vital cellular processes. Since 5’-nucleotidases are responsible for the degradation of nucleoside monophosphates, they can regulate cellular energy homeostasis by changing the nucleoside triphosphate to monophosphate ratio [8].
Xanthine oxidase (XO) is involved in purine metabolism, catalyzing the oxidation of hypoxanthine to xanthine, and of xanthine to uric acid [9]. It generates superoxide radicals and hydrogen peroxide during this oxidation [10]. These reactive oxygen species may contribute to various diseases like cancer [9]. It has also been reported that XO may be a crucial therapeutic target for some diseases like gout, cancer, inflammation and oxidative damage [11]. The present study aims to clarify the possible anticarcinogenic effects of aqueous olive leaf extract with regard to the purine-metabolizing enzyme activities of human gastric tissues in vitro.
Methods
Fourteen cancerous and 14 adjacent noncancerous human gastric tissues were obtained from patients by surgical operation. After being cleaned with saline solution, fresh surgical specimens were stored at -80 °C until analysis. Before the analysis procedure, specimens were first homogenized with a DIAX 900 (Heidolph, Kelhaim, Germany) in saline solution (20%, w/v). The homogenates were centrifuged at 5000 rpm for 30 min in a Harrier 18/80 centrifuge (MSE, London, UK) to remove debris. Then, the clear supernatant fractions were taken for enzymatic analysis. Aqueous extract of olive leaf (Olea europaea leaf) was prepared at a concentration of 10% (w/v) in distilled water. Tissue homogenates were treated with the aqueous extract of olive leaf for 1 hour.
Enzyme activities were measured in the specimens with and without olive leaf extract spectrophotometrically by using a Helios alpha Ultraviolet/Visible Spectrophotometer (Unicam, Cambridge, UK). Protein concentration in the samples was measured by the method of Lowry and adjusted to equal concentrations [12]. ADA activities were measured by the Giusti method, which is based on the spectrophotometric measurement of a blue-colored dye formed after the reaction of ammonia (the product of adenosine deamination) with phenol nitroprusside and alkaline hypochlorite solution [13]. Xanthine oxidase activities were evaluated by measuring uric acid formation from xanthine at 293 nm [14], and 5’-nucleotidase activities were determined by measurement of liberated phosphate at 680 nm as described previously [15].
Statistical evaluations between groups were made by using the Mann-Whitney U test, and p values lower than 0.05 were considered significant. All statistical calculations were performed by using SPSS statistical software (SPSS for Windows, version 11.5).
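The group comparison described here can be reproduced with any standard statistics package. Below is a minimal sketch using SciPy rather than SPSS; the activity values are made-up placeholders, not the study’s measurements.

```python
from scipy.stats import mannwhitneyu

# Hypothetical enzyme activities (arbitrary units per mg protein) for
# untreated vs. olive-leaf-extract-treated homogenates -- placeholder
# numbers for illustration only, not the study's data.
untreated = [12.4, 15.1, 11.8, 14.0, 13.3, 16.2, 12.9]
treated   = [ 8.1,  9.4,  7.7, 10.2,  8.8,  9.9,  8.5]

stat, p = mannwhitneyu(untreated, treated, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")   # p < 0.05 -> significant difference between groups
```

The Mann-Whitney U test is a sensible choice here because the small sample sizes (n = 14 per group) make normality assumptions hard to justify.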
Results
ADA, 5’-NT and XO activities are shown in Table 1, and p values in Table 2. It was observed that aqueous extract of olive leaf significantly inhibited adenosine deaminase in malign gastric tissue (p=0.000) and 5’-nucleotidase in benign gastric tissue (p=0.001). However, no significant differences were found between tissue xanthine oxidase activities. Although the ADA activities in the treated benign tissues and the 5’-nucleotidase activities in the treated malign tissues were lower than those in the non-treated tissues, the differences were not statistically significant (p=0.067, p=0.062). In addition, we found no meaningful differences between benign and malign tissue enzyme activities.
Discussion
Natural remedies have been used from ancient times in conventional Eastern medicine. It is known that greater plant consumption reduces the incidence rates of cancer. Phenolic compounds are among the plant ingredients that exhibit anticancer properties [16]. Olive leaf contains various phenolic compounds including oleuropein, ligstroside aglycone, oleuropein aglycone, quercetin, isorhamnetin, rutin, catechin, gallocatechin, apigenin, luteolin, tyrosol, hydroxytyrosol, gallic acid, p-coumaric acid, caffeic acid and ferulic acid that contribute to its anti-carcinogenic, antioxidant, anti-inflammatory and antimicrobial effects [17].
Researchers have demonstrated that hydroxytyrosol-rich extract of the olive leaf can inhibit human breast cancer cell growth owing to cell cycle arrest in the G0/G1 phase [18]. Oleuropein and its semisynthetic peracetylated derivatives have been documented for their antiproliferative and antioxidant effects on human breast cancer cell lines [19]. The olive leaf extract’s antigenotoxic, antiproliferative and proapoptotic activities on human promyelocytic leukemia cells were previously reported [20]. Further, researchers have shown that dry olive leaf extract possesses strong anti-melanoma potential by reducing tumour volume, inhibiting proliferation, and causing cell cycle arrest [21]. Studies have also demonstrated the gastroprotective activity of olive leaf [22] and its antioxidant effects on ethanol-induced intestinal mucosal damage [23].
ADA is responsible for adenosine and deoxyadenosine breakdown. Inhibition of ADA blocks the deamination of purine nucleosides, and the consequent accumulation of the ADA substrate 2-deoxyadenosine inhibits ribonucleotide reductase. This process leads to a reduction of the nucleotide pool and limits DNA synthesis [24]. Phosphorylation of deoxyadenosine results in deoxyadenosine triphosphate production. Deoxyadenosine triphosphate and deoxyadenosine both inactivate S-adenosylhomocysteinase [25] and affect the cellular methylation of substances like proteins, DNA and RNA [26]. There are many studies on ADA activation in different pathologic conditions. Inhibition of ADA was found to reduce intestinal inflammation in experimental colitis [27]. A study on a human gastric cancer cell line has also shown that extracellular adenosine induces apoptosis [28]. It is known that chronic inflammation predisposes to gastric cancer [29]. Adenosine exerts its metabolic function through its four G-protein coupled receptors. Adenosine A2A receptor activation possesses anti-inflammatory effects in various conditions [30].
In the present study, aqueous extract of olive leaf was found to inhibit adenosine deaminase in malign gastric tissue significantly (p=0.000). Inhibition of ADA can promote adenosine accumulation, and therefore it may not only induce apoptosis but also exhibit anti-inflammatory effects in cancerous gastric tissue. 5’-nucleotidases are responsible for the degradation of nucleoside monophosphates. To date, 7 types of human 5’-nucleotidases have been identified [8]. One of them is ecto-5’-nucleotidase, also known as CD73. Studies have implied that ecto-5’-nucleotidase regulates the proliferation, migration and invasion of cancer cells in vitro, and tumor angiogenesis and tumor immune evasion in vivo [31]. Nucleoside analogues are used as both anticancer and antiviral agents; these drugs inhibit DNA synthesis through their active metabolites. Studies have shown that enhanced nucleotidase activity can cause anticancer drug resistance by inhibiting nucleoside analogue activation [8]. Moreover, Lu et al. have reported that CD73 expression in malign gastric tissues is higher than in benign gastric tissues. That study also indicated that CD73 overexpression is related to the differentiation of the tumour, depth of invasion, stage and metastasis [32]. However, in the present study, no meaningful differences were found between the 5’-nucleotidase enzyme activities of benign and malign tissues. Furthermore, the results of the present study show that aqueous extract of olive leaf significantly inhibits 5’-nucleotidase in benign gastric tissue (p=0.001). Although the 5’-nucleotidase activities in malign tissues treated with olive leaf extract were lower than those in the untreated tissues, the differences were not statistically significant.
Xanthine oxidase is the last enzyme in purine degradation, converting purines to uric acid and hydrogen peroxide. Hydrogen peroxide is one of the reactive oxygen species. Although hydrogen peroxide can play a part in the oxidative damage of DNA and promote malignant transformation, some studies have shown that this substance is able to kill cancer cells at higher concentrations [33]. It has been reported that olive leaf extract inhibits xanthine oxidase activity in vitro [34]. However, in our study, we found no significant differences between tissue xanthine oxidase activity values.
Our results show that olive leaf extract significantly inhibits adenosine deaminase activity in malign gastric tissue but does not affect xanthine oxidase activity. It seems quite reasonable that accumulation of adenosine can exert anti-carcinogenic properties by inducing apoptosis and by anti-inflammatory effects. Additionally, inhibition of ADA and 5’-nucleotidase can deplete the nucleotide pool, which is very important for new DNA synthesis. This study reveals preliminary information about the different effects of olive leaf on the purine-metabolizing enzymes of benign and malign gastric tissues. Therefore, further in vivo studies should be conducted to clarify the possible anti-carcinogenic effects of olive leaf.
Severity of Hip Displacement in Relation to Subtypes and Motor Function in Cerebral Palsy- Role of Hip Surveillance | Juniper Publishers
Juniper Publishers-Open Access Journal of Orthopedics and Rheumatology
Authored by Kunju PAM
Abstract
Background: Hip dislocation in children with cerebral palsy (CP) is a common problem, often overlooked by the treating pediatricians. Though it can be diagnosed early by using radiographs, knowledge about the standardized methodology and the need for periodic surveillance is lacking among primary care pediatricians. Hip surveillance by X-ray pelvis can identify early hip dislocation, and it has been shown that early intervention may prevent the need for surgery [1].
Methods: The study was done in a tertiary care hospital as a onetime radiological evaluation of children with CP in the age group of 4-9 years. One hundred and one children with CP formed our study population. Clinical evaluation for details regarding CP type and assessment of motor ability by the gross motor function classification system (GMFCS) was done. A hip X-ray was done for calculation of the migration index, to establish or rule out hip displacement. Migration percentage (MP) was analyzed in relation to CP subtypes and GMFCS grades.
Results: There were 48 boys and 53 girls (mean age 4.80 years). 12 children were Gross Motor Function Classification System (GMFCS) level 5, while 26 were GMFCS level 4. Out of 36 children with hemiplegic CP, only one had MP > 40%. Out of 6 children with spastic quadriplegia, 5 (83%) had MP > 40%. The spastic diplegic and choreoathetotic subtypes showed MP > 40% in 9 out of 43 and 7 out of 16 children respectively. According to the gross motor function classification system, GMFCS level I had no child with MP > 40%, whereas 50% of children in GMFCS levels IV and V had MP > 40%, compared to only 4.76% in GMFCS I and II put together.
Conclusion: None of the children in this study had undergone a hip X-ray prior to the study. 22 out of 101 children had a severe degree of hip displacement. The maximum number of hip displacements was seen in children with spastic quadriplegia; the spastic diplegic and choreoathetotic subtypes showed intermediate risk of hip displacement, and hemiplegia had very low risk. According to the gross motor function classification system, GMFCS level I had no child with MP > 40%, whereas 50% of children in GMFCS levels IV and V had MP > 40%. The study showed the relationship between the CP subtypes and the severity of the motor involvement. It also emphasized the need for early hip surveillance.
Keywords: Hip dislocation; Cerebral palsy; Lateral Displacement; Hip surveillance
Introduction
In children with spastic cerebral palsy, reduced activity of the hip abductor muscles in comparison with the spastic adductors leads to diminished growth of the greater trochanter of the femur and results in pathologic deformities of the hips: femoral anteversion and coxa valga antetorsa [2]. If untreated, dislocation of the hip typically occurs at age 2-7 years, with a maximum at the age of 6 years. The incidence of hip displacement in cerebral palsy is related to the severity of involvement, varying from 1% in children with spastic hemiplegia up to 75% in those with spastic quadriplegia [2,3]. So periodic evaluation of hip function is essential for early intervention and preventive measures.
Hip surveillance is defined as: “The process of monitoring and identifying the critical early indicators of hip displacement” [4]. Hip displacement refers to the displacement of the femoral head laterally out of the acetabulum and is measured using a migration percentage (MP). Hip subluxation refers to hip displacement where the femoral head is partially displaced from under the acetabulum, while hip dislocation refers to hip displacement where the femoral head is completely displaced from under the acetabulum [5,6]. Hip surveillance is important to prevent the morbidity of spastic hip disease. The aim of management in children with spastic hip displacement is to maintain flexible, well-located and painless hips with a symmetrical range of movement. The key to achieving this goal is early identification and intervention.
Periodic hip surveillance also helps to reduce the need for extensive surgical procedures, which are a highly specialized area of orthopedics that may not be available in every center. So the primary care pediatrician has a role in hip surveillance and timely referral.
Patients and Methods
The study was done in the pediatric neurology department of a tertiary care hospital as a onetime radiological evaluation of children with CP in the age group of 4-9 years, prior to referral to orthopedics. One hundred and one children in the age group of 4-9 years with the diagnosis of CP formed our study population. A pediatric neurologist and a physiotherapist in the department examined the children and completed an assessment form. Clinical evaluation for details regarding CP subtype and assessment of motor ability by the gross motor function classification system (GMFCS) [6] was done. The Winters, Gage, Hicks (WGH) gait type was determined, in addition to inquiring about pain during history taking. Orthopedic consultations were done whenever required.
Radiographic Examination
The decision for referral for surgery depends on the degree of displacement of the femoral head and acetabular dysplasia. The migration percentage described by Reimers and the acetabular index described by Hilgenreiner are the conventional measurements of displacement of the hip and acetabular dysplasia in young children with cerebral palsy. Radiographic assessment consists of measurement of the migration percentage (MP) from a supine AP pelvis radiograph with standardized positioning [7] (Figure 1). The Reimers hip migration percentage is the percentage of the width of the femoral capital epiphysis displaced out of the acetabulum (i.e., falling lateral to Perkin’s line) [8]. Measurement of the migration percentage of the femoral head was done as given in Figure 2.
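Since the migration percentage is a ratio of two widths measured on the radiograph, its computation is straightforward. The sketch below applies the thresholds used in this study (MP ≥ 33% subluxation, MP > 40% severe displacement); the example measurements are illustrative values only, not patient data.

```python
def migration_percentage(width_lateral_to_perkins_mm, femoral_head_width_mm):
    """Reimers migration percentage: the share of the femoral head width
    lying lateral to Perkin's line, expressed as a percentage."""
    return 100.0 * width_lateral_to_perkins_mm / femoral_head_width_mm

mp = migration_percentage(18.0, 42.0)  # example widths measured on the AP radiograph [mm]
if mp > 40:
    print(f"MP = {mp:.1f}%: severe displacement")
elif mp >= 33:
    print(f"MP = {mp:.1f}%: subluxation, refer for orthopedic review")
else:
    print(f"MP = {mp:.1f}%: continue periodic surveillance")
```

Both widths are read off the same radiograph relative to Perkin’s line, so the ratio is insensitive to radiographic magnification, which is part of why MP is the standard surveillance measure.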
In the adult or older child, where the triradiate cartilages are fused and therefore inapparent, the inferior margin of the pelvic teardrop is used instead. The acetabular angle using Hilgenreiner’s line should be less than 28° at birth. The angle should become progressively shallower with age and should measure less than 22° at and beyond 1 year of age.
In the present study, an anteroposterior (AP) pelvic radiograph was taken at the time of the first visit. Any decrease in the range of movement at the hip or the presence of scoliosis was a definite indication for further detailed radiological examination and immediate referral. In the present study 101 children were assessed between 4 and 9 years of age. Children with MP > 33% and > 40% were compared in relation to those with MP below these limits. Migration percentage (MP) was analyzed in relation to CP subtypes and GMFCS grades.
Results
There were 48 boys and 53 girls (mean age 4.80 years). The distribution of cerebral palsy subtypes was as follows: hemiplegic 36 (35.64%), quadriplegic 6 (5.94%), diplegic 43 (42.57%) and choreoathetotic 16 (15.84%). 12 children were Gross Motor Function Classification System (GMFCS) level 5, while 26 were GMFCS level 4. Results of hip displacement by radiography, as measured by MP in relation to CP subtypes and motor severity, are presented in (Tables 1&2) and (Figure 3).
Only one child out of 36 children with spastic hemiplegia developed MP > 40%. The maximum number of hip displacements was seen in children with spastic quadriplegia, where 5 of 6 children (83%) had MP > 40%. The spastic diplegic and choreoathetotic subtypes showed intermediate risk of hip displacement (9 out of 43 and 7 out of 16 respectively had MP > 40%). In the present study the onset of hip displacement could not be assessed, as hip surveillance was not done on a periodic basis. Figure 4 shows the hip X-ray of a 4-year-old with very minimal displacement (MP 33.33%) and the severe hip displacement of an 8-year-old child.
According to the gross motor function classification system, GMFCS level I had no child with MP > 40%, whereas 50% of children in GMFCS levels IV and V had MP > 40%, compared to only 4.76% in GMFCS I and II put together.
Discussion
The natural history of spastic hip disease of CP is progressive lateral displacement of the hip secondary to spasticity and muscle imbalance in the major muscle groups around the hip. Displacement may progress to severe subluxation, secondary acetabular dysplasia, deformity of the femoral head, dislocation and painful degenerative arthritis [4,5]. The long-term effects of dislocation of the hip can be disastrous for individual patients leading to pain and loss of the ability to sit comfortably in up to 50% of cases [6]. Other problems include difficulty with perineal care and personal hygiene, pelvic obliquity and scoliosis, poor sitting balance and loss of the ability to stand and walk [7-11].
A hip is usually considered to be subluxed if the migration is equal to or greater than 33%. Reimers [17] found that among normal children, the 90th centile for migration percentage at four years was 10%, with spontaneous migration of less than 1% per year. An unstable migration percentage is one whose progression is greater than or equal to 10% over 1 year [12-16]. The present study has shown that even a single radiological evaluation can identify hip displacement in children after the age of 4 years. The majority of children with quadriplegic CP (5 out of 6) had a severe type of hip displacement, compared to hemiplegic CP (1 out of 36). Compared to the other bilateral types of CP, diplegia had a lower rate of hip displacement (9 out of 43). This may be because of the lesser motor function impairment. So GMFCS may be a better predictor for the early prediction of hip structural impairment: there is a direct correlation between the GMFCS class and severe hip displacement. According to the gross motor function classification system, GMFCS level I had no child with MP > 40%, whereas 50% of children in GMFCS levels IV and V had MP > 40%, compared to only 4.76% in GMFCS I and II put together.
Subtyping of CP may have a role in predicting the occurrence of severe hip displacement, as shown by the almost complete occurrence in quadriplegic CP. However, a mere clinical examination and subtyping will not help in identifying severe hip disease in the other types of CP. So a systematic analysis of GMFCS is required for intensified screening of hip dysfunction. Moreover, as described in various guidelines, periodic hip surveillance is mandatory for better ambulation and avoidance of surgery. This can be attained by early intervention measures. Figure 4 itself shows the importance of early surveillance. The AACPDM (American Academy for Cerebral Palsy and Developmental Medicine) recommends the following schedule of hip surveillance (Table 3).
Conclusion
The need for hip evaluation in children with CP is emphasized by this study. None of the children in this study had undergone a hip X-ray prior to the study. 22 out of 101 children had a severe degree of hip displacement. The maximum number of hip displacements was seen in children with spastic quadriplegia, and hemiplegia had very low risk. According to the gross motor function classification system, GMFCS level I had no child with MP > 40%, whereas 50% of children in GMFCS levels IV and V had MP > 40%. The study showed the relationship between the CP subtypes and the severity of the motor involvement in producing hip displacement. Referral to an orthopedic surgeon with experience in treating hip displacement in children with CP is recommended when there is hip pain on history and/or physical examination. Periodic hip surveillance is mandatory for early detection of hip displacement. When the migration percentage is greater than 30% and/or there is less than 30 degrees of hip abduction, with or without other findings, referral to an orthopedic surgeon is recommended [1,17].