#parallel to the field. which means the resulting path of the particle is
rowanintheriver · 6 months ago
Text
you're thinking abt vi as in arcane i'm thinking abt vi as in initial velocity. we are not the same
0 notes
vortexphotonics · 4 years ago
Text
The Uses of Vortex Optical Components in Day-To-Day Tasks
An optical vortex is a singularity, a zero of an electromagnetic field: a point of zero intensity about which the phase of the light winds as the wave propagates. The word is also used for a beam of light that carries such a zero in it. In other words, it is a line in the beam along which the light has no intensity, while the surrounding wavefront spirals around it, like traffic circling an empty center. The optical vortex describes this twist of a light wave as it propagates through a medium.
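To make the definition concrete: in the standard description, a vortex beam has a transverse phase factor exp(i*l*phi) and zero intensity on its axis, where l is the topological charge. Below is a minimal Python sketch of that model; the charge l = 2 and the Gaussian envelope are illustrative assumptions, not taken from this post.

```python
# Minimal sketch of an optical-vortex field: amplitude vanishes on the axis
# and the phase winds as exp(i*l*phi). Values here are assumed examples.
import numpy as np

l = 2                                   # assumed topological charge
x = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, x)
phi = np.arctan2(Y, X)                  # azimuthal angle around the beam axis
r = np.hypot(X, Y)

# Simple vortex profile: r^|l| envelope times the winding phase.
field = r**abs(l) * np.exp(1j * l * phi) * np.exp(-r**2)

print(abs(field[100, 100]))             # ~0: intensity vanishes at the singularity
# Integrating the phase change around any loop enclosing the axis gives 2*pi*l.
```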
Let us see how this works in a simplified example, the so-called Bell experiment. Bell experimented with the effect of a shock wave on the surface of an electric conductor of known thickness. The idea was that if the source is placed at a point, say at a head, then a portion of the waves will be incident on the conductor and will create an induced impulse on the surface of the rod; this impulse produces a variation in the thickness of the rod caused by the change in the angles of reflection and transmission. This deviation from the mean path of the electric field, due to the change in the angles of transmission and reflection, produces a deviation from the zero of the field, which is the principle behind the optical vortex phase plate.
Here is how it works in a more complicated example. Suppose we take a spherical surface, say a disc, as our vortex medium. We can treat the disc as a collection of grains of different sizes, each of which can be deformed into a different shape, say a flat disc. Now assume that the number of grains n is constant. To get the vortex structure, we introduce a thin, flat disc as a collimating lens and let it focus on the disc to form a virtual screen, where the image of the n grains can be seen on the surface of the disc. If we now tighten the focus of the lens and make it smaller, we have effectively introduced a modulation in the topology of the surface of the disc, which alters the shape of the surface and thus produces a variation in the topological charge of the object.
Here is another way to understand the optical vortex. Taking a two-dimensional disc as our sphere of operation, we can rotate the arrangement of the disc by hand, or with an electric motor (like the kind you see on television). Suppose we move the central area of the disc, the area we identified earlier, to the right, and the peripheral areas to the left. We must also suppose that the inner and outer areas of the disc have different angular momentum densities, which results in a net change in the angular momentum of the system.
The simplest model for the generation of a vortex, assuming a non-rotating axis of symmetry, is a one-dimensional curve of constant helical beam profile. The central region radiates radial thrust into the neighboring regions, creating a net force inside the particle. Although the radial force may be complicated, depending on the shapes of the structures inside the system, this method is more accurate than the previous model because it takes into account the effects of tumbling.
Another example of how optical vortices can be used in practice is in the study of charged pairs of particles. When a pair of particles carries opposite charges, the two will come very close together if they are placed at the focus of an optical field. In this case, the momentum of the system is not conserved; instead, the wave action produces an attractive curvature, just as in the earlier model for the production of helical waves. When the orientation of the charged particles changes, the wave field comes into play, causing the electron to collide with the photons.
There are other uses for optical vortices, such as in the application of ultrasound power. In this case, the momentum of the system, along with the strength and angle of modulation, is used to drive helical beams through a fluid. For instance, ultrasound technicians use a machine to create an ultrasound beam that travels through a tube. Inside this tube are two hollow cylinders, one filled with water and the other with air. The air cylinder is filled with varying volumes of water, while the water cylinder has an inner ring of air that forms a tubular shape as it spirals around the outer ring.
When the inner ring of air and the outer ring of air form a U shape, the result is an energy dipole moment, which can be measured in terms of the scattering of electromagnetic radiation. A dynamic holographic optical vortex device, built on the principle of optical vortices, can measure the time it takes for this energy to reach the ends of the tubes. This information is important in designing materials such as lenses and membranes, as well as in building machinery for such things as printing presses. Without such measurements, manufacturers would know little about the materials they use in their day-to-day tasks.
0 notes
padmalochansworld · 5 years ago
Text
Earth has three types of motion: (1) orbital motion, (2) rotational motion, (3) vortex motion.
Every body that has mass creates its own electric field and magnetic field simultaneously, due to its speed. It may be pictured as a sphere surrounding the matter. How much energy does this mass create? Energy transfers from one form to another.
E = mc^2
=> TL = mc^2
=> TL = mv
v = 30 km/sec
=> TL = hf
=> TL - TL0 = hf - hf0 = (1/2)mv^2
=> TL = I/c, where I = Ps/4πr^2
=> TL = 2I/c
T = tension, L = length, I = intensity
=> TL = IA/c
=> TL = 2IA/c
A = area
=> TL = ΔU/c
=> TL = 2ΔU/c
U = energy
I/c: radiation pressure for total absorption.
2I/c: radiation pressure for total reflection back along the path.
IA/c: radiation force for total absorption.
2IA/c: radiation force for total reflection back along the path (a numeric check follows below).
Now, our Earth is composed of a large number of atoms which emit electrical energy, creating an electric field and simultaneously a magnetic field; but the field is moving and rotating under the friction of the Sun's gravitational field (push and pull).
A charged particle moving without acceleration produces an electric as well as a magnetic field. It produces an electric field because it is a charged particle. But when it is at rest, it does not produce a magnetic field. All of a sudden, when it starts moving, it starts producing a magnetic field. Why? What happens to it when it starts moving? What makes it produce a magnetic field when it starts moving?
An electrostatic field surrounds a stationary charge. A moving charge has both magnetic and electric fields surrounding it. But since the magnetic field at a point due to the moving charge keeps changing, there is also an induced electric field. This induced electric field in turn induces a magnetic field, and this goes on in a cycle (Maxwell's equations).
"ACCELERATING charges exert both electric and magnetic forces, while NON-ACCELERATING charges exert only electric forces" A charge at constant velocity doesn't create magnetic fields, only accelerating ones do.
A magnetic field will have no effect on a stationary electric charge (assuming the magnetic field itself is steady). If the charge is moving relative to the magnetic field, there may be an effect, but the size and direction of the effect depend on the direction of motion of the charge through the field. If the charge is moving parallel to the field, there will be no force on it. If the charge is moving at right angles to the field, it will experience a force that is mutually orthogonal to the field and to the direction of motion.
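The parallel and right-angle cases described above follow from the magnetic force law F = qv x B. A small Python sketch; the field and velocity values are assumed purely for illustration.

```python
# Sketch of the magnetic force F = q v x B on a moving charge.
import numpy as np

q = 1.6e-19                              # charge of one proton, C
B = np.array([0.0, 0.0, 1.0])            # assumed field along z, T

v_parallel = np.array([0.0, 0.0, 1e5])   # motion parallel to B
v_perp     = np.array([1e5, 0.0, 0.0])   # motion at right angles to B

print(q * np.cross(v_parallel, B))       # [0 0 0]: no force when parallel
F = q * np.cross(v_perp, B)
print(F)                                 # force along -y: orthogonal to both v and B
```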
The interaction between a moving electric charge and a stationary electric charge distribution is considered. It is shown that the interaction involves not only an electric attraction or repulsion but also a heretofore unreported electric torque exerted by the moving charge on the stationary charge distribution. The torque is associated with the asymmetry of the electric field of the moving charge and is present even if the stationary charge distribution is highly symmetrical, such as a uniformly charged sphere, for example. As a result of the torque, the stationary charge distribution is set in rotation. The rotating stationary charge distribution creates a magnetic field and an induced electric field that act on the moving charge thus further contributing to the complexity of the interaction.
Two types of moving charges are considered: a point charge moving with constant speed along a straight line and a point charge moving with constant speed along a circular orbit. The torques exerted by these charges on stationary charge distributions in the shape of a small circular ring, small disc, and small sphere of uniform charge density are computed and some consequences of these torques are discussed. The possibility of the existence of a similar interaction effect in gravitational systems is also considered.
The Higgs field interacts with particles to give them mass, or to produce mass. Consider the Higgs field in the case of a solid, a liquid, a gas, and a vacuum. Neutrinos and antineutrinos interact so weakly with matter that they even penetrate through the Earth without being absorbed. Their physical properties are mass and spin (intrinsic angular momentum): spin 1/2, with rest mass almost zero. Both produce real photons, virtual photons, and gravitons when they penetrate through the Earth.

Gravitational force -> electromagnetic force + weak electroforce + white force + energy + torque + linear momentum of the light wave + extra force produced by space on the opposite (dark) side of the Earth + vibration wave force + weak force produced by other leptons + radiation pressure (quantum radiation formed by particles, antiparticles, quarks and antiquarks).

Quantum electrodynamics is the study of the interaction between two electrons at the atomic level. According to this theory, each electron senses the presence of the other by exchanging photons with it. Quantum chromodynamics is analogous to QED: the study of the color force between quarks is known as QCD. Photons, virtual photons, gluons, and virtual gluons are called messenger particles. Quark colors: red + yellow + blue = white, which is color neutral -> baryon. Antired + antiyellow + antiblue = white -> antibaryon. Red + antired, or yellow + antiyellow, or blue + antiblue -> white. Quark + antiquark -> meson. White energy plus information is the colour force which binds together the protons and neutrons to form nuclei.

The messenger particles that transmit the weak force between particles, however, are not (massless) photons but massive particles, identified by the symbols W and Z. The proton mass is only 0.938 GeV/c^2; by comparison, these are massive particles. If a stationary electron emits a photon and itself remains unchanged, energy is not conserved. The principle of conservation of energy is saved, however, by the uncertainty principle: when electron X emits a virtual photon, the overdraw in energy is quickly set right when that electron receives a virtual photon from electron Y, and the violation of the principle of conservation of energy for the electron pair is hidden by the inherent uncertainty.

This hidden energy produced by messenger particles sometimes creates a volume of energy and information in our Earth, related to the ionosphere, even when the local area network is unavailable for long-distance communication. When communication signals are sent into distant space for some time by different towers (whose strengths are not the same) or networks, then after one tower disconnects, one can still communicate from space if one finds the position of the messenger particles with enough energy to communicate.

When I was walking on the right side of a road, I felt some force pull me from right to left with tremendous strength, even though I wanted to stay on the right side. Another day, when I had gone to meet my Professor C. R. Panda, I felt a tremendous force working between the two of us, just like the force between the north and south poles of two magnets.
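The energy-time uncertainty argument quoted above can be put into numbers: an energy overdraw dE can persist only for roughly dt = hbar/dE. A minimal sketch, with the 1 eV overdraw chosen purely as an assumed example.

```python
# Rough energy-time uncertainty estimate: dt ~ hbar / dE.
hbar = 1.055e-34      # reduced Planck constant, J*s
eV = 1.602e-19        # one electron-volt in joules

dE = 1.0 * eV         # assumed energy "overdraw" of a virtual photon
dt = hbar / dE
print(dt)             # ~6.6e-16 s: the window in which the books must balance
```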
When a muon neutrino strikes the Earth it produces a muon only, and when an electron neutrino strikes it produces an electron only.
0 notes
siva3155 · 6 years ago
Text
400+ TOP ELECTRICAL Engineering Interview Questions and Answers Pdf- EEE
ELECTRICAL Engineering Interview Questions with Answers free download - EEE :-
CLICK HERE ----> ELECTRICAL ENGINEERING MCQs 1. What is electric traction? Electric traction means using electric power for a traction system (i.e. for railways, trams, trolleys etc.). Nowadays magnetic traction is also used for bullet trains, and basically DC motors are used for electric traction systems. 2. How can you start up a 40 W tube light with 230 V AC/DC without using any choke/coil? It is possible by means of an electronic choke; otherwise it is not possible to ionize the particles in the tube light with normal voltage. 3. What is "pu" in electrical engineering? Pu stands for per unit, and it is used in the single-line diagram of power distribution, which is like a huge electrical circuit with a number of components (generators, transformers, loads) of different ratings (in MVA and kV). To bring all the ratings onto a common platform we use the pu concept, in which, in general, the largest MVA and kV ratings among the components are taken as base values; all other component ratings are then referred to this base. Those values are called pu values (p.u. = actual value/base value; a numeric sketch follows below). 4. What operations are carried out in thermal power stations? Water is heated in the boiler by burning coal so that steam is obtained; this steam is made to hit the turbine, and the turbine, which is coupled with the generator, generates the electricity. 5. Why is a link provided in the neutral of an AC circuit and a fuse in the phase of an AC circuit? A link is provided at a neutral common point in the circuit, from which various connections are taken for the individual control circuits, and it is given in link form to withstand high amps. But in the case of the fuse in the phase of the AC circuit, the fuse rating is calculated for that particular circuit (i.e. load) only, so if any malfunction happens, only the fuse connected in that particular control circuit will blow off.
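As a minimal sketch of the per-unit arithmetic from Q3; the 100 MVA and 11 kV bases are assumed example ratings, not from the answer.

```python
# Per-unit conversion: pu = actual value / base value.
base_mva, base_kv = 100.0, 11.0   # assumed system base ratings

def to_pu(actual, base):
    # Per-unit values are dimensionless ratios against the chosen base.
    return actual / base

print(to_pu(50.0, base_mva))      # 0.5 pu on a 100 MVA base
print(to_pu(10.45, base_kv))      # 0.95 pu on an 11 kV base
```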
ELECTRICAL ENGINEERING Interview Questions 6. How is a tube light circuit connected and how does it work? A choke is connected at one end of the tube light and a starter is in series with the circuit. When supply is provided, the starter interrupts the AC supply cycle. Due to this sudden change in supply, the choke generates around 1000 volts, which is capable of knocking electrons loose inside the tube to start the electron flow. Once current passes through the tube, the starter circuit goes out of the loop; with no further change of supply, the choke voltage normalizes and the choke acts to limit the current. 7. What is a Marx circuit? It is used with generators for charging a number of capacitors in parallel and discharging them in series. It is used when the voltage required for testing is higher than the available voltage. 8. What is an encoder, and how does it function? An encoder is a device used to change a signal (such as a bit stream) or data into a code. The code may serve any of a number of purposes, such as compressing information for transmission or storage, encrypting or adding redundancies to the input code, or translating from one code to another. This is usually done by means of a programmed algorithm, especially if any part is digital, while most analog encoding is done with analog circuitry. 9. What are the advantages of speed control using a thyristor? Advantages: faster switching characteristics than a MOSFET, BJT, or IGBT; low cost; higher accuracy. 10. Why does the human body feel an electric shock, and why don't we feel any shock in a running electric train? Unfortunately our body is a pretty good conductor of electricity. The golden rule is that current takes the lowest-resistance path: if you have insulation under your feet, the circuit is not complete (wearing rubber footwear while doing repairs is advisable, as the footwear is a high-resistance path, so not much current flows through the body). The electric train is well insulated from its electrical system. 11. What is the principle of a motor? Whenever a current-carrying conductor is placed in a magnetic field, it produces a turning or twisting movement called torque. 12. Why don't birds get a shock when they sit on transmission lines or current-carrying wires? It is true that if birds touch a single line (phase or neutral) they don't get an electric shock; if birds touch two lines, the circuit is closed and they do get an electric shock. So if a human touches a single line (phase) he doesn't get a shock if he is in the air (not touching the ground). If he is standing on the ground and touches the line (phase), he will get a shock, because the ground we stand on acts like a line (a ground bed, like the neutral), and in most electric lines the neutral is grounded; so a human who touches the line closes the circuit between phase and neutral. 13. What is meant by armature reaction? The effect of armature flux on the main flux is called armature reaction. The armature flux may support the main flux or oppose it. 14. What happens if we give a 220 V DC supply to a bulb or tube light? Bulbs for AC are designed to operate such that they offer high impedance to the AC supply; normally they have low resistance. When a DC supply is applied, due to the low resistance, the current through the lamp would be so high that it may damage the bulb element. 15. Which motor has high starting torque and starting current: a DC motor, an induction motor, or a synchronous motor? The DC series motor has high starting torque.
We cannot start induction motors and synchronous motors on load, whereas we cannot start a DC series motor without load. 16. What is ACSR cable and where do we use it? ACSR means aluminium conductor, steel reinforced; this conductor is used in transmission and distribution. 17. What is a vacuum circuit breaker? Define it and state where the device is used. A breaker is normally used to break a circuit. While breaking the circuit, the contact terminals are separated, and at the moment of separation an air gap is formed between the terminals. Due to the existing current flow, the air in the gap is ionized and results in an arc. Various media are used to quench this arc in the respective CBs, but in a VCB the medium is vacuum. Since the gap in the CB is at vacuum pressure, the arc formation is interrupted. VCBs can be used up to the kV range. 18. What will happen when the power factor is leading in distribution of power? If there is a high power factor, i.e. if the power factor is close to one: losses in the form of heat will be reduced, the cable becomes less bulky, easy to carry, and very cheap to afford, and it also reduces overheating of transformers. 19. What is the one main difference between a UPS and an inverter? And between electrical engineering and electronics engineering? An uninterruptible power supply is mainly used for a short time; it gives backup according to the UPS VA rating. UPSs are also of two types: online and offline. An online UPS has high voltage and current for long-time backup with a high DC voltage, but a UPS starts with a 2 V DC, 7 A battery, whereas an inverter starts with 2 V, 24 V, up to 36 V DC and a 20 A to 80 A battery with long-time backup. 20. What is a 2-phase motor? A two-phase motor is a motor in which the starting winding and the running winding have a phase split, e.g. an AC servo motor, where the auxiliary winding and the control winding have a phase split of 90 degrees. 21. What are the advantages of VVVF drives over non-VVVF drives for EOT cranes? Smooth start and stop; no jerking of the load; exact positioning; better protection for the motor; high/low speed selection; reliability of the brake shoe; programmable brake control; easy circuitry; reduction in controls; increased motor life. 22. What is the significance of the vector group in power transformers? Every power transformer has a vector group listed by its manufacturer. Fundamentally it tells you how the windings are connected (delta or wye) and the phase difference between the current and voltage; e.g. DYN means delta primary, wye secondary, with the stated clock position of the secondary referred to the voltage. 23. Which type of AC motor is used in the fans (ceiling fans, exhaust fans, pedestal fans, bracket fans etc.) found in houses? It is a single-phase induction motor, mostly with a squirrel-cage rotor, and capacitor-start, capacitor-run. 24. Give two basic speed control schemes of a DC shunt motor. Flux control method: a rheostat is connected in series with the field winding to control the field current; by changing the current, the flux produced by the field winding can be changed, and since speed is inversely proportional to flux, the speed can be controlled. Armature control method: a rheostat is connected in series with the armature winding; by varying the resistance, the resistive drop (IaRa) can be varied, and since speed is directly proportional to Eb - IaRa, the speed can be controlled (a numeric sketch follows below). 25. What is the principle of a motor? Whenever a current-carrying conductor is placed in a magnetic field, it produces a turning or twisting movement called torque.
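A small numeric sketch of the two speed-control relations in Q24, using Eb = V - Ia*Ra and speed proportional to Eb/flux; all values are assumed for illustration.

```python
# DC shunt motor speed control: armature-resistance control and flux control.
V, Ia, Ra = 230.0, 20.0, 0.5    # assumed supply volts, armature amps, armature ohms
flux = 1.0                      # relative field flux

Eb = V - Ia * Ra                # back-EMF
speed = Eb / flux               # speed is proportional to Eb / flux
print(Eb, speed)                # 220.0 V, 220.0 (arbitrary speed units)

print((V - Ia * (Ra + 1.0)) / flux)  # extra armature resistance lowers the speed
print(Eb / 0.8)                      # weakening the field flux raises the speed
```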
26. What is meant by armature reaction? The effect of armature flux on the main flux is called armature reaction. The armature flux may support the main flux or oppose it. 27. Give two basic speed control schemes of a DC shunt motor. Flux control method: a rheostat is connected in series with the field winding to control the field current; by changing the current, the flux produced by the field winding can be changed, and since speed is inversely proportional to flux, the speed can be controlled. Armature control method: a rheostat is connected in series with the armature winding; by varying the resistance, the resistive drop (IaRa) can be varied, and since speed is directly proportional to Eb - IaRa, the speed can be controlled. 28. What is the difference between a synchronous generator and an asynchronous generator? Simply put, a synchronous generator supplies both active and reactive power, but an asynchronous generator (induction generator) supplies only active power and absorbs reactive power for magnetizing. This type of generator is used in windmills. 29. What is the polarization index value (PI value), and what is a simple definition of polarization index? It is the ratio of the insulation resistance (IR), i.e. the megger value, at 10 minutes to the insulation resistance at 1 minute. It ranges from 5-7 for new motors, and normally, for a motor to be in good condition, it should be greater than 2.5. 30. Why are synchronous generators used for the production of electricity? Synchronous machines have the capability to work at different power factors (that is, with different reactive power) by varying the field EMF; hence synchronous generators are used for the production of electricity. 31. What is the difference between a synchronous generator and an asynchronous generator? Simply put, a synchronous generator supplies both active and reactive power, but an asynchronous generator (induction generator) supplies only active power and absorbs reactive power for magnetizing. This type of generator is used in windmills. 32. 1 ton is equal to how many watts? 1 ton = 12,000 BTU/hr, and converting BTU/hr to horsepower, 12,000 x 0.0003929 = 4.715 hp; therefore 1 ton = 4.715 x 0.746 ≈ 3.5 kW (the arithmetic is spelled out below). 33. Why are synchronous generators used for the production of electricity? Synchronous machines have the capability to work at different power factors (that is, with different reactive power) by varying the field EMF; hence synchronous generators are used for the production of electricity. 34. List the types of DC generator. DC generators are classified into two types: separately excited DC generators and self-excited DC generators, the latter being further classified into (1) series, (2) shunt, and (3) compound (which is further classified into cumulative and differential). 35. What is an Automatic Voltage Regulator (AVR)? AVR is an abbreviation for Automatic Voltage Regulator. It is an important part of synchronous generators: it controls the output voltage of the generator by controlling its excitation current, and thus it can control the output reactive power of the generator. 36. What is an exciter and how does it work? There are two types of exciters: static exciters and rotary exciters. The purpose of an exciter is to supply the DC excitation voltage to the field poles of the generator. A rotary exciter is an additional small generator mounted on the shaft of the main generator. If it is a DC generator, it supplies DC to the rotor poles through slip rings and brushes (conventional alternator). If it is an AC exciter, its output is rectified by rotating diodes and supplies DC to the main field poles; an AC exciter is an AC generator whose field windings are stationary and whose armature rotates.
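The arithmetic behind Q32, spelled out in Python; the BTU/hr-to-hp and hp-to-kW factors are standard conversion constants.

```python
# One ton of refrigeration expressed in kW.
btu_per_hr = 12000.0            # 1 ton of refrigeration
hp = btu_per_hr * 0.0003929     # 1 BTU/hr = 0.0003929 hp
kw = hp * 0.746                 # 1 hp = 0.746 kW
print(hp, kw)                   # ~4.71 hp, ~3.5 kW
```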
The initial voltage is built up by residual magnetism; this provides the starting excitation for the generator. 37. What is the difference between a four-point starter and a three-point starter? In the four-point starter the shunt connection is provided separately from the line, whereas in the three-point starter it is connected with the line, which is the drawback of the three-point starter. 38. Why is the VCB used in high-voltage transmission systems? Why can't we use an ACB? The point is that vacuum has a higher arc-quenching property than air, because in a VCB the dielectric strength is equal to 8 times that of air. That is why vacuum is always used in HT breakers and air in LT breakers. 39. What is the difference between a surge arrester and a lightning arrester? An LA is installed outside and the effect of lightning is grounded, whereas a surge arrester is installed inside panels, comprising resistors which consume the energy and nullify the effect of the surge. 40. What happens if I connect a capacitor to a generator load? Connecting a capacitor across a generator always improves the power factor, but whether it helps depends upon the engine capacity of the alternator; otherwise the alternator will be overloaded by the extra watts consumed due to the improvement in pf. Secondly, don't connect a capacitor across an alternator while it is picking up or without any other load. 41. Why do capacitors work on AC only? Generally a capacitor offers infinite resistance to DC components (i.e. it blocks the DC components) and allows the AC components to pass through. 42. Explain the working principle of a circuit breaker. A circuit breaker is a device which makes or breaks the circuit. It has two contacts, namely a fixed contact and a moving contact. Under normal conditions the moving contact touches the fixed contact, forming a closed path for the flow of current. During abnormal and faulty conditions (when the current exceeds the rated value) an arc is produced between the fixed and moving contacts, and thereby an open circuit is formed. The arc is extinguished by arc-quenching media like air, oil, or vacuum. 43. How many types of cooling systems are there in transformers? ONAN (oil natural, air natural); ONAF (oil natural, air forced); OFAF (oil forced, air forced); ODWF (oil direct, water forced); OFAN (oil forced, air natural). 44. What is the function of anti-pumping in a circuit breaker? When the breaker is closed once by the close push button, the anti-pumping contactor prevents re-closing of the breaker by the close push button if it is already closed. 45. What is a stepper motor and what are its uses? A stepper motor is an electrical machine which acts upon input pulses applied to it. It is a type of synchronous motor which runs in steps in either direction instead of running through a complete cycle; hence it is used in automation parts. 46. How do you calculate the capacitor bank value required to maintain unity power factor, with a suitable example? kVAR = kW x (tan(cos^-1 pf_existing) - tan(cos^-1 pf_desired)); see the sketch below. 47. Tell me in detail about CT and PT. (Company: Reliance) The term CT means current transformer and the term PT means potential transformer. They are used in circuits where measurement of high voltage and high current is involved, particularly when a measuring device like a voltmeter or ammeter cannot measure such a high value directly, because the large torque produced by such a high value could damage the measuring device; so CTs and PTs are introduced in the circuit.
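Here is the Q46 capacitor-bank formula as runnable Python; the 100 kW load and the 0.7 existing power factor are assumed example values.

```python
# Capacitor bank sizing: kVAR = kW * (tan(acos(pf_e)) - tan(acos(pf_d))).
from math import acos, tan

kw, pf_e, pf_d = 100.0, 0.7, 1.0   # assumed load, existing pf, desired pf

kvar = kw * (tan(acos(pf_e)) - tan(acos(pf_d)))
print(kvar)                        # ~102 kVAR needed to reach unity power factor
```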
They work on the same principle as a transformer, which is based on the linkage of the electromagnetic flux produced by the primary with the secondary, and they work at the ratio for which they are designed. E.g. if a CT has a ratio of 5000/5 A and has to measure a primary current of 8000 A, then the secondary reading is 8000 x 5/5000 = 8 A; this result is given to the ammeter, and after measuring 8 A we can calculate the primary current. The operation of a PT is the same, but for measuring voltage. 48. There are a transformer and an induction machine, and the two have the same supply. For which device will the load current be maximum, and why? The motor has the maximum load current compared to that of the transformer, because the motor consumes real power, while the transformer only produces the working flux and does not consume it; hence the no-load current of the transformer is due to core loss, so it is minimum. 49. What is power factor? Should it be high or low, and why? Power factor should be high in order to get smooth operation of the system; a low power factor means more losses. It is the ratio of true power to apparent power, and ideally it should be 1. If it is too low, cable overheating and equipment overloading will occur; if the power factor is leading, the load acts as a capacitor and starts feeding the source, which can cause tripping. (If pf is poor, e.g. 0.17, then to meet the actual power the load has to draw more current (V constant), resulting in more losses; if pf is good, e.g. 0.95, the load draws less current (V constant), resulting in fewer losses.) 50. What is the difference between an isolator and a circuit breaker? An isolator is an off-load device used for isolating the downstream circuits from the upstream circuits for the purpose of maintenance on the downstream circuits. It is manually operated and does not contain any solenoid, unlike a circuit breaker. It should not be operated while carrying load: first the load on it must be made zero, and then it can be safely operated. In its specification only the rated current is given. A circuit breaker, by contrast, is an on-load automatic device used for breaking the circuit in case of abnormal conditions like short circuit, overload etc.; it has three specifications: (1) rated current, (2) short-circuit breaking capacity, and (3) instantaneous tripping current. 51. What is a Buchholz relay, and what is its significance in a transformer? A Buchholz relay is a device used for the protection of a transformer from its internal faults; it is a gas-actuated relay. Whenever any internal fault occurs in a transformer, the Buchholz relay at once sounds an alarm for some time; if the transformer is isolated from the circuit it stops sounding by itself, otherwise it trips the circuit by its own tripping mechanism. 52. What is an SF6 circuit breaker? SF6 is sulphur hexafluoride gas; if this gas is used as the arc-quenching medium in a circuit breaker, it is called an SF6 CB. 53. What is the Ferranti effect? The output voltage is greater than the input voltage, i.e. the receiving-end voltage is greater than the sending-end voltage. 54. What is meant by insulation voltage in cables? Explain. It is the property of a cable by virtue of which it can withstand the applied voltage without rupturing; it is known as the insulation level of the cable. 55. Why do we do two types of earthing on a transformer (i.e. body earthing and neutral earthing), and what are their functions? I am going to install a oo kVA transformer and a 380 kVA DG set; what should the earthing value be? The two types of earthing are known as equipment earthing and system earthing.
In equipment earthing, the body (non-conducting part) of the equipment should be earthed to safeguard human beings. System earthing: in this, the neutral of the supply source (transformer or generator) should be grounded. With this, in case of unbalanced loading the neutral will not be shifted, so unbalanced voltages will not arise, and we can protect the equipment as well. Based on the size of the equipment (transformer or alternator) and the selection of the relaying system, earthing is further classified into directly earthed, impedance earthing, and resistive (NGR) earthing. 56. What is the difference between MCB and MCCB, and where can they be used? An MCB is a miniature circuit breaker, which is thermally operated and used for short-circuit protection in small-current-rating circuits. An MCCB (moulded-case circuit breaker) is thermally operated for overload current and magnetically operated for instantaneous tripping in short-circuit conditions; under-voltage and under-frequency protection may be built in. Normally it is used where the normal current is more than 100 A. 57. Where should the lightning arrester be placed in distribution lines? Near distribution transformers, on outgoing feeders of 11 kV, on incoming feeders of 33 kV, and near power transformers in substations. 58. Define an IDMT relay. It is an inverse definite minimum time relay. In an IDMT relay the operating time is inversely proportional to the fault current, with a characteristic minimum time after which the relay operates; it is inverse in the sense that the tripping time decreases as the magnitude of the fault current increases. 59. What are the transformer losses? Transformer losses have two sources: copper loss and magnetic loss. Copper losses are caused by the resistance of the wire (I^2R). Magnetic losses are caused by eddy currents and hysteresis in the core. Copper loss is constant after the coil has been wound and is therefore a measurable loss. Hysteresis loss is constant for a particular voltage and current. Eddy-current loss, however, is different for each frequency passed through the transformer. 60. What is the count of HVDC transmission lines in India? At present there are three HVDC transmission lines in India: Chandrapur to Padghe (Mumbai) (1500 MW at ±500 kV DC), Rihand to Delhi (1500 MW at ±500 kV DC), and Talcher to Kolar (2000 MW). 61. What is meant by regenerative braking? When the supply is cut off from a running motor, it still continues running due to inertia. In order to stop it quickly we place a load (resistor) across the armature winding, while the motor maintains a continuous field supply, so that the back-EMF voltage is applied across the resistor, and due to this load the motor stops quickly. (Strictly, braking through a resistor in this way is rheostatic or dynamic braking; in regenerative braking the energy is fed back to the supply.) 62. Why is the starting current high in a DC motor? In DC motors the voltage equation is V = Eb + IaRa (V = terminal voltage, Eb = back EMF in the motor, Ia = armature current, Ra = armature resistance). At starting, Eb is zero; therefore Ia = V/Ra, where Ra is very small, around 0.01 ohm, so Ia becomes enormously large. 63. What are the advantages of a star-delta starter with an induction motor? (1) The main advantage of using the star-delta starter is the reduction of current during starting: the starting current is reduced to one third of the direct-on-line starting current (see the sketch below). (2) Since the starting current is reduced, the voltage drops in the system during the starting of the motor are reduced. 64. Why are delta-star transformers used for lighting loads?
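A numeric sketch of the one-third reduction cited in Q63: in star, each winding sees the line voltage divided by sqrt(3), so the line current drops to a third of the direct-on-line (delta) value. The line voltage and winding impedance below are assumed.

```python
# Star-delta starting: line current in star is 1/3 of the DOL (delta) value.
from math import sqrt

V_line, Z = 400.0, 2.0                 # assumed line volts and per-winding ohms

I_dol  = sqrt(3) * (V_line / Z)        # delta/DOL: each winding sees full line voltage
I_star = (V_line / sqrt(3)) / Z        # star: each winding sees V_line/sqrt(3)
print(I_star / I_dol)                  # 0.333...: exactly one third
```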
For lighting loads, a neutral conductor is a must, and hence the secondary must be star-wound; this lighting load is always unbalanced across the three phases. To minimize the current unbalance in the primary we use a delta winding in the primary; so the delta/star transformer is used for lighting loads. 65. Why, in a three-pin plug, is the earth pin thicker and longer than the other pins? It follows from R = ρl/a, where the area (a) is inversely proportional to the resistance (R): if a increases, R decreases, and if R is low, the leakage current will take this low-resistance path, so the earth pin should be thicker (a numeric sketch follows below). It is longer because the earth pin should be the first to make the connection and the last to disconnect; this assures safety for the person who uses the electrical instrument. 66. Why can't a series motor be started on no load? A series motor cannot be started without load because of its high starting torque. Series motors are used in trains, cranes, etc. 67. Why can't an ELCB work if the N input of the ELCB is not connected to ground? An ELCB is used to detect earth leakage faults. Once the phase and neutral are connected through an ELCB, the current flows through the phase and the same current has to return through the neutral, so the resultant current is zero. Once there is a ground fault on the load side, current from the phase passes directly through earth and does not return through the neutral via the ELCB. That means current goes out on one side and does not return; because of this difference in current the ELCB trips, and it safeguards the other circuits from faulty loads. If the neutral is not grounded, the fault current will definitely be high, and that full fault current will come back through the ELCB, so there will be no difference in current. 68. How is electrical power generated by an AC generator? For the generation of electric power we need a prime mover which supplies mechanical power input to the alternator; it can be a steam turbine or a hydro turbine. When the poles of the rotor move under the armature conductors placed on the stator, the field flux cuts the armature conductors; therefore voltage is generated, and it is sinusoidal in nature due to the polarity change of the rotor poles (i.e. N-S-N-S). 69. Why does an AC solenoid valve attract the plunger even when we interchange the terminals? Do the poles change? Yes, because the poles change every half-cycle of the AC voltage: the polarity of the AC voltage is continuously changing every half cycle. So interchanging the terminals in an AC system does not make any difference, and that is why the AC solenoid attracts the plunger even when its terminals are interchanged. 70. What is derating? Why is it necessary? Is it the same for drives, motors, and cables? The current-carrying capacity of cables changes depending on the site temperature (location of the site), type of run (through duct, trench, buried, etc.), number of trays, depth of trench, and distance between cables. Considering these conditions, the actual current-carrying capacity of the cable is reduced below the rated current-carrying capacity (which is given in the cable catalogue); this is called derating. 71. Why is a temperature-rise test conducted on busbars and isolators? Busbars and isolators are rated for continuous power flow, which means they carry heavy currents which raise their temperature; so it is necessary to test these devices for temperature rise. 72. When voltage increases, current also increases; then what is the need for an over-voltage relay and an over-current relay?
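The R = ρl/a reasoning in Q65 as a small sketch; the copper resistivity is a standard constant, while the pin dimensions are assumed.

```python
# Conductor resistance R = rho * l / A: a thicker pin has lower resistance.
rho = 1.7e-8                  # resistivity of copper, ohm*m
l = 0.02                      # assumed pin length, m

def resistance(area_mm2):
    # Convert mm^2 to m^2, then apply R = rho*l/A.
    return rho * l / (area_mm2 * 1e-6)

print(resistance(2.0))        # thinner pin
print(resistance(4.0))        # thicker earth pin: half the resistance
```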
Can we measure over-voltage and over-current by measuring current only? No. We can't sense over-voltage by just measuring the current, because the current increases not only for over-voltage but also for under-voltage (as most loads are non-linear in nature). So over-voltage protection and over-current protection are completely different. An over-voltage relay is meant for sensing over-voltages and protecting the system from insulation breakdown and fire. An over-current relay is meant for sensing any internal short circuit, overload condition, or earth fault, thereby reducing system failure and the risk of fire. So, for better protection, the system should have both over-voltage and over-current relays. 73. If one lamp is connected between two phases, will it glow or not? If the voltage between the two phases is equal to the lamp voltage, then the lamp will glow. When the voltage difference is big it will damage the lamp, and when the difference is smaller the lamp will glow, depending on the type of lamp. 74. How do you select a cable size (Cu and Al) for a particular load? First calculate the load current; then derate that current using the derating factor (depending on site conditions and the laying of the cable); then choose the cable size from the cable catalogue considering the derated current. After that, measure the length of cable required from the supply point to the load point and calculate the voltage drop, which should be at most 3% (the resistance and reactance of the cable are found in the catalogue for the selected size); if the voltage drop is greater than 3%, choose the next higher size of cable (a sketch of this check follows below). 75. What are HRC fuses and where are they used? HRC stands for "high rupturing capacity" fuse, and it is used in distribution systems for electrical transformers. 76. Which power plant has a high load factor? All base-load power plants have a high load factor. If we use high-efficiency power plants to supply the base load, we can reduce the cost of generation. Hydel power plants have a higher efficiency than thermal and nuclear power plants. 77. Mention the methods for starting an induction motor. The different methods of starting an induction motor: DOL (direct-on-line starter), star-delta starter, autotransformer starter, resistance starter, series reactor starter. 78. What is the difference between earth resistance and earth electrode resistance? Only one of the terminals is evident in earth resistance. In order to find the second terminal we should have recourse to its definition: earth resistance is the resistance existing between the electrically accessible part of a buried electrode and another point of the earth which is far away. The resistance of the electrode has the following components: (a) the resistance of the metal and that of the connection to it; (b) the contact resistance of the surrounding earth to the electrode. 79. Explain the use of a lockout relay in HT voltage. A lockout relay is generally placed in line before or after the e-stop switch so the power can be shut off at one central location. This relay is powered by the same electrical source as the control power and is operated by a key-lock switch. The relay itself may have up to 24 contact points within the unit, which allows the control power for multiple machines to be locked out by the turn of a single key switch. 80. What is the power factor of an alternator at no load? At no load, the synchronous impedance of the alternator is responsible for creating the angle difference, so it should be zero lagging, like an inductor. 81.
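A minimal sketch of the Q74 selection loop: compute the run's voltage drop and reject any size that exceeds the 3% limit. The resistance-per-km table and run data below are assumed, not taken from a real catalogue.

```python
# Cable sizing by voltage-drop check: drop = I * R_per_km * length, limit 3%.
V_supply = 400.0                 # assumed system voltage, V
I_load = 100.0                   # assumed derated load current, A
length_km = 0.2                  # assumed run length

for size, r_ohm_per_km in [(25, 0.727), (35, 0.524), (50, 0.387)]:  # assumed data
    drop = I_load * r_ohm_per_km * length_km
    pct = 100.0 * drop / V_supply
    verdict = "OK" if pct <= 3.0 else "too small"
    print(size, "mm^2:", round(pct, 2), "% drop,", verdict)
```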
Explain how to determine capacitor tolerance codes. In electronic circuits, the capacitor tolerance can be determined by a code that appears on the casing. The code is a letter that often follows a three-digit number (such as 130Z). The first two are the 1st and 2nd significant digits and the third is a multiplier code. Most of the time the last digit tells you how many zeros to write after the first two digits, and these are read as picofarads. 82. Why do most analog output devices have an output range of 4 to 20 mA and not 0 to 20 mA? 4-20 mA is a standard range used to indicate measured values for any process. The reason 4 mA is chosen instead of 0 mA is fail-safe operation. For example, a pressure instrument gives an output of 4 mA to indicate 0 psi, up to 20 mA to indicate 100 psi (full scale). Due to any problem in the instrument (e.g. a broken wire), its output drops to 0 mA; so with a 4-20 mA range we can differentiate whether the reading is due to a broken wire or due to 0 psi. 83. Two bulbs of 100 W and 40 W respectively are connected in series across a 230 V supply. Which bulb will glow brighter and why? Since the two bulbs are in series they carry the same current, but the resistance of the 40 W bulb is greater (R = V^2/P at the rated voltage), so the voltage across the 40 W bulb is more (V = IR) and the 40 W bulb will glow brighter (the arithmetic is worked below). 84. What is meant by knee-point voltage? Knee-point voltage is calculated for current transformers and is a very important factor in choosing a CT. It is the voltage at which a CT gets saturated (CT: current transformer). 85. What is a reverse power relay? Reverse power flow relays are used in the protection of generating stations. A generating station is supposed to feed power to the grid; in case the generating units are off and there is no generation in the plant, the plant may take power from the grid. To stop the flow of power from the grid to the generator we use a reverse power relay. 86. What will happen if a DC supply is given to the primary of a transformer? Mainly a transformer has high inductance and low resistance. In the case of a DC supply there is no changing flux, so the inductance offers no opposition; only the resistance acts in the electrical circuit. So a high current will flow through the primary side of the transformer, and for this reason the coil and insulation will burn out. 87. What is the difference between isolators and electrical circuit breakers? What is a busbar? Isolators are mainly for switching purposes under normal conditions, but they cannot operate under fault conditions; they are actually used for isolating the CBs for maintenance, whereas a CB gets activated under fault conditions according to the fault detected. A busbar is nothing but a junction where the power is distributed to independent loads. 88. What are the advantages of a freewheeling diode in a full-wave rectifier? It reduces the harmonics, and it also reduces sparking and arcing across the mechanical switch, so that it reduces the voltage spikes seen in an inductive load. 89. What is the function of an interposing current transformer? The main function of an interposing current transformer is to balance the currents supplied to the relay where there would otherwise be an imbalance due to the ratios of the main current transformers. Interposing current transformers are equipped with a wide range of taps that can be selected by the user to achieve the balance required. 90.
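The series-bulb argument in Q83 as arithmetic; the ratings come from the question itself, with R = V^2/P evaluated at the rated voltage.

```python
# Two bulbs in series: the same current flows, so power goes as I^2 * R.
V = 230.0
R100 = V**2 / 100.0      # 529 ohms for the 100 W bulb
R40  = V**2 / 40.0       # 1322.5 ohms for the 40 W bulb

I = V / (R100 + R40)     # common series current
print(I**2 * R100)       # ~8.2 W dissipated in the 100 W bulb
print(I**2 * R40)        # ~20.4 W in the 40 W bulb -> it glows brighter
```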
What are motor-generator sets, and what are the different ways a motor-generator set can be used? Motor-generator sets are a combination of an electrical generator and an engine mounted together to form a single piece of equipment. A motor-generator set is also referred to as a genset or, more commonly, a generator. The motor-generator set can be used in the following different ways: alternating current (AC) to direct current (DC); DC to AC; DC at one voltage to DC at another voltage; AC at one frequency to AC at another, harmonically related frequency. 91. Define a power quality meter. Power quality meters are common in many industrial environments, and small units are now available for home use as well. They give operators the ability to monitor both the perturbations on the power supply and the power used within a building, or by a single machine or appliance. In some situations, equipment function and operation are monitored and controlled from a remote location, where communication is via modem or high-speed communication lines. From this we can understand the importance of power measurement through power quality meters. 92. What is the difference between a digital phase converter and an ordinary phase converter? Digital phase converters are a recent development in phase-converter technology that utilizes proprietary software in a powerful microprocessor to control solid-state power-switching components. This microprocessor, called a digital signal processor (DSP), monitors the phase-conversion process, continually adjusting the input and output modules of the converter to maintain perfectly balanced three-phase power under all load conditions. 93. Explain the operation of a variable frequency transformer. A variable frequency transformer is used to transmit electricity between two asynchronous alternating-current domains. It is a doubly-fed electric machine resembling a vertical-shaft hydroelectric generator with a three-phase wound rotor, connected by slip rings to one external AC power circuit. A direct-current torque motor is mounted on the same shaft. Changing the direction of torque applied to the shaft changes the direction of power flow; with no applied torque, the shaft rotates due to the difference in frequency between the networks connected to the rotor and stator. The variable frequency transformer behaves as a continuously adjustable phase-shifting transformer and allows control of the power flow between two networks. 94. What is the main use of a rotary phase converter? A rotary phase converter converts single-phase power into true balanced three-phase power, so it is often called a single-phase to three-phase converter. Often the advantages of three-phase motors and other three-phase equipment make it worthwhile to convert single-phase to three-phase, so that small and large consumers who do not want to pay the extra cost of a three-phase service may still use three-phase equipment. 95. What are the uses of a switch-mode power converter in practice?
A switch-mode power converter can be used in the following five different ways: to step down an unregulated DC input voltage to produce a regulated DC output voltage, using a circuit known as a buck converter or step-down SMPS; to step up an unregulated DC input voltage to produce a regulated DC output voltage, using a circuit known as a boost converter or step-up SMPS; to step up or step down an unregulated DC input voltage and produce a regulated DC output voltage; to invert the input DC voltage, usually using a circuit such as the Cuk converter; and to produce multiple DC outputs, using a circuit such as the fly-back converter. 96. Which type of oil is used as transformer oil? Transformer oil, or insulating oil, is usually a highly refined mineral oil that is stable at high temperatures and has excellent electrical insulating properties. It is used in oil-filled transformers, some types of high-voltage capacitors, fluorescent lamp ballasts, and some types of high-voltage switches and circuit breakers. Its functions are to insulate, suppress corona and arcing, and serve as a coolant. Well into the 1970s, polychlorinated biphenyls (PCBs) were often used as a dielectric fluid, since they are not flammable. They are toxic, however, and under incomplete combustion can form highly toxic products such as furans. Starting in the early 1970s, concerns about the toxicity of PCBs led to their banning in many countries. Today, non-toxic, stable silicone-based or fluorinated hydrocarbons are used, where the added expense of a fire-resistant liquid offsets the additional building cost for a transformer vault. Combustion-resistant vegetable-oil-based dielectric coolants and synthetic pentaerythritol tetra fatty acid (C7, C8) esters are also becoming increasingly common as alternatives to naphthenic mineral oil. Esters are non-toxic to aquatic life, readily biodegradable, and have a lower volatility and higher flash point than mineral oil. 97. If we apply 2334 A at 540 V on the primary side of a 1.125 MVA step-up transformer, what will be the secondary current if the secondary voltage is 11 kV? As we know, the voltage and current relation for a transformer is V1/V2 = I2/I1. We know V1 = 540 V, V2 = 11 kV (11000 V), I1 = 2334 A. Putting these values into the relation, 540/11000 = I2/2334, so I2 = 114.6 A (restated in code below). 98. What points are to be considered for MCB (miniature circuit breaker) selection? I(load) x 1.25 = I(max), the maximum current; MCB specifications are based on the maximum current flow in the circuit. 99. What is the full form of kVAR? We know there are three types of power in electrical systems: active, apparent and reactive. kVAR stands for kilovolt-ampere reactive. 100. What is excitation? Excitation is applying an external DC voltage to the field coil in DC motors. ELECTRICAL Interview Questions: 101. In a three-pin plug rated 6 A, 220 V AC, why is the earth pin diameter larger than that of the other two pins, and what is its purpose? Because the resistance of a conductor is inversely proportional to its cross-sectional area (R = ρl/a: as the area of the conductor increases, its resistance decreases). So if any short circuit occurs in the system, the high current is bypassed through the low-resistance earth terminal. 102. What is the difference between megger test equipment and contact resistance meter test instruments? Megger test equipment is used to measure cable electrical resistance, conductor continuity, and phase identification, whereas contact resistance meter test instruments are used to measure low resistances, like those of relays and contactors. 103. When do we connect a large capacitor bank in series?
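The Q97 calculation restated in code; the values come from the question, and an ideal transformer with V1*I1 = V2*I2 is assumed.

```python
# Ideal-transformer current transfer: V1 * I1 = V2 * I2.
V1, I1, V2 = 540.0, 2334.0, 11000.0

I2 = V1 * I1 / V2
print(I2)                # ~114.6 A on the 11 kV side
```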
We connect a large capacitor bank in series to improve the voltage at the load end of a transmission line when there is considerable voltage drop along the line due to its high impedance. So, in order to bring the voltage at the load terminals within its limits (i.e. ±6% of the rated terminal voltage), a large series capacitor bank is used. 104. What is the electrical diversity factor in electrical installations? The electrical diversity factor is the ratio of the sum of the individual maximum demands of the various subdivisions of a system (or part of a system) to the maximum demand of the whole system (or part of the system) under consideration. The electrical diversity factor is usually more than one. 105. Why is the field rheostat kept in the minimum position while the armature rheostat is at the maximum position? In motors, at the time of starting, the armature resistance is introduced to reduce the high starting current, and the field resistance is kept minimum to have high starting torque. 106. Why does a humming sound occur in an HT transmission line? This humming sound is caused by ionization (breakdown of air into charged particles) of the air around the transmission conductor. This effect is called the corona effect, and it is considered a power loss. 107. Explain what rated speed is. The speed of the motor when it is drawing normal (rated) current is called the rated speed. It is the speed at which the machine draws its design current and gives maximum efficiency. 108. What is the difference between a resistance grounding system and a resistance earthing system? Resistance grounding means connecting the neutral point of the load to the ground to carry the residual current through the neutral to the ground in case of unbalanced conditions, whereas resistance earthing is done on a piece of electrical equipment in order to protect the equipment in the event of a fault in the system. 109. Why should the frequency be 50 Hz or 60 Hz only, and not other values like 45, 56 or 95 Hz? And why should we maintain the frequency constant, and if so, why only at 50 Hz or 60 Hz? We could have the frequency at any value we like, but then we would also have to make our own motors, high-voltage transformers and any other equipment we wanted to use. We maintain the frequency at 50 Hz or 60 Hz because the world maintains a standard at 50/60 Hz and equipment is made to operate at these frequencies. 110. How do you determine alternating current frequency? Using the zero crossings of the sine wave to trigger a monostable (pulse generator) is one way to determine alternating-current frequency. A fixed-width pulse is generated for each cycle; thus there are n pulses per second, each with a constant energy. The more pulses there are per second, the more the energy. The pulses are integrated (filtered or averaged) to get a steady DC voltage which is proportional to frequency. This voltage can then be displayed on an analogue or digital voltmeter, indicating the frequency. This method is more suitable than a direct counter, as it can achieve good accuracy in a second or so. 111. Why is electricity in India distributed in multiples of 11, like 11 kV, 22 kV, 33 kV? The transformer induced-voltage equation contains the factor 4.44: E = 4.44 x f x T x phi, where E is the induced EMF per phase, T the number of turns, f the frequency, and phi the maximum flux per pole (the equation is evaluated below). From the equation we see that E is proportional to 4.44, which is in turn a multiple of 11; so transmission voltage is always a multiple of 11. 112. Why do we use an AC system in India and not DC?
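The EMF equation quoted in Q111, evaluated for assumed example values of frequency, turns and flux.

```python
# Transformer EMF equation: E = 4.44 * f * T * phi_max, volts per phase.
f = 50.0        # Hz
T = 100         # assumed number of turns
phi = 0.01      # assumed peak flux per pole, Wb

E = 4.44 * f * T * phi
print(E)        # 222.0 V
```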
Firstly, the output of power stations comes from a rotary turbine, which by its nature produces AC, so no power electronics are required to convert to DC. Secondly, it is much easier to change the voltage of AC electricity for transmission and distribution. Thirdly, the cost of plant associated with AC transmission (circuit breakers, transformers etc.) is much lower than the equivalent for DC transmission. AC transmission also provides a number of technical advantages: when a fault occurs on the network, a large fault current flows; in an AC system this is much easier to interrupt, as the sine-wave current naturally tends to zero at some point, making the current easier to break. 113. Which type of motor is used in trains? What is the rating of the supply used? Explain the working principle. DC series motors are used in trains to get high starting torque while starting the trains, and the operating voltage is 1500 V DC. 114. Are battery banks connected in series or parallel, and why? Battery banks are always connected in series in order to get a multiplied voltage while the Ah (ampere-hour) capacity remains the same. Ex.: 24 batteries of 2 V, 200 Ah connected in series will give a 48 V, 200 Ah output (see the sketch below). 115. What is inrush current? Inrush current is the current drawn by a piece of electrically operated equipment when power is first applied. It can occur with AC- or DC-powered equipment, and can happen even with low supply voltages. 116. In a tap-changing transformer, where are the taps connected: on the primary side or the secondary side? Tappings are connected on the high-voltage winding side, because of the lower current there. If we connected tappings to the low-voltage side, sparks would be produced during the tap-changing operation due to the high current. 117. Why are transformer ratings in kVA? Since the power factor of a transformer depends on the load, we only define the VA rating and do not include the power factor. In the case of motors, the power factor depends on the construction, and hence the rating of motors is in kW and includes the power factor. 118. Define the difference between a fuse and a breaker. A fuse burns out when overcurrent flows in the circuit, whereas a breaker just opens (it does not burn) when overcurrent flows. A fuse can be used only once, but a breaker can be used multiple times. 119. What is the difference between delta-delta and delta-star transformers? A delta-delta transformer is used at a generating station or a receiving station for a change of voltage, i.e. generally where the voltage is high and the current is low. Delta-star is a distribution kind of transformer, where the star-point neutral of the secondary is taken as a return path, and this configuration is used for stepping the voltage down. 120. A capacitor is a load-free component, but why does the ammeter show current when the capacitor bank breaker closes? As we know, electrical loads are of two types, active and reactive. A capacitor is a reactive load, which is not considered a real-power load, and its component is I sin φ. The meter is designed to read the RMS value of the current, and because of this the meter shows the RMS current drawn by the capacitor. 121. What is electric traction? Traction means using electric power for a traction system, i.e. for railways, trams, trolleys etc.; electric traction means the use of electricity for all of these. Nowadays magnetic traction is also utilised for bullet trains. Essentially, DC motors are utilized for electric traction systems. 122. What is "pu" in EE? Pu stands for per unit in power systems (pu = actual value/base value). 123. Define a stepper motor.
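The Q114 battery-bank rule in code, using the 2 V / 200 Ah cells from the answer's own example.

```python
# Series battery bank: voltages add, ampere-hour capacity stays the same.
cell_v, cell_ah, n = 2.0, 200.0, 24   # cell rating and count from the example

bank_v = n * cell_v      # voltages add in series
bank_ah = cell_ah        # Ah capacity is unchanged in a single series string
print(bank_v, bank_ah)   # 48.0 V, 200.0 Ah
```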
123. Define a stepper motor. What is the use of a stepper motor?
A motor that moves in discrete steps in response to applied input pulses is called a stepper motor. It falls under the category of synchronous motors and often does not depend on a complete supply cycle; it can step in either direction. For this reason it is mainly used in automation equipment.

124. What is a differential amplifier? Also, explain CMRR.
Differential amplifier: an amplifier used to amplify the voltage difference between two input lines, neither of which is grounded. This reduces the amount of noise injected into the amplifier, because any noise appearing simultaneously on both input terminals is rejected by the amplifying circuitry as a common-mode signal.
CMRR: the common-mode rejection ratio is the ratio of the differential voltage gain to the common-mode voltage gain. For a perfect differential amplifier the CMRR would be infinite, because the common-mode voltage gain would then be zero.

125. What is the use of a lockout relay on HT voltage?
A lockout relay is generally placed in line before or after the e-stop switch so that power can be shut off at one central location. The relay is powered from the same source as the control power and is operated by a key-lock switch. The relay itself may have up to 24 contact points, which allows the control power for multiple machines to be locked out by the turn of a single key switch.

126. How can you start a 40 W tube light from 230 V AC/DC without using any choke or coil?
It is possible with an electronic choke; otherwise the gas in the tube cannot be ionized at normal voltage.

127. In what domain do Laplace transforms work, and what behaviour can they predict about a system?
Laplace transforms work in the s-domain. They provide a method to find the position, acceleration or voltage a system will have.

128. What is the role of armature reaction in the magnetic fluxes?
The armature flux plays an important role in the running condition: it may oppose the main flux or support it. This effect of the armature flux supporting or opposing the main flux is called armature reaction.

129. Explain thin-film resistors and wire-wound resistors.
Thin-film resistors: a thin film of resistive material is deposited on an insulating substrate. The desired value is obtained either by trimming the layer thickness or by cutting helical grooves of suitable pitch along its length; during this process the resistance is monitored closely, and the cutting of grooves stops as soon as the desired value is reached.
Wire-wound resistors: a length of wire wound around an insulating cylindrical core. The wire is made of materials such as Constantan and Manganin because of their high resistivity and low temperature coefficients. The complete wire-wound resistor is coated with an insulating material such as baked enamel.
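Following the CMRR definition in question 124, a small worked example in which the gain figures are assumed rather than taken from the article:

import math

a_diff, a_cm = 10_000.0, 0.5          # assumed differential and common-mode gains
cmrr = a_diff / a_cm
print(20 * math.log10(cmrr))          # ~86 dB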
130. What is the one main difference between a UPS and an inverter?
A UPS (uninterruptible power supply) is mainly intended for short-duration backup, the backup time depending on its VA rating. UPSs are of two types, online and offline; an online UPS runs from a higher DC voltage and current for longer backup, while a small offline unit starts from a low DC voltage at around 7 A. An inverter, by contrast, starts from roughly 24 V to 36 V DC with battery capacities up to about 180 Ah, giving long-duration backup.

131. What operations are carried out in a thermal power station?
Water is heated in the boiler by burning coal to produce steam. The steam is directed onto the turbine, and the turbine, which is coupled to the generator, generates the electricity.

132. What is the difference between an electronic regulator and an ordinary rheostat regulator for fans?
With an electronic regulator the power losses are lower, because as the speed is reduced the regulator delivers only the power needed for that particular speed; with an ordinary rheostat-type regulator the power wastage is the same at every speed and no power is saved. An electronic regulator uses a triac for speed control, varying the firing angle; in rheostatic control, resistance is decreased in steps to achieve speed control.

133. What is a 2-phase motor?
A two-phase motor is a motor in which the starting winding and the running winding have a phase split, e.g. an AC servo motor, where the auxiliary winding and the control winding have a phase split of 90 degrees.

134. What does the quality factor depend on in resonance?
The quality factor Q depends on frequency and bandwidth.

135. What types of power are counted in electrical power?
Normally three types of power are counted: apparent power, active power and reactive power.

136. What are the advantages of a VSCF wind electrical system?
No complex pitch-changing mechanism is needed; the aero turbine always operates at its maximum-efficiency point; extra energy in the high-wind-speed region of the speed-duration curve can be extracted; and the aerodynamic stresses associated with constant-speed operation are significantly reduced.

137. What is slip in an induction motor?
Slip is the difference between the synchronous (flux) speed Ns and the rotor speed N. The rotor speed of an induction motor is always less than its synchronous speed. Slip is usually expressed as a percentage of the synchronous speed and is represented by the symbol "s".

138. Why is a link provided in the neutral of an AC circuit and a fuse in the phase?
The link is provided at the neutral common point from which the various connections for the individual control circuits are taken, and it is in solid link form so that it can withstand high currents. The fuse in the phase of each AC circuit is rated for that particular circuit (i.e. its load) only, so that if a malfunction occurs, only the fuse in the affected control circuit blows.

139. State the difference between a generator and an alternator.
Generators and alternators are both devices that convert mechanical energy into electrical energy, and both work on the principle of electromagnetic induction; the only difference is in their construction. A generator has a stationary magnetic field and a rotating armature conductor, with a commutator and brushes riding against each other, so that the induced emf is delivered as DC to the external load. An alternator has a stationary armature and a rotating magnetic field for high voltages; for low-voltage output, a rotating armature and a stationary magnetic field are used.
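A quick numeric check of the slip definition in question 137, for an assumed 50 Hz, 4-pole motor running at 1440 rpm:

f, poles = 50, 4
ns = 120 * f / poles          # synchronous speed: 1500 rpm
n = 1440                      # assumed measured rotor speed, rpm
print(100 * (ns - n) / ns)    # slip = 4.0 %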
140. What is ACSR cable and where is it used?
ACSR means aluminium conductor, steel reinforced. This conductor is used in transmission and distribution lines.

141. Why is a star-delta starter preferred with an induction motor?
A star-delta starter is preferred for the following reasons. During starting the windings are connected in star, which reduces the voltage across each winding by a factor of √3 and the line current to about one third of its direct-on-line value, so the supply voltage dip, the losses and the heating of the motor winding are all reduced, preventing damage to the winding. The trade-off is that the starting torque is also reduced to about one third; once the motor runs up to speed, it is switched to delta for full torque.

142. State the difference between a generator and an alternator.
See question 139, of which this is a repeat.

143. Why are AC systems preferred over DC systems?
a. It is easy to maintain and change the voltage of AC electricity for transmission and distribution.
b. The plant cost for AC transmission (circuit breakers, transformers, etc.) is much lower than for the equivalent DC transmission.
c. Power stations produce AC, so it is better to use it as AC than to convert it.
d. When a large fault occurs in a network, it is easier to interrupt in an AC system, because the sine-wave current naturally passes through zero at some point, making the current easier to break.

144. How can you relate power engineering to electrical engineering?
Power engineering is a subdivision of electrical engineering. It deals with the generation, transmission and distribution of energy in electrical form; the design of all power equipment also comes under power engineering. Power engineers may work on the design and maintenance of the power grid (on-grid systems) or on off-grid systems that are not connected to it.

145. What kinds of cables are used for transmission?
Cables used for transmitting power fall into three categories: low-tension cables, which can carry voltages up to 1000 V; high-tension cables, up to 23 kV; and super-tension cables, from 66 kV to 132 kV.

146. Why does back emf arise in a DC motor? Highlight its significance.
When the rotating armature conductors of a DC motor cut the magnetic flux between the poles, an emf is induced in them which opposes the current flowing through the conductors; this induced emf is called back emf. Its value depends on the speed of rotation of the armature. At starting, the value of the back emf is zero, which is why the starting current is high.

147. What is slip in an induction motor?
See question 137, of which this is a repeat.
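A rough check of the one-third current claim in question 141, with an assumed supply voltage and per-phase winding impedance:

v_line = 400.0                            # assumed line voltage, volts
z_phase = 10.0                            # assumed winding impedance per phase, ohms
i_delta = 3 ** 0.5 * v_line / z_phase     # direct-on-line (delta) line current, ~69.3 A
i_star = v_line / 3 ** 0.5 / z_phase      # line current when started in star, ~23.1 A
print(round(i_delta / i_star, 2))         # 3.0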
148. Explain the applications of storage batteries.
Storage batteries are used for various purposes, some of which are: the operation of protective devices and emergency lighting at generating stations and substations; starting, ignition and lighting of automobiles, aircraft, etc.; lighting on steam and diesel railway trains; as a power source in telephone exchanges, laboratories and broadcasting stations; and for emergency lighting at hospitals, banks and rural areas where an electricity supply is not available.

149. Explain the advantages of storage batteries.
A few advantages: they are the most efficient form of storing energy portably; the stored energy is available immediately, since there is no time lag in delivering it; they are a reliable source of supply; and the energy can be drawn at a fairly constant rate.

160. What are the different methods for starting a synchronous motor?
A synchronous motor can be started by the following two methods.
By means of an auxiliary motor: the rotor of the synchronous motor is rotated by the auxiliary motor; the rotor poles are then excited, the rotor field locks with the stator's revolving field, and continuous rotation is obtained.
By providing a damper winding: bar conductors are embedded in the outer periphery of the rotor poles and short-circuited by rings at both ends. The machine starts as a squirrel-cage induction motor; when it picks up speed, excitation is given to the rotor, the rotor field locks with the stator's revolving field, and the rotor rotates continuously.

161. Name the types of motors used in vacuum cleaners, phonographic appliances, vending machines, refrigerators, rolling mills, lathes, power factor improvement and cranes.
Vacuum cleaners: universal motors. Phonographic appliances: hysteresis motors. Vending machines: shaded-pole motors. Refrigerators: capacitor split-phase motors. Rolling mills: cumulative compound motors. Lathes: DC shunt motors. Power factor improvement: synchronous motors.

162. State Thevenin's theorem.
According to Thevenin's theorem, the current flowing through a load resistance RL connected across any two terminals of a linear, active, bilateral network is the ratio of the open-circuit voltage (the voltage across the two terminals when RL is removed) to the sum of the load resistance and the internal resistance of the network. It is given by Voc / (Ri + RL).

163. State Norton's theorem.
Norton's theorem states that, viewed from any two output terminals, a linear active network containing voltage sources can be replaced by a constant current source in parallel with a resistance. The constant current is equal to the short-circuit current that flows when the terminals are shorted, and the parallel resistance is the resistance of the network seen from the open-circuited terminals with all voltage and current sources removed and replaced by their internal resistances.
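A one-line application of Thevenin's theorem from question 162, with assumed values:

v_oc, r_i, r_l = 12.0, 2.0, 4.0       # assumed open-circuit voltage and resistances
print(v_oc / (r_i + r_l))             # 2.0 A through the load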
164. State the maximum power transfer theorem.
The maximum power transfer theorem states that the power a load resistance extracts from a network is greatest when the load resistance is equal to the resistance of the network as seen from the output terminals, with all energy sources removed and their internal resistances left behind.

165. Explain the different losses in a transformer.
There are two types of losses occurring in a transformer.
Constant losses, or iron losses: the losses that occur in the core, known as core or iron losses, of which there are two kinds, eddy-current loss and hysteresis loss. They depend on the supply voltage, frequency, core material and construction; as long as the supply voltage and frequency are constant, these losses remain the same whether the transformer is loaded or not, hence "constant losses".
Variable losses, or copper losses: when the transformer is loaded, current flows in the primary and secondary windings, and electrical energy is lost in the resistance of the primary and secondary windings. These losses depend on the loading conditions of the transformer, and are therefore called variable losses.

176. Explain the different types of DC motors and give their applications.
Shunt motors: nearly constant speed, though the starting torque is not very high; suitable for constant-speed drives where high starting torque is not required, such as pumps, blowers, fans, lathes, machine tools, and belt or chain conveyors.
Series motors: high starting torque, with speed inversely related to load, i.e. high when lightly loaded and low when heavily loaded; used in lifts, cranes, traction work, and coal loaders and coal cutters in coal mines.
Compound motors: high starting torque and variable speed, with the advantage that they can run at no load without any danger; used for high-inertia loads or loads requiring high intermittent torque, such as elevators, conveyors, rolling mills, planers, presses, shears, punches, coal cutters and winding machines.

177. Explain the process of commutation in a DC machine. What are interpoles and why are they required?
Commutation: while an armature coil moves under the influence of one pole pair, it carries a constant current in one direction. As the coil moves into the influence of the next pole pair, the current in it must reverse; this reversal of current in a coil is called commutation, and several coils undergo it simultaneously. The reversal of current is opposed by the coil's self-induced emf, and must therefore be aided in some fashion for smooth current reversal, which would otherwise result in sparking at the brushes. The aiding emf is dynamically induced in the coils undergoing commutation by compoles, or interpoles, which are excited in series by the armature current. They are located in the interpolar region between the main poles and therefore influence the armature coils only while those coils are undergoing commutation.
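Going back to question 164, a small sweep (source values assumed) showing that the delivered power peaks when the load matches the internal resistance:

v, r_i = 10.0, 5.0                        # assumed source voltage and internal resistance
for r_l in (1, 2.5, 5, 10, 20):
    p = (v / (r_i + r_l)) ** 2 * r_l      # power dissipated in the load, watts
    print(r_l, round(p, 2))               # the maximum, 5.0 W, occurs at r_l = 5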
178. Comment on the working principle of a single-phase transformer.
When an AC supply is applied to the primary winding, a current flows in it and sets up a flux. This flux links both the primary and secondary windings, and hence a voltage is induced in both. When a load is connected to the secondary side, current flows through the load and the secondary winding, resulting in the flow of additional current in the primary. According to Faraday's laws of electromagnetic induction, an emf is induced in both windings. The voltage induced in the primary winding is due to its self-inductance and is known as the self-induced emf; according to Lenz's law it opposes its cause, i.e. the supply voltage, and is therefore called the back emf. The voltage induced in the secondary coil is the mutually induced voltage. Hence, a transformer works on the principle of electromagnetic induction.

179. Define the following terms: reliability, maximum demand, reserve generating capacity, availability (operational).
Reliability: the capacity of the power system to serve all power demands without failure over long periods.
Maximum demand: the maximum load demand required in a power station during a given period.
Reserve generating capacity: extra generating capacity installed to cover the scheduled downtime needed for preventive maintenance.
Availability: the percentage of the time a unit is available to produce power, whether needed by the system or not.

180. What are the disadvantages of a low power factor? How can it be improved?
Disadvantages of a low power factor: line losses rise (at a power factor of 0.8, for example, the I²R losses are 1/0.8² ≈ 1.56 times those at unity power factor); larger generators and transformers are required; a low lagging power factor causes a large voltage drop, so extra regulation equipment is required to keep the drop within prescribed limits; and a greater conductor size is needed, since to transmit or distribute a fixed amount of power at a fixed voltage the conductors have to carry more current at low power factor.

181. State the methods of improving the power factor.
By connecting static capacitors in parallel with loads operating at lagging power factor. By using synchronous motors, which take a leading current when over-excited and therefore behave like capacitors. By using phase advancers to improve the power factor of induction motors: the phase advancer provides exciting ampere-turns to the rotor circuit of the motor, and by providing more ampere-turns than required, the induction motor can be made to operate at a leading power factor like an over-excited synchronous motor.

182. State the factors governing the choice of electrical system for an aero turbine.
The choice of electrical system for an aero turbine is guided by three factors: the type of electrical output (DC, variable-frequency AC or constant-frequency AC); the aero turbine's rotational speed (constant speed with variable blade pitch, nearly constant speed with a simpler pitch-changing mechanism, or variable speed with fixed-pitch blades); and the utilization of the electrical energy output (in conjunction with a battery or other form of storage, or interconnection with a power grid).

183. What are the advantages of a VSCF wind electrical system?
See question 136, of which this is a repeat.
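As a sketch of how questions 180 and 181 play out in practice, an assumed capacitor-sizing calculation for a 100 kW load corrected from 0.7 to 0.95 power factor:

import math

p_kw = 100.0
pf_old, pf_new = 0.70, 0.95                      # assumed initial and target power factors
q_old = p_kw * math.tan(math.acos(pf_old))       # reactive demand before correction, kVAR
q_new = p_kw * math.tan(math.acos(pf_new))       # reactive demand after correction, kVAR
print(round(q_old - q_new, 1))                   # ~69.2 kVAR of capacitors needed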
184. Explain the terms real power, apparent power and reactive power for AC circuits, and the units used.
Real power: the product of voltage, current and power factor, P = V I cos φ; its basic unit is the watt, expressed as W or kW.
Apparent power: the product of voltage and current, S = V I; its basic unit is the volt-ampere, expressed as VA or kVA.
Reactive power: the product of voltage, current and the sine of the angle between them, Q = V I sin φ; it has no other unit and is expressed in VAR or kVAR.

185. Define the following: average demand, maximum demand, demand factor, load factor.
Average demand: the average power requirement during some specified period of considerable duration.
Maximum demand: the greatest of all the demands that have occurred during a given period, measured according to specification over a prescribed time interval.
Demand factor: the ratio of the actual maximum demand made by the load to the rating of the connected load.
Load factor: the ratio of the average power to the maximum demand.

186. Explain forward resistance, static resistance and dynamic resistance of a p-n junction diode.
Forward resistance: the resistance offered in a diode circuit when the diode is forward biased.
DC or static resistance: the ratio of the DC voltage across the diode to the direct current flowing through it.
AC or dynamic resistance: the reciprocal of the slope of the diode's forward characteristic; it is the resistance offered by a diode to a changing forward current.

187. How does the Zener phenomenon differ from avalanche breakdown?
The Zener effect is the sudden increase of reverse current under a very high reverse voltage, caused by a very high electric field across the junction. Zener breakdown and avalanche breakdown may occur independently, or both may occur simultaneously. Diode junctions that break down below about 5 V do so by the Zener effect; junctions that break down above about 5 V do so by the avalanche effect. Zener breakdown occurs in heavily doped junctions, which produce narrow depletion layers, whereas avalanche breakdown occurs in lightly doped junctions, which produce wide depletion layers.

191. Compare JFETs and MOSFETs.
JFETs can be operated only in depletion mode, whereas MOSFETs can be operated in either depletion or enhancement mode. In a JFET, if the gate is forward biased, excess-carrier injection occurs and the gate current is substantial. MOSFETs have a much higher input impedance than JFETs, owing to their negligibly small leakage current. JFETs have flatter characteristic curves than MOSFETs, indicating a higher drain resistance. When a JFET is operated with a reverse bias on the junction, the gate current IG is larger than it would be in a comparable MOSFET.
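Putting the three definitions of question 184 into numbers (the voltage, current and phase angle are assumed):

import math

v, i, phi = 230.0, 10.0, math.radians(30)   # assumed RMS voltage, current, phase angle
s = v * i                                   # apparent power: 2300 VA
p = s * math.cos(phi)                       # real power: ~1992 W
q = s * math.sin(phi)                       # reactive power: 1150 VAR
print(round(s), round(p), round(q))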
192. Explain thin-film resistors and wire-wound resistors.
See question 129, of which this is a repeat.

193. What is a differential amplifier? Also, explain CMRR.
See question 124, of which this is a repeat.

196. What is the difference between an electronic regulator and an ordinary electrical rheostat regulator for fans?
See question 132, of which this is a repeat.

197. What is the voltage gain or transfer function of an amplifier?
Vout / Vin.

198. What does kVAR mean?
kVAR indicates reactive electrical power: kilovolt-amperes reactive.

199. Why is a VCB used on high-transmission systems? Why can't an ACB be used?
Vacuum has a much higher arc-quenching capability than air; the dielectric strength in a VCB is about eight times that of air. That is why vacuum breakers are always used on HT systems and air breakers on LT systems.

200. What is the difference between an MCB and an MCCB, and where are they used?
An MCB (miniature circuit breaker) is thermally operated and used for short-circuit protection in small current-rating circuits. An MCCB (moulded-case circuit breaker) is thermally operated for overload current and magnetically operated for instantaneous tripping under short-circuit conditions; under-voltage and under-frequency releases may be built in. An MCCB is normally used where the normal current is more than 100 A.
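For question 197, the gain is often quoted in decibels; a tiny example with assumed voltages:

import math

v_in, v_out = 0.1, 2.0                    # assumed input and output voltages
print(20 * math.log10(v_out / v_in))      # ~26 dB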
scepticaladventure · 8 years ago
Text
9  Light - Some Important Background  18Aug17
Introduction
We observe the Universe, and physics within the Universe, and we try to make sense of it. There is often tension between our natural impression of the physical world and what our models and mathematical logic tell us.
Consider the most important of our senses – sight. Our eyes detect photons of light and our brain composes this information into a visualization of the world around us. That becomes our subjective perceived reality.
Nearly all the information we receive about the Universe arrives in the form of electromagnetic radiation (which I will loosely refer to as ‘light’).
However, light takes time to travel between its source and our eyes (or other detectors such as cameras). Hence all the information we receive is already old. We see things not as they are, but as they were when the light was emitted, which can be a considerable time ago. This means we are seeing objects when they were much younger than they are "now". In other words, we are looking back in time at what they looked like then.
Light from the sun takes about eight minutes to reach us. Light from the nearest star takes about 4 years. Light from the nearest spiral galaxy (Andromeda) is about 2.5 million years old (and Andromeda is approaching us at about 110 km/sec). Light from distant galaxies and quasars can be billions of years old. In fact our telescopes can detect light (microwaves, actually) so old that it originated at the time the early universe first became transparent enough for light to travel at all.
Imagine we are at the centre of concentric shells, rather like an infinite onion. At any one moment, we are receiving light from all these shells, but the bigger the shell from which the light originated, the older the information. So what we are seeing is a complete sample of history stretching back over billions of years.
It would be a mind-boggling exercise to try to reimagine our mental model of what the universe is really like "now" everywhere. The only way I can think of to tackle this would be some sort of computerized animation.
Even then there are a range of other distortions to contend with. All the colors we see are affected by the relative speeds between us and the sources of the light. And light is bent by gravity, so some of what we see is not where we think it is. There are other distortions as well, including relativistic distortions. So, in short, what we see is only approximately true. Believing what we see works well for most purposes on everyday earth but it works less well on cosmological time and distance scales.
Light is vital to our Perception of Nature
Electromagnetic radiation is by far the main medium through which we receive information about the rest of the universe. We also receive some information from comets, meteorites, sub-atomic particles, neutrinos and possibly even some gravitational waves, but these sources pale into insignificance compared to the information received from light in all its forms (gamma rays, x-rays, visible light, microwaves, radio waves).
Since we rely so heavily on this form of information it is a concern that the nature of light has perplexed mankind for centuries, and is still causing trouble today.
Hundreds of humanity’s greatest minds have grappled with the nature of light. (Newton, Huygens, Fresnel, Fizeau, Young, Michelson, Einstein, Dirac … the list goes on).
At the same time the topic is still taught and described quite badly, perpetuating endless confusion. Conceptual errors are perpetuated with abandon. For example, radio waves are shown as a set of rings radiating out from the antenna like water ripples in a pond. If this were true then they would lose energy and hence change frequency with increasing distance from the source.
Another example: it is widely taught that Einstein's work on the photoelectric effect shows that light must exist as quantized packets of energy and that only certain energy levels are possible. I think the equation E = h × frequency (where h is Planck's constant) does not say this at all: the frequency can take any value whatsoever, integer or fractional. The confusion arises because photons are commonly created by electrons moving between quantized energy levels in atoms, and photons are commonly detected by physical systems which are also quantized. But if a photon arrives whose energy does not match exactly one of these quantized levels and is absorbed, the difference simply ends up in the kinetic energy of the detector. Or so it seems to me.
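To put a number on this point (the constants below are standard; the frequency is just an example, since nothing in E = h × f restricts it to special values):

h = 6.626e-34                  # Planck's constant, J*s
f = 5.45e14                    # an arbitrary green-light frequency, Hz
e = h * f
print(e, e / 1.602e-19)        # ~3.6e-19 J, i.e. ~2.25 eV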
The Early Experimenters
Most of the progress in gathering evidence about light has been achieved since the middle of the 17th century. Galileo Galilei thought that light must have a finite speed of travel and tried to measure this speed. But he had no idea how enormously fast light travelled and did not have the means to cope with this.
Sir Isaac Newton was born in the year that Galileo died (1642 – which was also the year the English Civil War started and Abel Tasman discovered Tasmania). As well as co-inventing calculus, explaining gravity and the laws of motion, Newton conducted numerous experiments on light, taking advantages of progress in glass, lens and prism manufacturing techniques.  I think Newton is still the greatest physicist ever.
In experiment #42 Newton separated white sunlight into a spectrum of colors. With the aid of a second prism he turned the spectrum back into white light. The precise paths of the beams in his experiments convinced him that light was “corpuscular” in nature. He argued that if light was a wave then it would tend to spread out more.
Other famous scientists of the day (e.g. Huygens) formed an opinion that light was more akin to a water wave. They based this opinion on many experiments with light that demonstrated various diffraction and refraction effects.
Newton’s view dominated due to his immense reputation, but as more and more refraction and diffraction experiments were conducted (e.g. by Fresnel, Brewster, Snell, Stokes, Hertz, Young, Rayleigh etc) light became to be thought of as an electromagnetic wave.
The Wave Model
The model that emerged was that light is a transverse sinusoidal electro-magnetic wave, with magnetic components orthogonal to the electric components. This accorded well with the electromagnetic field equations developed by James Clerk Maxwell.
Light demonstrates a full variety of polarization properties. A good way to model these properties is to imagine that light consists of two electromagnetic sine waves travelling together with a variable phase angle between them. If the phase angle is zero the light is plane polarized. If the phase angle is 90 degrees then the light exhibits circular polarization. And so on. The resultant wave is the vector sum of the two constituent waves.
Most people are familiar with the effect that if you place one linear polarizing filter at right angles to another, then no light passes through both sheets. But if you place a third sheet between the other two, angled at 45 degrees to both the other two filters, then quite a lot of light does get through. How can adding a third filter result in more light getting through?
The answer is that the light leaving the first filter has two components, each at 45 degrees to the first sheet’s plane of polarization. Hence a fair bit of light lines up reasonably well with the interspersed middle sheet. And the light leaving the middle sheet also has two components, each at 45 degrees to its plane of polarization. Hence a fair bit of the light leaving the interspersed sheet lines up reasonably well with the plane of polarization of the last sheet.
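The three-filter result can be checked with Malus's law, I = I0 cos²(θ), assuming ideal polarizers; the sketch below is an added illustration, not the author's:

import math

def through(i0, *angles):
    i = i0 / 2                                   # an ideal polarizer halves unpolarized light
    for prev, a in zip(angles, angles[1:]):
        i *= math.cos(math.radians(a - prev)) ** 2
    return i

print(through(1.0, 0, 90))        # 0.0   : the crossed pair blocks everything
print(through(1.0, 0, 45, 90))    # 0.125 : a 45-degree middle filter lets light through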
Interesting effects were discovered when light passes through crystals with different refractive indices in different planes (see birefringence). Also when light was reflected or refracted using materials with strong electric or magnetic fields across them (see Faraday effect and Kerr effect).
Young’s Double Slit Experiment
Experiments performed by Thomas Young around 1801 are of special interest. Light passing through one slit produces a diffraction pattern analogously to the pattern a water wave might produce. When passed through two parallel slits and then captured on a screen a classic interference pattern can be observed. This effect persists even if the light intensity is so low that it could be thought of as involving just one photon at a time. More on this later.
The Corpuscular Model Returns
At the start of the 20th century, Albert Einstein and others studied experiments that demonstrated that light could produce free electrons when it struck certain types of metal – the photoelectric effect. But only when the incident light was above a characteristic frequency. This experiment was consistent with light being a sort of particle. It helped to revive the corpuscular concept of light.
Arthur Compton showed that the scattering of light through a cloud of electrons was also consistent with light being corpuscular in nature. There were a lot of scattering experiments going on at the time because the atomic structure of atoms was being discovered largely through scattering experiments (refer e.g. Lord Rutherford).
The “light particle” was soon given a new name - the photon.
Wave Particle Duality
Quantum mechanics was being developed at the same time as the corpuscular theory of light re-emerged, and quantum theories and ideas were extended to light. The wave versus particle argument eventually turned into the view that light was both a wave and a particle, (see Complementarity Principle). What you observed depended on how you observed it.
Furthermore, you could never be exactly sure where a photon would turn up (see Heisenberg Uncertainty Principle, Schrodinger Wave equation and Superposition of States).
The wave equation description works well, but certain aspects of the model perplexed scientists of the day and have perplexed students of physics ever since. In particular there were many versions of Young's double slit experiment with fast-acting shutters covering one or both slits. It turns out that if an experimenter can tell which slit the photons have passed through, the interference pattern vanishes. If it is impossible to determine which slit the photons have passed through, the interference pattern reappears.
It does not matter if the decision to open one slit or the other is made after the photons have left their source – the results are still the same. And if pairs of photons are involved and one of them is forced into adopting a certain state at the point of detection, then the other photon of each pair adopts the equal and opposite state, even though it might be a very long distance away from where its partner is being detected.
This all led to a variety of convoluted explanations, including the view that the observations were in fact causal factors determining reality. An even more bizarre view is that the different outcomes occur in different universes.
At the same time as all this was going on, a different set of experiments was leading to a radical new approach to understanding the world of physics – Special Relativity. (See an earlier essay in this series.)
The Speed of Light
Waves (water waves, sound waves, waves on a string etc.) typically travel at well-defined speeds in the medium in which they occur. By analogy, it was postulated that light waves must be travelling in an invisible "luminiferous aether", that this aether filled the whole galaxy (only one galaxy was known at the time), and that light travelled at a well-defined speed relative to this aether.
Bradley, Eotvos, Roemer and others showed that telescopes had to lean a little bit one way, and then a little bit the other way six months later, in order to maintain a fixed image of a distant star. This stellar aberration was interpreted as being caused by the earth moving through the luminiferous aether.
So this should produce a kind of "aether wind". The speed of light should be faster when it is travelling with the wind than when it is travelling against the wind. The earth moves quite rapidly in its orbit around the sun: there is a 60 km/sec difference in the velocity of the earth with respect to the "fixed stars" over a six-month period due to this movement alone. In addition, the surface of the earth is moving (at about 0.5 km/sec at the equator) due to the earth's own rotation.
In 1887 a famous experiment was carried out in Ohio by Michelson and Morley. They split a beam of light into two paths of equal length but at right angles to each other. The two beams were then recombined, and the apparatus was set up to look for interference effects. Light travelling back and forth in a moving medium should take longer if its path lines up with the aether wind than if its path goes across the wind and back. (See the swimmer-in-the-stream analogy in an earlier blog.)
However, no matter which way the experiment was oriented, no interference effects could be detected. No aether wind or aether wind effects could be found. It became the most famous null experiment in history.
Fizeau measured the speed of light travelling in moving water around a closed path. He sent beams in either direction and looked for small interference effects. He found a small difference in the time of travel (the partial drag described by Fresnel's drag coefficient), but not nearly as much as if the speed of light were relative to an aether medium through which the earth was moving.
Other ingenious experiments were performed to measure the speed of light. Many of these involved bouncing light off rotating mirrors and suchlike and looking for interference effects. In essence the experimenters were investigating the speed of light over a two-way, back-and-forth path. Some other methods used astronomical approaches. But they all came up with the same answer – about 300 million meters/second (when in a vacuum.)
It did not matter if the source of light is stationary relative to the detection equipment, or whether the source of light is moving towards the detection equipment, or vice versa. The measured or inferred speed of light was always the same. This created an immediate problem – where were the predicted effects of the aether wind?
Some scientists speculated that the earth must drag the aether surrounding it along with it in its heavenly motions. But the evidence from the earlier stellar aberration experiments showed that this could not be the case either.
So the speed of light presented quite a problem.
It was not consistent with the usual behaviour of a wave. Waves ignore the speed of their source and travel at well-defined speeds within their particular media. If the source is travelling towards the detector, all that happens is that the waves are compressed together. If the source is travelling away from the detector, all that happens is that the waves are stretched out (Doppler shifts).
But if the source is stationary in the medium and the detector is moving then the detected speed of the wave is simply the underlying speed in the medium plus the closing speed of the detector (or minus that speed if the detector is moving away).
The experimenters did not discover these effects for light. They always got the same answer.
Nor is the speed of light consistent with what happens when a particle is emitted. Consider a shell fired from a cannon on a warship. If the warship is approaching the detector, the warship’s speed adds to the speed of the shell. If the detector is approaching the warship then the detector’s speed adds to the measured impact speed of the shell.  This sort of thing did not happen for light.
Lorentz, Poincaré and Fitzgerald were some of the famous scientists who struggled to explain the experimental results. Between 1892 and 1895, Hendrik Lorentz speculated that lengths contracted when the experimental equipment was pushed into an aether headwind. But this did not entirely account for the results, so he also speculated that time must slow down in such circumstances, and developed the notion of "local time".
Quite clearly, the measurement of speed is intimately involved with the measurement of both distance and time duration. Lorentz imagined that when a measuring experiment was moving through the aether, lengths and times distorted in ways that conspired to always give the same result for the speed of light no matter what the orientation to the supposed aether wind.
Lorentz developed a set of equations (Lorentz transformations for 3 dimensional coordinates plus time, as corrected by Poincaré) so that a description of a physical system in one inertial reference frame could be translated to become a description of the same physical system in another inertial reference frame. The laws of physics and the outcome of experiments held true in both descriptions.
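A minimal sketch of the one-dimensional Lorentz transformation described here (the example event and the relative speed are assumed for illustration):

import math

C = 299_792_458.0                               # speed of light, m/s

def lorentz(t, x, v):
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)     # Lorentz factor gamma
    return g * (t - v * x / C ** 2), g * (x - v * t)

# An event one light-second away at t = 0, viewed from a frame moving at 0.6c:
t2, x2 = lorentz(0.0, C, 0.6 * C)
print(t2, x2 / C)                               # t' = -0.75 s, x' = 1.25 light-seconds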
Einstein built on this work to develop his famous theory of Special Relativity. But he did not bother to question or explain why the speed of light seemed to be always the same – he just took it as a starting point assumption for his theory.
Many scientists clung to the aether theory. However, as it seemed that the aether was undetectable and Special Relativity became more and more successful and accepted, the aether theory was slowly and quietly abandoned.
Young’s Double Slit Experiment (again)
Reference Wikipedia:  
“The modern double-slit experiment is a demonstration that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena.
A simpler form of the double-slit experiment was performed originally by Thomas Young in 1801 (well before quantum mechanics). He believed it demonstrated that the wave theory of light was correct. The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves that later combine into a single wave. Changes in the path lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a mirror.
In the basic version of this experiment, a coherent light source, such as a laser beam, shines on a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen, as a result that would not be expected if light consisted of classical particles.
However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves), the interference pattern appearing via the varying density of these particle hits on the screen.
Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as a wave would). Such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. These results demonstrate the principle of wave–particle duality. “
In this author’s view, there is so much amiss with this conventional interpretation of Young’s Double Slit Experiment that it is hard to know where to begin. I think the paradox is presented in an unhelpful way and then explained in an unsatisfactory way. It is presented as a clash between a wave theory of light and a particle theory of light, and it concludes by saying that light therefore has wave-particle duality.
Deciding that a photon has “wave-particle duality” seems to satisfy most people, but actually it is just enshrining the problem. Just giving the problem a name and saying “that is just the way it is” doesn’t really resolve the issue, it just sweeps it under the carpet.
In this author’s view, what the experimental evidence is telling us is that light is not a wave and that it is not a particle. Neither is it both at the same time (being careful about what that actually means), or one or the other on a whim. It is what it is.
Here is just one of this author’s complaints about the conventional explanation of the double slit experiment. In my opinion, if you place a detector at one slit or the other and you detect a photon, then you have destroyed that photon. Photons can only be detected once. To detect a photon is to destroy it.
A detector screen tells you nothing about the path taken by a photon that manages to arrive at the final screen, other than it has arrived. You have to deduce the path by other means.
Wikipedia again:  “The double-slit experiment (and its variations) has become a classic thought experiment for its clarity in expressing the central puzzles of quantum mechanics. Because it demonstrates the fundamental limitation of the ability of an observer to predict experimental results, (the famous physicist and educator) Richard Feynman called it "a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the mystery [of quantum mechanics].”   Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.”
There is a class of experiments, known as delayed choice experiments, in which the mode of detection is changed only after the photons have begun their journey. (See Wheeler Delayed Choice Experiments, circa 1980’s  – some of these are thought experiments). The results change depending on the method of detection and seem to produce a paradox.
Reference the Wikipedia article on Young’s slit experiment, quoting John Archibald Wheeler from the 1980’s:  “Actually, quantum phenomena are neither waves nor particles but are intrinsically undefined until the moment they are measured. In a sense, the British philosopher Bishop Berkeley was right when he asserted two centuries ago "to be is to be perceived."
Wheeler went on to suggest that there is no reality until it is perceived, and that the method of perception must determine the phenomena that gave rise to that perception.
They say that fools rush in where angels fear to tread. So, being eminently qualified, the author proposes to have a go at explaining Young’s Double Slit Experiment. But first he would like to suggest a model for photons based on the evidence of the experiments, Einstein’s Special Relativity and some fresh thinking.
micaramel · 5 years ago
Link
Artists: Thomas Fougeirol, Jo-ey Tang
Venue: Lyles & King, New York
Exhibition Title: Animot
Date: February 16 – March 22, 2020
Images:
Images courtesy of Lyles & King, New York
Press Release:
“The animal looks at us, and we are naked before it. Thinking perhaps begins there.” (1)
Upon his naked exit from the shower, when French philosopher Jacques Derrida saw his cat staring at him, the urgency to cover up was met with a questioning of his need to do so. This moment of shame was a reckoning.
In addressing the violent gesture in the act of naming of “animal,” Derrida coined the term “animot,” a portmanteau of animaux (“animals” in French) and mot (“word” in French). Animot thus inscribes its own mechanism of naming.
)(
For Animot at Lyles & King, Jo-ey Tang and Thomas Fougeirol take up this “perhaps begins there” with debris and matters from the past, as wrought on and beneath the surfaces of Fougeirol’s receptively layered paintings and in Tang’s consideration of the generative fluidity of the condition, status and temporality of art and its document/ation.
)(
Fougeirol applies layers of gesso and oil paint on canvases, which take months to dry, and on and into them he throws debris, trash, and extra stuff collected from the streets of New York, where the artist keeps a studio. Previously his selection of materials was limited to dust particles and elements generated from his studio activities. While he works simultaneously on multiple taxonomies of painting series, plastic gallon containers with their spouts cut off serve as paint buckets. For Animot, the dried-up sedimentation of paint-cakes lodged in the bottom of the buckets is employed as both mark-making device and self-referential paint-object. These deposits are re-deposited on and into the fresh layers of still-drying canvases. Dead paint meets fresh paint. Sometimes these paintings register gravitational pull, and sometimes they trick the eye into pulling them back up. They operate across multiple coordinates, pivoting between flatness and depth, between what they look like and what they might be.
The impulse to equalize and recalibrate value can be found as a parallel in Fougeirol’s anthropological research project on studio practice, INTOTO (beginning in 2016 and with exhibitions having taken place in New York, Berlin, and Paris so far in seven locations), with artists Julien Carreyn and Pepo Salazar. They collectively cull a range of things with unstable status: scraps, items, material tests, and for-now failures from artists’ studios, and install these finds evenly spaced in a line as a horizon of possibilities. Each is sold democratically for 100 dollars or 100 euros. These non-works show a path, whether abandoned or a way, forgoing what they might be for what they look like.
)(
For the past decade, Jo-ey Tang has attuned to the conditions of his life, its constraints and limits of energy-time, as a person, and in the ecology in the field of art, as curator of art institutions, writer and communicator with artists, to shape his non-studio and non-practicing art practice. With an ethos of non-output, Tang only generates artworks on the occasion of invitations, where concretion from past exhibitions are often dragged into the present as a kind of ephemeral anti-ephemerality. He insistently destabilizes the status of artworks and the status of documentation, as a moving target which could take the forms of photography, language, and objects. For example, photographic works might be generated by using sculptural elements or documentation from previous exhibitions, only to be broken apart into disparate images and works, and to be built up again to generate new iterations. The movement between conflict and freedom – whose and which work, what forms does it takes and how – is ongoing and not meant to be resolvable.
In the past few years, Tang has turned his focus on projects with other artists in the form of two-person exhibitions in lieu of solo exhibitions, to allow for proximities and resonances for his own works to come into being. He fluidly moves these works across time and space and medium and from the company of one artist to another, in order to take a shot at boundaries. Casting unequal measures of self-doubt and trust, in this keep-doing, he hopes these works know no ends.
)(
In Tang and Fougeirol’s 2017 exhibition at the gallery, Bullet Through Glass, they filled various craters of the concrete floor with macadamia milk – its allusion to the contemporary condition via the consumption and the proliferation of dairy’s various substitutes – which curdled through the duration of the exhibition. A clear acrylic box contained in its middle layer a pool of macadamia milk. Above it, on the top layer was Harold Edgerton’s Bullet Through Glass (1962), capturing the impact of the action using high-speed photography, a technique he pioneered and also employed in the iconic image Milk Drop Coronet (1957).
In Animot, Tang shares the photographic documentation of this previous floor installation, focusing for the most part on one specific crater shot from multiple angles. Here, they resemble some of the basic strokes that form the basis of Chinese characters. Depending on perspective, Tí (提) “Raising”, Wān (彎) “Bending”, Piě (撇) “Left-falling stroke”, and Nà (捺) “Right-falling stroke”. And elsewhere, Diǎn (點) “Dot”,
Héng (橫) “Horizontal”, Shù (竪) “Vertical”, and Gōu (鉤) “Hook”. Meaning is contingent and only emerges through combinational alignments, activated by energies. Created through a brief contact between the analog and the digital – by manually holding a sheet of unexposed photographic paper against a computer screen – that registers the movement and force of each encounter, the resulting prints are placed on the gallery floor as an acknowledgement and a compression.
)(
Fougeirol and Tang had originally planned their iterations of the two-person exhibition format to occur every few years alongside a third person. This time, they realize this third entity is simply the past.
Animot marks the present as an end-beginning or a beginning-end, like a pair of outward-facing brackets that flip the notion of the supplementary.
)(
Like the tightening and clearing – ahem – of the throat full of internal utterings that project.
  (1) Derrida, Jacques, The Animal that Therefore I Am (New York: Fordham University Press, 2008), 29.
Link: Thomas Fougeirol, Jo-ey Tang at Lyles & King
marcos008-blog · 6 years ago
Text
SAKURAI – An Emergent Behavior of Collective Quantum Systems.
Quantum chromodynamics
In theoretical physics, quantum chromodynamics (QCD) is the theory of strong interactions, a fundamental force describing the interactions between quarks and gluons which make up hadrons such as the proton, neutron and pion. QCD is a type of quantum field theory called a non-abelian gauge theory with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carrier of the theory, like photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A large body of experimental evidence for QCD has been gathered over the years.
QCD exhibits two peculiar properties:
Confinement, which means that the force between quarks does not diminish as they are separated. Because of this, when you do separate a quark from other quarks, the energy in the gluon field is enough to create another quark pair; they are thus forever bound into hadrons such as the proton and the neutron or the pion and kaon. Although analytically unproven, confinement is widely believed to be true because it explains the consistent failure of free quark searches, and it is easy to demonstrate in lattice QCD.
Asymptotic freedom, which means that in very high-energy reactions, quarks and gluons interact very weakly, creating a quark–gluon plasma. This prediction of QCD was first discovered in the early 1970s by David Politzer, Frank Wilczek and David Gross. For this work they were awarded the 2004 Nobel Prize in Physics.
The phase transition temperature between these two properties has been measured by the ALICE experiment to be well above 160 MeV. Below this temperature, confinement is dominant, while above it, asymptotic freedom becomes dominant.
Terminology
American physicist Murray Gell-Mann (b. 1929) coined the word quark in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in Finnegans Wake by James Joyce. On June 27, 1978, Gell-Mann wrote a private letter to the editor of the Oxford English Dictionary, in which he related that he had been influenced by Joyce’s words: "The allusion to three quarks seemed perfect." (Originally, only three quarks had been discovered.) Gell-Mann, however, wanted to pronounce the word to rhyme with "fork" rather than with "park", as Joyce seemed to indicate by rhyming words in the vicinity such as Mark. Gell-Mann got around that "by supposing that one ingredient of the line ‘Three quarks for Muster Mark’ was a cry of ‘Three quarts for Mister …’ heard in H.C. Earwicker’s pub", a plausible suggestion given the complex punning in Joyce’s novel.
The three kinds of charge in QCD (as opposed to one in quantum electrodynamics or QED) are usually referred to as "color charge" by loose analogy to the three kinds of color (red, green and blue) perceived by humans. Other than this nomenclature, the quantum parameter "color" is completely unrelated to the everyday, familiar phenomenon of color.
Since the theory of electric charge is dubbed "electrodynamics", the Greek word "chroma" Χρώμα (meaning color) is applied to the theory of color charge, "chromodynamics".
History
With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima. To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann and Yuval Ne’eman. Gell-Mann and George Zweig, correcting an earlier approach of Shoichi Sakata, went on to propose in 1963 that the structure of the groups could be explained by the existence of three flavors of smaller particles inside the hadrons: the quarks.
Perhaps the first remark that quarks should possess an additional quantum number was made as a short footnote in the preprint of Boris Struminsky, in connection with the Ω− hyperon, composed of three strange quarks with parallel spins (this situation was peculiar because, since quarks are fermions, such a combination is forbidden by the Pauli exclusion principle):
Three identical quarks cannot form an antisymmetric S-state. In order to realize an antisymmetric orbital S-state, it is necessary for the quark to have an additional quantum number. — B. V. Struminsky, Magnetic moments of barions in the quark model, JINR-Preprint P-1939, Dubna, Submitted on January 7, 1965
Boris Struminsky was a PhD student of Nikolay Bogolyubov. The problem considered in this preprint was suggested by Nikolay Bogolyubov, who advised Boris Struminsky in this research. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom. This work was also presented by Albert Tavkhelidze, without obtaining the consent of his collaborators, at an international conference in Trieste (Italy) in May 1965.
A similar mysterious situation arose with the Δ++ baryon; in the quark model, it is composed of three up quarks with parallel spins. In 1965, Moo-Young Han with Yoichiro Nambu and Oscar W. Greenberg independently resolved the problem by proposing that quarks possess an additional SU(3) gauge degree of freedom, later called color charge. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons.
Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was defined as a particle which could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: He meant quarks are confined, but he also was implying that the strong interactions could probably not be fully described by quantum field theory.
Richard Feynman argued that high energy experiments showed quarks are real particles: he called them partons (since they were parts of hadrons). By particles, Feynman meant objects which travel along paths, elementary particles in a field theory.
The difference between Feynman’s and Gell-Mann’s approaches reflected a deep split in the theoretical physics community. Feynman thought the quarks have a distribution of position or momentum, like any other particle, and he (correctly) believed that the diffusion of parton momentum explained diffractive scattering. Although Gell-Mann believed that certain quark charges could be localized, he was open to the possibility that the quarks themselves could not be localized because space and time break down. This was the more radical approach of S-matrix theory.
James Bjorken proposed that pointlike partons would imply certain relations should hold in deep inelastic scattering of electrons and protons, which were spectacularly verified in experiments at SLAC in 1969. This led physicists to abandon the S-matrix approach for the strong interactions.
The discovery of asymptotic freedom in the strong interactions by David Gross, David Politzer and Frank Wilczek allowed physicists to make precise predictions of the results of many high energy experiments using the quantum field theory technique of perturbation theory. Evidence of gluons was discovered in three-jet events at PETRA in 1979. These experiments became more and more precise, culminating in the verification of perturbative QCD at the level of a few percent at the LEP in CERN.
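To make the scale dependence concrete, here is a minimal Python sketch of the standard one-loop running coupling, \(\alpha_s(Q^2) = 12\pi/[(33 - 2n_f)\ln(Q^2/\Lambda^2)]\); the flavor number and \(\Lambda\) value below are illustrative assumptions, not values taken from this article.

import math

def alpha_s(Q, n_f=5, lam=0.2):
    """One-loop QCD running coupling; Q and lam (Lambda_QCD) in GeV."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q**2 / lam**2))

# The coupling shrinks as the energy scale grows -- asymptotic freedom:
for Q in (2.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({Q:6.1f} GeV) = {alpha_s(Q):.3f}")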
The other side of asymptotic freedom is confinement. Since the force between color charges does not decrease with distance, it is believed that quarks and gluons can never be liberated from hadrons. This aspect of the theory is verified within lattice QCD computations, but is not mathematically proven. One of the Millennium Prize Problems announced by the Clay Mathematics Institute requires a claimant to produce such a proof. Other aspects of non-perturbative QCD are the exploration of phases of quark matter, including the quark–gluon plasma.
The relation between the short-distance particle limit and the confining long-distance limit is one of the topics recently explored using string theory, the modern form of S-matrix theory.
Some definitions
Every field theory of particle physics is based on certain symmetries of nature whose existence is deduced from observations. These can be
local symmetries, which act independently at each point in spacetime. Each such symmetry is the basis of a gauge theory and requires the introduction of its own gauge bosons.

global symmetries, which are symmetries whose operations must be simultaneously applied to all points of spacetime.
QCD is a gauge theory of the SU(3) gauge group obtained by taking the color charge to define a local symmetry.
Since the strong interaction does not discriminate between different flavors of quark, QCD has approximate flavor symmetry, which is broken by the differing masses of the quarks.
There are additional global symmetries whose definitions require the notion of chirality, the discrimination between left- and right-handed. If the spin of a particle has a positive projection on its direction of motion then it is called right-handed; otherwise, it is left-handed. Chirality and handedness are not the same, but become approximately equivalent at high energies.
Chiral symmetries involve independent transformations of these two types of particle. Vector symmetries (also called diagonal symmetries) mean the same transformation is applied on the two chiralities. Axial symmetries are those in which one transformation is applied on left-handed particles and the inverse on the right-handed particles.
Additional remarks: duality
As mentioned, asymptotic freedom means that at large energy – which corresponds also to short distances – there is practically no interaction between the particles. This is in contrast – more precisely one would say dual – to what one is used to, since usually one connects the absence of interactions with large distances. However, as already mentioned in the original paper of Franz Wegner, a solid state theorist who introduced in 1971 simple gauge-invariant lattice models, the high-temperature behaviour of the original model, e.g. the strong decay of correlations at large distances, corresponds to the low-temperature behaviour of the (usually ordered!) dual model, namely the asymptotic decay of non-trivial correlations, e.g. short-range deviations from almost perfect arrangements, at short distances. Here, in contrast to Wegner, we have only the dual model, which is the one described in this article.
Symmetry groups
The color group SU(3) corresponds to the local symmetry whose gauging gives rise to QCD. The electric charge labels a representation of the local symmetry group U(1) which is gauged to give QED: this is an abelian group. If one considers a version of QCD with Nf flavors of massless quarks, then there is a global (chiral) flavor symmetry group \(SU_L(N_f) \times SU_R(N_f) \times U_B(1) \times U_A(1)\). The chiral symmetry is spontaneously broken by the QCD vacuum to the vector (L+R) subgroup \(SU_V(N_f)\) with the formation of a chiral condensate. The vector symmetry \(U_B(1)\) corresponds to the baryon number of quarks and is an exact symmetry. The axial symmetry \(U_A(1)\) is exact in the classical theory, but broken in the quantum theory, an occurrence called an anomaly. Gluon field configurations called instantons are closely related to this anomaly.
There are two different types of SU(3) symmetry: there is the symmetry that acts on the different colors of quarks, and this is an exact gauge symmetry mediated by the gluons, and there is also a flavor symmetry which rotates different flavors of quarks to each other, or flavor SU(3). Flavor SU(3) is an approximate symmetry of the vacuum of QCD, and is not a fundamental symmetry at all. It is an accidental consequence of the small mass of the three lightest quarks.
In the QCD vacuum there are vacuum condensates of all the quarks whose mass is less than the QCD scale. This includes the up and down quarks, and to a lesser extent the strange quark, but not any of the others. The vacuum is symmetric under SU(2) isospin rotations of up and down, and to a lesser extent under rotations of up, down and strange, or full flavor group SU(3), and the observed particles make isospin and SU(3) multiplets.
The approximate flavor symmetries do have associated gauge bosons, observed particles like the rho and the omega, but these particles are nothing like the gluons and they are not massless. They are emergent gauge bosons in an approximate string description of QCD.
Lagrangian
The dynamics of the quarks and gluons are controlled by the quantum chromodynamics Lagrangian. The gauge invariant QCD Lagrangian is
\[
\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}_i\left(i(\gamma^\mu D_\mu)_{ij} - m\,\delta_{ij}\right)\psi_j - \frac{1}{4}\,G^a_{\mu\nu}\,G_a^{\mu\nu}
\]

where \(\psi_i(x)\) is the quark field, a dynamical function of spacetime, in the fundamental representation of the SU(3) gauge group, indexed by \(i, j, \ldots\); \(\mathcal{A}^a_\mu(x)\) are the gluon fields, also dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by a, b, …. The \(\gamma^\mu\) are Dirac matrices connecting the spinor representation to the vector representation of the Lorentz group.

The symbol \(G^a_{\mu\nu}\) represents the gauge invariant gluon field strength tensor, analogous to the electromagnetic field strength tensor \(F_{\mu\nu}\) in quantum electrodynamics. It is given by:[12]

\[
G^a_{\mu\nu} = \partial_\mu \mathcal{A}^a_\nu - \partial_\nu \mathcal{A}^a_\mu + g\,f^{abc}\,\mathcal{A}^b_\mu\,\mathcal{A}^c_\nu\,,
\]
where fabc are the structure constants of SU(3). Note that the rules for raising or lowering the a, b, or c indices are trivial (the metric is (+, …, +)), so that \(f^{abc} = f_{abc}\), whereas for the μ or ν indices one has the non-trivial relativistic rules corresponding, e.g., to the metric signature (+ − − −).
The constants m and g control the quark mass and coupling constants of the theory, subject to renormalization in the full quantum theory.
An important theoretical notion concerning the final term of the above Lagrangian is the Wilson loop variable. This loop variable plays an important role in discretized forms of the QCD (see lattice QCD), and more generally, it distinguishes confined and deconfined states of a gauge theory. It was introduced by the Nobel prize winner Kenneth G. Wilson and is treated in a separate article.
Fields
Quarks are massive spin-1/2 fermions which carry a color charge whose gauging is the content of QCD. Quarks are represented by Dirac fields in the fundamental representation 3 of the gauge group SU(3). They also carry electric charge (either −1/3 or 2/3) and participate in weak interactions as part of weak isospin doublets. They carry global quantum numbers including the baryon number, which is 1/3 for each quark, hypercharge and one of the flavor quantum numbers.
Gluons are spin-1 bosons which also carry color charges, since they lie in the adjoint representation 8 of SU(3). They have no electric charge, do not participate in the weak interactions, and have no flavor. They lie in the singlet representation 1 of all these symmetry groups.
Every quark has its own antiquark. The charge of each antiquark is exactly the opposite of the corresponding quark.
Dynamics
According to the rules of quantum field theory, and the associated Feynman diagrams, the above theory gives rise to three basic interactions: a quark may emit (or absorb) a gluon, a gluon may emit (or absorb) a gluon, and two gluons may directly interact. This contrasts with QED, in which only the first kind of interaction occurs, since photons have no charge. Diagrams involving Faddeev–Popov ghosts must be considered too (except in the unitarity gauge).

Area law and confinement

Detailed computations with the above-mentioned Lagrangian[13] show that the effective potential between a quark and its anti-quark in a meson contains a term \(\propto r\), which represents some kind of "stiffness" of the interaction between the particle and its anti-particle at large distances, similar to the entropic elasticity of a rubber band (see below). This leads to confinement[14] of the quarks to the interior of hadrons, i.e. mesons and nucleons, with typical radii Rc, corresponding to former "bag models" of the hadrons[15]. The order of magnitude of the "bag radius" is 1 fm (= 10−15 m). Moreover, the above-mentioned stiffness is quantitatively related to the so-called "area law" behaviour of the expectation value of the Wilson loop product PW of the ordered coupling constants around a closed loop W; i.e. \(\langle P_W \rangle\) is proportional to the area enclosed by the loop. For this behaviour the non-abelian behaviour of the gauge group is essential.
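For a feel of that linear "stiffness" term, the following minimal Python sketch evaluates the Cornell potential, a standard phenomenological quark–antiquark potential combining a Coulomb-like short-distance part with a linearly confining part; both the functional form and the parameter values are textbook illustrations, not results computed in this article.

def cornell_potential(r, alpha=0.3, sigma=0.9):
    """V(r) in GeV for separation r in fm (parameters are illustrative)."""
    hbarc = 0.1973  # GeV*fm, converts 1/r from 1/fm to GeV
    return -(4.0 / 3.0) * alpha * hbarc / r + sigma * r

# The linear term dominates at large r, so the energy grows without bound:
for r in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"V({r} fm) = {cornell_potential(r):+.2f} GeV")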
Methods
Further analysis of the content of the theory is complicated. Various techniques have been developed to work with QCD. Some of them are discussed briefly below.

Perturbative QCD

This approach is based on asymptotic freedom, which allows perturbation theory to be used accurately in experiments performed at very high energies. Although limited in scope, this approach has resulted in the most precise tests of QCD to date.

Lattice QCD

[Figure: a quark and an antiquark (red color) are glued together (green color) to form a meson – result of a lattice QCD simulation by M. Cardoso et al.[16]]
Among non-perturbative approaches to QCD, the most well established one is lattice QCD. This approach uses a discrete set of spacetime points (called the lattice) to reduce the analytically intractable path integrals of the continuum theory to a very difficult numerical computation which is then carried out on supercomputers like the QCDOC which was constructed for precisely this purpose. While it is a slow and resource-intensive approach, it has wide applicability, giving insight into parts of the theory inaccessible by other means, in particular into the explicit forces acting between quarks and antiquarks in a meson. However, the numerical sign problem makes it difficult to use lattice methods to study QCD at high density and low temperature (e.g. nuclear matter or the interior of neutron stars).
1/N expansion

A well-known approximation scheme, the 1/N expansion, starts from the premise that the number of colors is infinite, and makes a series of corrections to account for the fact that it is not. Until now, it has been the source of qualitative insight rather than a method for quantitative predictions. Modern variants include the AdS/CFT approach.

Effective theories

For specific problems, effective theories may be written down which give qualitatively correct results in certain limits. In the best of cases, these may then be obtained as systematic expansions in some parameter of the QCD Lagrangian. One such effective field theory is chiral perturbation theory or ChiPT, which is the QCD effective theory at low energies. More precisely, it is a low-energy expansion based on the spontaneous chiral symmetry breaking of QCD, which is an exact symmetry when quark masses are equal to zero, but for the u, d and s quarks, which have small mass, it is still a good approximate symmetry. Depending on the number of quarks which are treated as light, one uses either SU(2) ChiPT or SU(3) ChiPT. Other effective theories are heavy quark effective theory (which expands around heavy quark mass near infinity), and soft-collinear effective theory (which expands around large ratios of energy scales). In addition to effective theories, models like the Nambu–Jona-Lasinio model and the chiral model are often used when discussing general features.
QCD sum rules
Based on an Operator product expansion one can derive sets of relations that connect different observables with each other.
Nambu–Jona-Lasinio model
In one of his recent works, Kei-Ichi Kondo derived, as a low-energy limit of QCD, a theory linked to the Nambu–Jona-Lasinio model, since it is basically a particular non-local version of the Polyakov–Nambu–Jona-Lasinio model.[17] The latter, in its local version, is nothing but the Nambu–Jona-Lasinio model in which one has included the Polyakov loop effect in order to describe a ‘certain confinement’.

The Nambu–Jona-Lasinio model in itself is, among many other things, used because it is a ‘relatively simple’ model of chiral symmetry breaking, a phenomenon present under certain conditions (the chiral limit, i.e. massless fermions) in QCD itself. In this model, however, there is no confinement. In particular, the energy of an isolated quark in the physical vacuum turns out to be well defined and finite.

Experimental tests
The notion of quark flavors was prompted by the necessity of explaining the properties of hadrons during the development of the quark model. The notion of color was necessitated by the puzzle of the Δ++ . This has been dealt with in the section on the history of QCD.
The first evidence for quarks as real constituent elements of hadrons was obtained in deep inelastic scattering experiments at SLAC. The first evidence for gluons came in three jet events at PETRA.
Several good quantitative tests of perturbative QCD exist:
The running of the QCD coupling as deduced from many observations
Scaling violation in polarized and unpolarized deep inelastic scattering
Vector boson production at colliders (this includes the Drell–Yan process)
Direct photons produced in hadronic collisions
Jet cross sections in colliders
Event shape observables at the LEP
Heavy-quark production in colliders
Quantitative tests of non-perturbative QCD are fewer, because the predictions are harder to make. The best is probably the running of the QCD coupling as probed through lattice computations of heavy-quarkonium spectra. There is a recent claim about the mass of the heavy meson Bc. Other non-perturbative tests are currently at the level of 5% at best. Continuing work on masses and form factors of hadrons and their weak matrix elements offers promising candidates for future quantitative tests. The whole subject of quark matter and the quark–gluon plasma is a non-perturbative test bed for QCD which still remains to be properly exploited.
One qualitative prediction of QCD is that there exist composite particles made solely of gluons called glueballs that have not yet been definitively observed experimentally. A definitive observation of a glueball with the properties predicted by QCD would strongly confirm the theory. In principle, if glueballs could be definitively ruled out, this would be a serious experimental blow to QCD. But, as of 2013, scientists are unable to confirm or deny the existence of glueballs definitively, despite the fact that particle accelerators have sufficient energy to generate them.
Cross-relations to solid state physics
There are unexpected cross-relations to solid state physics. For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses,[18] which are systems with the usual spin degrees of freedom \(s_i = \pm 1\) for i = 1, …, N, with the special fixed "random" couplings \(J_{i,k} = \epsilon_i\,J_0\,\epsilon_k\). Here the \(\epsilon_i\) and \(\epsilon_k\) quantities can independently and "randomly" take the values ±1, which corresponds to a most-simple gauge transformation \(\left(s_i \to s_i\,\epsilon_i,\;\; J_{i,k} \to \epsilon_i\,J_{i,k}\,\epsilon_k,\;\; s_k \to s_k\,\epsilon_k\right)\). This means that thermodynamic expectation values of measurable quantities, e.g. of the energy \(\mathcal{H} := -\sum s_i\,J_{i,k}\,s_k\), are invariant.
However, here the coupling degrees of freedom \(J_{i,k}\), which in QCD correspond to the gluons, are "frozen" to fixed values (quenching). In contrast, in QCD they "fluctuate" (annealing), and through the large number of gauge degrees of freedom the entropy plays an important role (see below).
For positive J0 the thermodynamics of the Mattis spin glass corresponds in fact simply to a "ferromagnet in disguise", just because these systems have no "frustration" at all. This term is a basic measure in spin glass theory.[19] Quantitatively it is identical with the loop product \(P_W := J_{i,k}\,J_{k,l}\cdots J_{n,m}\,J_{m,i}\) along a closed loop W. However, for a Mattis spin glass – in contrast to "genuine" spin glasses – the quantity PW never becomes negative.
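The frustration-free property is easy to verify numerically. A minimal Python sketch (an illustration added here, not from the source) builds Mattis couplings \(J_{i,k} = \epsilon_i\,J_0\,\epsilon_k\) and checks that the loop product around closed loops never comes out negative, since every \(\epsilon\) enters the product twice.

import itertools
import random

random.seed(0)
N, J0 = 6, 1.0
eps = [random.choice([-1, 1]) for _ in range(N)]
J = [[eps[i] * J0 * eps[k] for k in range(N)] for i in range(N)]

def loop_product(sites):
    """Product of couplings J along a closed loop of distinct sites."""
    p = 1.0
    for a, b in zip(sites, sites[1:] + sites[:1]):
        p *= J[a][b]
    return p

for n in (3, 4):
    prods = [loop_product(list(c)) for c in itertools.permutations(range(N), n)]
    print(f"{n}-site loops: min P_W = {min(prods):+.1f}")  # always +1.0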
The basic notion of "frustration" in the spin glass is actually similar to the Wilson loop quantity of QCD. The only difference is that in QCD one is dealing with SU(3) matrices, and with a "fluctuating" quantity. Energetically, a perfect absence of frustration should be unfavorable and atypical for a spin glass, which means that one should add the loop product to the Hamiltonian as some kind of "punishment" term. In QCD the Wilson loop is essential for the Lagrangian right away.
The relation between QCD and "disordered magnetic systems" (the spin glasses belong to them) was additionally stressed in a paper by Fradkin, Huberman and Shenker, which also stresses the notion of duality.
A further analogy consists in the already mentioned similarity to polymer physics, where, analogously to Wilson loops, so-called "entangled nets" appear, which are important for the formation of the entropy-elasticity (force proportional to the length) of a rubber band. The non-abelian character of SU(3) corresponds thereby to the non-trivial "chemical links", which glue different loop segments together, and "asymptotic freedom" means in the polymer analogy simply the fact that in the short-wave limit, i.e. for \(0 \leftarrow \lambda_w \ll R_c\) (where Rc is a characteristic correlation length for the glued loops, corresponding to the above-mentioned "bag radius", while λw is the wavelength of an excitation), any non-trivial correlation vanishes totally, as if the system had crystallized.
There is also a correspondence between confinement in QCD – the fact that the color field is only different from zero in the interior of hadrons – and the behaviour of the usual magnetic field in the theory of type-II superconductors: there the magnetism is confined to the interior of the Abrikosov flux-line lattice, i.e., the London penetration depth λ of that theory is analogous to the confinement radius Rc of quantum chromodynamics. Mathematically, this correspondence is supported by the second term, \(\propto g\,G^a_\mu\,\bar{\psi}_i \gamma^\mu T^a_{ij} \psi_j\), on the r.h.s. of the Lagrangian.
SAKURAI Overviews, Standard Model, its field theoretical formulation, strong interactions, quarks and gluons, hadrons, confinement, QCD matter, or quark–gluon plasma. For details, Gauge theory, quantization procedure including BRST quantization and Faddeev–Popov ghosts. A more general category is quantum field theory. For techniques, Lattice QCD, 1/N expansion, perturbative QCD, Soft-collinear effective theory, heavy quark effective theory, chiral models, and the Nambu and Jona-Lasinio model. For experiments, Quark search experiments, deep inelastic scattering, jet physics, quark–gluon plasma. For boundaries, Symmetry in quantum mechanics
Posted by tom sakurai on 2016-12-24 08:42:32
The post SAKURAI – An Emergent Behavior of Collective Quantum Systems. appeared first on Good Info.
0 notes
smartphone-science · 6 years ago
Link
Scientists at the National Institute of Standards and Technology (NIST) have discovered a superconductor that could prove useful for developing quantum computers by overcoming one of the main barriers to effective quantum logic circuits. The paper has been published in the journal Science.
Recently unearthed properties of the compound uranium ditelluride, or UTe2, show that it could be highly resilient to one of the nemeses of quantum computer development – the difficulty of making a quantum computer’s memory storage switches, known as qubits, function long enough to complete a calculation before losing the delicate physical relationship that allows them to operate as a group. This relationship, known as quantum coherence, is difficult to sustain because of disturbances from the surrounding world.
It is a rare superconducting material because of its peculiar and strong resistance to magnetic fields, and it offers benefits for qubit design, chiefly resistance to the errors that can easily creep into quantum calculations. The research team’s Nick Butch said that UTe2’s unique properties could make it alluring to the emerging quantum computer sector.
Butch, a physicist at the NIST Center for Neutron Research (NCNR), said that uranium ditelluride could be the silicon of the quantum information era, used to build the qubits of an efficient quantum computer.
The research team’s results, with contributions from scientists at Ames Laboratory and the University of Maryland, explain UTe2’s exceptional characteristics, which are interesting from the viewpoint of both technical application and fundamental science.
Electrons that conduct electricity travel as separate particles in copper wire or other ordinary conductors, but in superconductors they form Cooper pairs, and the electromagnetic interactions that produce these pairings are responsible for the material’s superconductivity. BCS theory, which explains this type of superconductivity, is named after the three scientists who revealed the pairings and won the Nobel Prize for doing so.
The property of electrons that is especially important to Cooper pairing is the quantum “spin”, which makes electrons act as if they have a little bar magnet running through them. In the majority of superconductors, the paired electrons have their quantum spins oriented one up and the other down; such opposed pairing is called a spin singlet.
The Cooper pairs in UTe2 can have their spins oriented in one of three combinations, with the spins parallel rather than opposed – spin triplets – making UTe2 a nonconformist among the very few known spin-triplet superconductors. Most spin-triplet superconductors are expected to be “topological”, an extremely useful quality in which the superconductivity occurs on the material’s surface and persists even in the presence of outside shocks.
These parallel spin pairs could help a computer keep operating, because they cannot spontaneously collapse due to quantum fluctuations. Superconductors have long been seen as a promising basis for quantum computer elements, and recent commercial advances in quantum computer development have employed circuits made from superconductors. Thanks to a topological superconductor’s properties, such a machine would not need the error correction that ordinary qubits require to cope with disturbances from their surroundings.
Butch said that topological superconductors are an alternative path to quantum computing because their long lifespan would give error-free qubits, protected from the environment.
Researchers stumbled upon UTe2 while exploring uranium-based magnets, whose electronic properties can be adjusted as desired by changing their chemistry, pressure or magnetic field – a useful feature for customizable materials. (The material consists of slightly radioactive “depleted uranium”.)
UTe2 was first synthesized back in the 1970s, but recently, while producing some UTe2 in the course of synthesizing related materials, the researchers tested it at lower temperatures to see whether anything might have been overlooked – and noticed that they had something very special.
The NIST team, at both the NCNR and the University of Maryland, began studying UTe2 with specialized tools and noticed that it became superconducting at low temperatures (below −271.5 °C, or 1.6 kelvin), with properties resembling those of rare ferromagnetic superconductors, which act like low-temperature permanent magnets. Yet, strangely, UTe2 is itself not ferromagnetic, which makes it fundamentally new.
UTe2 can resist fields as high as 35 tesla – 3,500 times as strong as a typical refrigerator magnet, and far more than the low-temperature topological superconductors can withstand.
This extraordinary resistance to strong magnetic fields means it is a spin-triplet superconductor, and likely a topological superconductor as well, and it will help researchers study the nature of UTe2 and of superconductivity itself. A main purpose of this research is to understand superconductivity, to learn where to look for undiscovered superconducting materials – a difficult task right now – and to understand what stabilizes these parallel-spin superconductors.
Journal Reference: Science journal
The post Researchers discover superconductor that could enhance quantum computer development appeared first on ScienceHook.
via Science Blogs
0 notes
Text
Original Essay 1
Electric Charges and Their Interactions
Electric charges are fundamental properties of matter. Interactions between charges follow the basic rules of charges: opposite charges attract, and like charges repel. There are only two types of charge, positive and negative, represented by protons and electrons. Electric charges influence the space around them through transfer, which can be direct or indirect. Indirect transfer happens through induction and results in opposite charges. Direct transfer occurs through friction and conduction. Friction is when two surfaces rub against each other, and the applied surface attracts opposite charges from the other surface. The other form of direct transfer, conduction, occurs when two surfaces touch.
Electric potential is used to express the effect of the electric field of a source in terms of location within that field. It is expressed as potential energy divided by the charge q. Charges establish electric potential when they move, and electric current is the flow of charge caused by electric potential. In an electric circuit, there is a continuous conducting path connected between the terminals of the battery. In order for charges to move in a circuit, there must be a voltage difference from high to low and a continuous path. In a series connection, resistors are connected head to tail, and the current can only take the one path connecting them; they can be reduced to a single equivalent resistor by adding their resistances. In a parallel connection, by contrast, the resistors' heads are directly connected to each other and their tails are directly connected to each other; they can be reduced to one resistor using the equivalent resistance equation for resistors in parallel.
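A minimal code sketch of the two reduction rules just described (an illustration added here, not part of the original essay):

def series(*resistors):
    """Head-to-tail resistors: resistances simply add."""
    return sum(resistors)

def parallel(*resistors):
    """Head-to-head, tail-to-tail resistors: reciprocals add."""
    return 1.0 / sum(1.0 / r for r in resistors)

print(series(100, 220, 330))          # 650 ohms
print(parallel(100, 100))             # 50.0 ohms
print(parallel(100, series(50, 50)))  # 50.0 ohms: the rules can be nested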
In an electric circuit, voltage plays an important role because without voltage to push them, charges will not flow. What matters is not just the presence of voltage, but the difference in voltage between two points, which causes the flow. A resistor is a device that controls the flow of electrons, and resistance is measured in ohms. Ohm's law states that voltage equals the current times the resistance. Resistivity is a property of the material itself, and its value changes with temperature.
Magnets are objects that have two opposing poles, a north and a south, and that create magnetic fields around them. In magnets, like poles repel each other, and unlike poles attract each other. The role that charges play in magnets is that they are the determining factors of the poles: if all positive charges are on one side and all negative charges on the other, then the positive end of the magnet will be the positive pole and the negative end will be the negative pole. A magnetic field is an aura that emanates through the space surrounding a magnetic object. Magnetic fields are a vector quantity, and their direction is given by the north pole of a compass needle. Magnetic fields interact with charges by setting the direction in which the charges can travel, and they are defined in terms of the magnetic force exerted on a test charge. Magnetic flux is a measurement of the total magnetic field passing through a given area, and using it for detection is one of the most popular methods of pipeline inspection: a nondestructive testing technique that uses magnetically sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. Magnetic force is what a charged particle experiences when moving through a magnetic field. It is at its maximum when the charge moves perpendicularly to the magnetic field lines, and zero when the charge moves along the field lines.
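Because the magnetic force on a moving charge is a cross product, \(F = q\,\vec v \times \vec B\), the perpendicular-maximum/parallel-zero behavior just described is easy to demonstrate; a minimal sketch, added here for illustration:

import numpy as np

def magnetic_force(q, v, B):
    """Lorentz magnetic force F = q v x B (SI units)."""
    return q * np.cross(v, B)

q = 1.6e-19                          # coulombs (a proton, say)
B = np.array([0.0, 0.0, 1.0])        # 1 T field along z
v_perp = np.array([1e5, 0.0, 0.0])   # velocity perpendicular to B
v_par = np.array([0.0, 0.0, 1e5])    # velocity parallel to B

print(magnetic_force(q, v_perp, B))  # maximum force, here along -y
print(magnetic_force(q, v_par, B))   # zero vector: no force along the field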
0 notes
medalmonkey · 4 years ago
Text
2021 POC 12 Unit 01 Electrostatics
1)
Once, medal monkey saw a bird sitting over a buffalo. He was expecting that soon the buffalo will give up. It never happened.
Calculate the ratio of electrostatic force with gravitational force.
Put two charges side by side. There are two kinds of forces- electrostatic force and gravitational force. The constant k in case of electrostatic force has value \(9\times 10^9\), while the constant G in case of gravitational force has value \(6.67\times 10^{-11}\). Calculate the ratio \(\frac{10^9}{10^{-11}}\). It is equal to \(10^{20}\). It means that electrostatic forces are \(10^{20}\) times stronger than gravitational forces.
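A minimal check of this arithmetic in Python, together with the full electron–proton ratio that also includes the charges and masses (standard constants, added here for comparison):

k = 9e9        # N m^2 / C^2, Coulomb constant
G = 6.67e-11   # N m^2 / kg^2, gravitational constant
print(f"k / G = {k / G:.2e}")  # ~1.3e20, the 10^20 of the text

e, m_e, m_p = 1.6e-19, 9.11e-31, 1.67e-27   # charge and masses, SI
ratio = (k * e**2) / (G * m_e * m_p)        # electrostatic / gravitational
print(f"electron-proton force ratio = {ratio:.1e}")  # ~2.3e39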
2)
Write two basic properties of charge.
It is not a basic property that like charges repel and unlike charges attract. The basic properties are-
o Charge is quantized.
o Charge is additive.
o Charge is conserved.
3)
What is the energy equivalence of charge?
There is no charge and energy equivalence.
4)
You can charge an object by two processes- charging by friction and charging by induction. In charging by friction, electrons leave one surface and go to other surface.
Once, medal monkey was rubbing a corn with a metal brush. He was expecting that bristles of iron brush would come out. It never happened.
What is the role of work function in charging by friction?
Rub two surfaces over each other. Heat is produced in this process which is equally available to both the surfaces. The surface with lower work function will emit its electrons first and will lose them. These electrons will go to the other surface of higher work function.
5)
What is the fundamental unit of charge?
Coulomb is not the fundamental unit of charge. It is a derived unit. The fundamental unit is the ampere-second.
6)
Once, medal monkey was watching two donkeys fighting. The donkeys were pushing each other. He found that they pushed hard when they were big. They were not able to push hard in mud.
Force between two charges is calculated by Coulomb’s law.
\(F=\frac1{4{\mathrm{πε}}_0}\frac{q_1q_2}{r^2}\)
This force depends upon the following factors-
o Product of the charges
o Distance between the charges
o Permittivity of medium in which charges are placed
Write down Coulomb’s law in vector form.
Obviously the above-mentioned form is not the vector form, since it does not include any sense of direction. You will have to write the following equations-
\({\overrightarrow F}_{12}=\frac1{4{\mathrm{πε}}_0}\frac{q_1q_2}{r^2}\widehat{r_{21}}\)
\({\overrightarrow F}_{21}=\frac1{4{\mathrm{πε}}_0}\frac{q_1q_2}{r^2}\widehat{r_{12}}\)
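A minimal numerical sketch of this vector form (an illustration added here, using SI units and the convention above that \({\overrightarrow F}_{12}\) is the force on charge 1 due to charge 2):

import numpy as np

K = 1 / (4 * np.pi * 8.854e-12)  # Coulomb constant, N m^2 / C^2

def coulomb_force(q1, r1, q2, r2):
    """Force on q1 at r1 due to q2 at r2, as a vector in newtons."""
    r_21 = np.asarray(r1, float) - np.asarray(r2, float)  # from 2 to 1
    return K * q1 * q2 * r_21 / np.linalg.norm(r_21) ** 3

F12 = coulomb_force(1e-6, [1.0, 0.0, 0.0], 1e-6, [0.0, 0.0, 0.0])
print(F12)  # like charges: force on q1 points away from q2 (+x direction)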
7)
If the charged particles are placed in air, or in any other medium, will the forces of interaction be the same? The answer is no. The forces of interaction do depend on the medium in which the particles are placed.
Calculate the change in force when a system of charges is shifted to a gas chamber.
Force in any medium other than vacuum/air is always smaller. The dependence on permittivity is of an inverse nature.
Force between two given charges held at a given distance apart in water (k=81) is only 1/81 of the force between them in air.
8)
Calculate absolute permittivity of water. Relative permittivity of water is 81.
Relative permittivity is defined as-
\(relative\;permitivity\;=\;\frac{absolute\;permittivity}{permittivity\;of\;free\;space}\)
The point of confusion/clarity in this relation is whether it is
\(\frac{absolute\;permittivity}{permittivity\;of\;free\;space}\) or \(\frac{permittivity\;of\;free\;space}{absolute\;permittivity}\)
Remember that relative permittivity is always more than one. So to obtain a value greater than one, you have to divide a larger quantity (the absolute permittivity of any medium) by a smaller quantity (the permittivity of free space).
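As a worked example (added here, using the standard value \(\varepsilon_0 \approx 8.854\times10^{-12}\;\mathrm{F/m}\)), the absolute permittivity of water is

\(\varepsilon = \varepsilon_r\,\varepsilon_0 = 81 \times 8.854\times10^{-12}\;\mathrm{F/m} \approx 7.2\times10^{-10}\;\mathrm{F/m},\)

which is indeed larger than \(\varepsilon_0\), consistent with a relative permittivity greater than one.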
9)
Other name for relative permittivity is dielectric constant.
10)
Dielectric constant does depend upon temperature. The dielectric constant of a medium usually decreases with rise in temperature. For example, for water at 20 °C, K is 80, and for water at 25 °C, K is 78.5.
11)
Derive an expression for electric field due to a point charge.
Once, medal monkey placed a \(deepak\) at a place. He observed the aura in its environment. The light was spread everywhere in its near region.
Same as this light is spread in the environment of a \(deepak\); electric field is spread in the environment of a charge.
Electric field at a point is defined as the force experienced by a unit test charge at that point.
\(E=\frac Fq\)
You calculate electric field at a point just by dividing force at that point by the magnitude of the charge which experiences that force.
12)
Once, medal monkey was watching a child sent for counting \(gulaabjamum\) in the kitchen. He found that you are wise if you send a little child.
Why is the test charge taken infinitely small?
The test charge is taken to be infinitesimally small so that it disturbs the electric field of the source charge as little as possible.
13)
Once, medal monkey observed a woodcutter. Woodcutter used a zigzag cutter to cut a zigzag wood. And the result was a straight cut piece of wood.
Plot a graph between \(E\) and \(\frac1{r^2}\) for a point charge.
Graph between \(E\) and \(\frac1{r^2}\) is a straight line-
[Image: straight-line graph of \(E\) versus \(\frac1{r^2}\) for a point charge]
What would be your reply for following-?
Plot a graph between \(E\) and \(r\) for a point charge.
And what would be your reply for following-?
Plot a graph between \(E\) and \(\frac1r\).
14)
Derive an expression for the electric field due to a uniformly charged ring.
A charge is distributed uniformly over a ring of radius \(a\) . You have to obtain an expression for the electric field intensity \(E\) at a point on the axis of the ring.
When you calculate electric field intensity at a point on the axis of a uniformly charged ring, you observe that the resultant electric field intensity is \(\textstyle\sum_{}dE\cos\left(\theta\right)\).
How does a circular loop of charge behave when the observation point is at very large distance from the loop, compared to the radius of the loop?
A circular loop of charge behaves as a point charge when the observation point is at very large distance from the loop, compared to the radius of the loop.
15)
Once, medal monkey was watching a child holding pea nuts in his hand inside a beaker. Child was not able to either leave the pea nuts or to take them away.
Depict graphically the variation of electric field intensity due to a uniformly charged ring.
Electric field intensity due to a uniformly charged ring is depicted in the following figure-
[Image: variation of electric field intensity along the axis of a uniformly charged ring]
At what distance the electric field intensity due to a uniformly charged ring is maximum from its center on either side on the axis of the ring?
Electric field intensity due to a uniformly charged ring is maximum at a distance \(\frac r{\sqrt2}\) from its center on either side on the axis of the ring.
\(\frac r{\sqrt2}\) seems to be a peculiar value, but it is not: it is just about 70% of \(r\). What is \(r\) here? It is the radius of the ring, so in a sense how far (70% of \(r\)) from the center the electric field reaches its maximum value depends upon the geometry of the ring. This is very natural, and it would have been strange if it had been otherwise.
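This can be checked numerically with the standard textbook expression for the on-axis field of a ring, \(E(x) \propto \frac{x}{(x^2+a^2)^{3/2}}\) (quoted here, not derived in the post); a minimal sketch:

import numpy as np

a = 1.0                              # ring radius, arbitrary units
x = np.linspace(1e-4, 5 * a, 200000)
E = x / (x**2 + a**2) ** 1.5         # k and Q set to 1; only the shape matters

x_max = x[np.argmax(E)]
print(x_max, a / np.sqrt(2))  # both ~0.7071: the maximum sits at a/sqrt(2)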
16)
Once, medal monkey observed a child. The child was standing between two parallel mirrors. The child was laughing. And his reflection in the mirror was laughing too. And the reflection of the reflection was laughing too. And the reflection of the reflection of the reflection was laughing too.
What is the principle of production of electromagnetic waves?
When a charged particle is accelerated, its motion is communicated to other charged particles in its neighborhood in the form of a disturbance called electromagnetic wave travelling in vacuum with the speed of light. Thus an electric field may be treated as a source of energy which is transported from one place to another in the electric field with the help of electromagnetic waves.
17)
Once, medal monkey observed that every tree grows perpendicular to the gravitational field.
Why is the electrostatic field at the surface of a charged conductor perpendicular to surface at every point on the surface?
The existence of a horizontal (tangential) component of the electric field would give rise to the possibility of surface currents, which essentially are not there. So the possibility of a horizontal component of the electric field is ruled out.
18)
Once, medal monkey was watching a child coming home from school in the noon under the sun. Child was sweating and experiencing immense force (due to sunlight) to reach home as early as possible.
Calculate force on a charged particle when placed in an electric field E.
You can calculate it as-
\(F=qE\)
19)
Once, medal monkey was sitting in a \(satsang\). They were saying that a gentle man has a straight path.
Plot electric field lines corresponding to various system of charges.
You first of all decide the direction of field lines when plotting them. Direction of electric field is the direction of movement of unit positive test charge.
[Image: electric field line patterns for various systems of charges]
20)
Whether electric field lines are continuous curves?
You have to understand that electric field lines are continuous but not closed. These are two different things. They start from a positively charged body and end at a negatively charged body. No electric lines of force exist inside a charged body. Thus, electrostatic field lines are continuous but do not form closed loops.
What will be your answer to the following question-
Whether electric field lines are closed curves?
21)
Why electric field lines never cross each other?
The tangent to an electric field line tells you the direction of the electric field at a point. If two electric field lines existed at one point, they would give you two directions for the field there, which creates a contradiction.
22)
Define dipole moment and its direction?
Product of either charge of the electric dipole and the distance between the two charges is called dipole moment.
The point of confusion/clarity lies in the direction of the dipole moment.
Remember the following diagram-
[Image: a water molecule, with the dipole moment \(p\) pointing from the negative oxygen side to the positive hydrogen side]
The above diagram shows a molecule of water with three nuclei represented by dots. The electric dipole moment \(p\) points from the negative oxygen side to the positive hydrogen side of the molecule.
23)
Dipole moment of a quadrupole is zero.
24)
In which orientation, a dipole placed in a uniform electric field is in (i) stable equilibrium (ii) unstable equilibrium?
You place an electric dipole in a \(uniform\) electric field; it (the dipole) experiences a torque. You calculate the torque by multiplying either force by the perpendicular distance between the forces, which is \(2a\sin\left(\theta\right)\). There are two positions in which the dipole does not experience any torque: one at 0 degrees and the other at 180 degrees. Both these positions are called equilibrium positions. The equilibrium corresponding to the first is \(stable\) equilibrium, while the one corresponding to the second is \(unstable\) equilibrium.
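In terms of the standard formulas (a brief addition for clarity): the torque has magnitude \(\tau = pE\sin\left(\theta\right)\) and the potential energy is \(U(\theta) = -pE\cos\left(\theta\right)\), so \(\theta = 0°\) is a minimum of \(U\) (stable equilibrium) while \(\theta = 180°\) is a maximum (unstable equilibrium).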
25)
1 note · View note
samanthasroberts · 6 years ago
Text
The Gear That Could Solve the Next Big Wildfire Whodunit
To date, California's Ranch fire—the (much) larger of the two wildfires that make up the Mendocino Complex fire—has consumed more than 360,000 acres of Northern California, making it the largest conflagration in state history. It was probably wind that taught the nascent Ranch fire to walk and search for food, to glut itself on timber and brush and grass, to race up hills and away from its place of birth. But the crucial details of those beginnings remain unresolved.
Humans start an estimated 84 percent of wildfires, and determining where and how the worst ones originate is a crucial step in assigning blame. That's where experts like Paul Steensland come in. A wildfire investigator for going on 50 years, he was the US Forest Service's premier fire sleuth before he retired in 2005 to start his own consultancy. These days Steensland works on a contract basis and trains others to retrace a fire's path of destruction to its place of origin and sift through the ashes—sometimes literally—in search of its cause.
As another wildfire investigator with 26 years' experience told me: "Paul is the one. He is the master." Here, according to him, are the most essential pieces of equipment to bring when analyzing an inferno like California's Ranch fire.
Camera
Fire investigators show their work. "You need to be able to explain exactly how you narrowed your search from a 10,000 acre area down to the six-inch-by-six-inch square where you found the match," says Steenland, who is often called on to testify about his findings. Which is why he says a camera is the single most important piece of equipment he brings into the field.
In the case of fires, the primary form of evidence are "indicators"—physical objects carrying traces of an inferno's spread. A skilled investigator can use them to determine which way a fire was traveling and the direction from which it came, like a hunter backtracking the prints of quarry that just happens to be 1400° Fahrenheit.
An example of foliage freeze in the needles of a pine tree. Fire investigators use indicators like this to map a fire’s spread and retrace its path to its point of origin.
National Wildlife Coordinating Group
So-called "protection indicators" form when part of an object is shielded from the heat of advancing flames. The result is an object with more damage on one of its sides than the other. Another telltale indicator is "foliage freeze." Like a strand of blow-dried hair, leaves and stems and pine needles can become pliant in the presence of heat and bend in the direction of prevailing winds, only to remain pointed, fingerlike, in the direction of an inferno's travel as they cool and stiffen. A camera allows investigators like Steensland to catalogue these and other indicators as they map and retrace a fire's spread.
Color-Coded Surveyor Flags
Fires, in their early stages, tend to burn in a V shape. Leading the charge is what fire investigators call the advancing area. It burns hotter and the more intensely than any other portion of the fire. The apex of the V, also known as the heel, burns slowest and coolest. The flanks, which run outward from the fire's sides at angles between 45 and 90 degrees, burn at a rate and temperature somewhere in the middle.
Wildfire investigators use color-coded surveyor flags to mark directional fire indicators: Red flags correspond to the advancing area, yellow to the flanks, and blue to the heel. Steensland developed the system in the early aughts as a training tool, but it turned out to be a great way to visualize a fire's spread on the fly. Now they're an essential feature of wildfire investigation kits. One by one the flags go up, and pretty soon, generally in the direction of the blue flags near the base of the V, you begin to develop an idea of where the fire began (investigators call this the ignition area), and what it looked like as it moved across the landscape.
Evidence Tents
The yucca base at the center of this photo is an example of a protection fire indicator. It's been labeled with a red flag to indicate its presence in an advancing fire area. The yellow evidence tents denote that the indicator was photographed and its position measured. The red arrow points in the direction of fire progression at that point.
Paul Steensland
This LIDAR map depicts the indicators that Steensland and his team flagged in the General Origin Area, or GOA, of the Oil Creek fire, a wildfire that burned some 60,000 acres of northeast Wyoming in 2012. The color coding reflects how the fire spread, based on the evidence they found.
Paul Steensland
Investigators scrutinizing a large fire can find on the order of 1,000 indicators. Of those, a team might only mark a couple hundred. "And out of those, we typically only document 30, 40, or 50," Steensland says.
What indicators they document they'll mark with evidence tents—little yellow triangles marked with bold, black numbers. The point is to select a representative sampling of the indicators that they found. Documenting all of them would be overkill, but when you're presenting your evidence to a lay audience—a judge and jury, for example—it's important to have good visual examples of what you discovered in the field. "So you can say, yeah, we found and marked 50 charred rocks. We only photographed three of them, but this is what the other 47 looked like," Steensland says.
100-Foot Steel Tape Measure (x2)
Another purpose of documentation is reproducibility. That means photographs alone are insufficient; to ensure that anyone can visit the scene at a later date, check your work, and retrace your steps, you need to specify precisely where you found each piece of evidence.
Handheld GPS units can be off by more than 20 feet. Not good enough. Instead, Steensland recommends the right angle transect method: Run a 100-foot tape measure along a north-south or east-west axis, between two markers placed somewhere near a cluster of evidence. (Two pieces of rebar, painted orange, usually does the trick.) Then run a second tape measure from each piece of evidence back to the first measuring tape, such that the two tapes overlap at a 90 degree angle. Record the distances and bearings between the point of intersection, your rebar, and the pieces of evidence you're documenting.
Steensland says GPS units are typically good enough to get someone to your reference points, and might soon become accurate enough to abandon the transect method. But for now, evidence at most fires is still measured and documented with tape.
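As an illustration of the geometry involved, here is a minimal Python sketch (an addition, not from the article) that turns right-angle-transect readings into local map coordinates. It assumes the baseline tape starts at a known reference point and runs at a known compass bearing, and that each find is logged as a distance along the tape plus a perpendicular offset to the left or right.

import math

def evidence_position(ref_xy, bearing_deg, along_ft, offset_ft, side):
    """(x, y) in feet, x = east, y = north; bearing is clockwise from north."""
    b = math.radians(bearing_deg)
    ux, uy = math.sin(b), math.cos(b)    # unit vector along the baseline
    px, py = math.cos(b), -math.sin(b)   # unit vector 90 degrees to its right
    sign = 1.0 if side == "right" else -1.0
    return (ref_xy[0] + along_ft * ux + sign * offset_ft * px,
            ref_xy[1] + along_ft * uy + sign * offset_ft * py)

# A find 27.5 ft along a due-north baseline, 4.2 ft to its right:
print(evidence_position((0.0, 0.0), 0.0, 27.5, 4.2, "right"))  # (4.2, 27.5)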
Stakes and String
A fire investigation team uses stakes and string to perform a grid search.
National Wildlife Coordinating Group
Wildfires are common enough that investigators sometimes evaluate several per day. When you're working that quickly, there's no time to be meticulous. "Most fires are small, and there's never going to be civil collection for damages, so there's no incentive to determine who's responsible," Steensland says.
Stakes and string (A), a magnifying glass (B), and steel measuring tape (C) are just some of the essential fire investigation tools featured in this kit.
Deaton Investigations
But when a fire becomes big, expensive, or deadly, investigators will take time to plot out the suspected ignition area with stakes and string, dividing the ground into parallel lanes no more than a foot wide. When the fire is particularly bad—if multiple people have died, or the investigators suspect arson—they'll run additional string perpendicular to the search lanes to form a grid, just like an archaeological site. Dividing the ignition area into small squares serves to systematize the search and guide the eye, both of which are crucial for the steps that follow.
Magnifying Glass
The search of the ignition area proceeds in four stages. Stage one involves scouring the ground visually, unaided. For the second stage, investigators make another pass with the help of magnification. To keep his hands free, Steensland uses four-power reading glasses, but many investigators opt for a magnifying glass.
Patience and diligence are key. To quote the Guide to Wildland Fire Origin and Cause Determination, a 337-page field guide published by the National Wildfire Coordinating Group that Steensland helped develop, the cause of the fire is "usually very small, and black, and is located in the middle of a lot of other black material."
Magnet
After their visual search, investigators proceed to stage three: Passing over the ignition area with a magnet or metal detector. Steensland prefers to use a magnet, as many of the metal objects that start fires are ferrous. Brake-shoe particles. Splinters from a bulldozer's cleats. Fragments of a spinning saw head. Even the staple from a book of matches. A powerful magnet can attract all of them through several inches of ash and soil (an important consideration, Steensland says, since hot metal tends to burrow).
"Sometimes you find stuff," Steensland says. "Most of the time you don't. But by running over the area with a magnet, you can eliminate ferrous sources of ignition."
Evidence Collection Kit
Trowels and cans for collecting and storing evidence.
Deaton Investigations
Evidence storage containers and tags
Deaton Investigations
Once they've scoured the ignition area by eye and by magnet, investigators will proceed to stage four: Collecting debris and sifting it. "If there’s anything in there big enough to start a fire, you’ll typically catch it," Steensland says. "I once found a match by sifting—just the head and about a quarter inch of stem."
Investigators will deposit sifted evidence—and any other clues collected up to this point—into a variety of containers, from paper and plastic bags to old film canisters and pill bottles. These are part of an investigator's evidence collection kit. "Technically that kit contains more than one item, but I'm going to cheat here," says Steensland, who carries things like nitrile gloves, tweezers, a small trowel for exhuming fragile objects, and evidence tags to label what he finds. It could be as incriminating as a match or as incidental as an empty beer can ("it might have fingerprints," Steensland says); if it has evidentiary value, an investigator will bag it and tag it, taking care to note what the object is, who collected it, and where and when it was found.
Perhaps one of the investigators working the Ranch fire will bag a tiny match, or a shard of metal, that ignited California's biggest blaze ever.
0 notes
netmetic · 7 years ago
Text
Three Approaches to HPC and AI Convergence
Artificial Intelligence (AI) is by no means a new concept. The idea has been around since Alan Turing’s publication of “Computing Machinery and Intelligence” in the 1950s. But until recently, the computing power and the massive data sets needed to meaningfully run AI applications weren’t easily available. Now, thanks to developments in computing technology and the associated deluge of data, researchers in government, academia, and enterprise can access the compute performance they need to run AI applications that further drive their mission needs.
Many organizations that already rely on a high-performance computing (HPC) infrastructure to support applications like modeling and simulation are now looking for ways to benefit from AI capabilities. Given that AI and HPC both require strong compute and performance capabilities, existing HPC users who already have HPC-optimized hardware are well placed to start taking advantage of AI. They also have an opportunity to gain efficiency and cost benefits by converging the two applications on one infrastructure.
Approach 1: Using existing HPC infrastructure to run existing AI applications
This usually involves running AI applications developed on infrastructure-optimized AI frameworks, such as TensorFlow*, Caffe* and MXNet* on an HPC system. Companies looking to add AI capabilities to an existing HPC system based on Intel® Xeon® processors should ensure they use the latest optimized framework that best supports their planned use case.
An example of this type of use case can be seen in a recent collaboration project between Intel and Novartis, which used deep neural networks (DNN) to accelerate high content screening capabilities within image analysis. High content screening is fundamental in early drug discovery as it enables the analysis of microscopic images to see the effects of thousands of genetic or chemical treatments on different cell cultures. This is done through classical image-processing techniques to extract information on thousands of pre-defined features such as size, shape and texture. Applying deep learning to this process means the system is automatically learning the features that can distinguish one treatment from another.
By applying DNN acceleration techniques to process multiple images, the team cut the time to train image analysis models from 11 hours to 31 minutes – an improvement of more than 20 times [1]. This was done using a typical HPC infrastructure—eight CPU-based servers and a high-speed fabric interconnect—with an optimized TensorFlow machine learning framework [1]. This let them exploit data parallelism in deep learning training and make full use of the server platform's large memory support. As a result, they were able to scale to more than 120 3.9-megapixel images per second with 32 TensorFlow workers.
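The configuration footnote below mentions Horovod, the library commonly used to wire up this kind of multi-node data parallelism. As a rough sketch of the pattern (written against the modern TensorFlow/Keras API rather than the TensorFlow 1.7 used in the study, and with a toy model and synthetic data standing in for the Novartis workload):

```python
# Minimal sketch of multi-node data parallelism with Horovod + tf.keras.
# Toy model and synthetic data; placeholders, not the Novartis pipeline.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one Horovod worker per MPI process

# Synthetic stand-in for batches of high-content-screening images.
images = tf.random.uniform((256, 128, 128, 3))
labels = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shard(hvd.size(), hvd.rank())   # each worker reads its own slice
           .batch(32))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with worker count, then wrap the optimizer so
# that gradients are averaged across all workers on every step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

model.fit(
    dataset,
    epochs=2,
    # Start every worker from identical initial weights.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```

Launched with something like `horovodrun -np 32 python train.py`, this is the general shape of a 32-worker run of the kind described above.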
Approach 2: Adding AI to the modeling and simulation workflow to accelerate innovation and discovery
Organizations already using HPC to run modeling and simulation can introduce AI to their existing workflows to gain insights from their results faster. While existing visualization techniques enable scientists to derive insights from simulation results, some of this process can be automated using a continuous workflow that runs a simulation and modeling HPC workload and then feeds the data it creates into an AI workflow for improved insight.
Here is an example of how the Princeton University Neuroscience Institute used a similar approach, combining HPC, machine learning (ML), and AI to analyze data from functional magnetic resonance imaging (fMRI) scans and determine what's going on inside the brain. The study used an ML system that had been trained on real-life scans to create a model of the brain able to recognize different cognitive processes.
The model was then used to look at real-time fMRI brain images of patients reacting to conflicting stimuli and 'guess' which cognitive processes were going on (and which stimuli were receiving more attention). This information was then used for immediate feedback by updating the stimuli presented. This ability to quickly analyze fMRI data using HPC and react using ML and AI systems is helping scientists better understand cognitive processes, with a view to eventually improving the diagnosis and treatment of psychiatric disorders.
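The overall pattern is easy to sketch: fit a decoder offline on labeled scans, then score each new brain volume as it arrives and feed the result back into the experiment. The toy below uses synthetic data and an off-the-shelf classifier; it is only an illustration of the loop, not the Princeton pipeline (which builds on dedicated tooling such as the BrainIAK library).

```python
# Toy sketch of train-offline / decode-in-real-time, with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scans, n_voxels = 400, 5000
X_train = rng.normal(size=(n_scans, n_voxels))   # past fMRI volumes, flattened
y_train = rng.integers(0, 2, size=n_scans)       # cognitive-state labels

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def on_new_volume(volume):
    """Called once per scan repetition as a new brain volume arrives."""
    p_state = clf.predict_proba(volume.reshape(1, -1))[0, 1]
    # The experiment would use this estimate to update the stimuli shown.
    return p_state

print(on_new_volume(rng.normal(size=n_voxels)))
```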
Approach 3: Combining HPC and AI modalities
A more ambitious approach is to embed HPC simulations into AI, where AI uses simulation to augment training data or provide supervised labels for generally unlabeled data. Alternatively, AI could be embedded into HPC simulations, replacing explicit first principles models with learned functions.
In the field of astronomy—typically a heavy user of HPC—numerous new use cases have emerged for accelerating space research by combining HPC and AI modalities. One use case involves using AI to study gravitational lenses, a rare phenomenon that happens when a massive object like a galaxy or black hole comes between a light source and an observer on earth, bending the light and space around it. This means astronomers can see more distant (and much older) parts of the universe that they wouldn’t usually be able to see.
Gravitational lenses are hard to find and traditionally have been identified by manually processing space images. In 2017 researchers from the universities of Bonn, Naples, and Groningen used a Convolutional Neural Network (CNN) to accelerate detection. They started by creating a dataset to train the neural network by feeding six million images of fake gravitational lenses to the AI network, and leaving it to identify patterns. After this training, the AI system was set loose on real images from space, analyzing them to identify gravitational lenses at greater speed than human examination and with incredibly high rates of accuracy.
Another recent use case demonstrated that AI-based models can potentially replace computationally expensive tasks in simulation. In this example, Intel collaborated with High-Energy Physics (HEP) scientists to study what happens during particle collisions. The study used a huge number of CPUs to power its most complex and time-consuming simulation tasks. This included processing information from high-granularity calorimeters—the apparatus that measure particle energy. The team aimed to accelerate their ability to study collision data from these devices in preparation for greater data volumes coming from future collisions.
The team wanted to see if Generative Adversarial Networks (GANs) trained on the calorimeter images could act as a replacement for the computationally expensive Monte Carlo methods currently used to analyze them. GANs were seen as a suitable choice because they excel at generating new variations of the data they are trained on: they can produce realistic samples from complicated probability distributions, support multi-modal output and interpolation, and are robust against missing data.
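As a rough illustration of that setup (not the collaboration's actual model), here is a minimal GAN training step, with a toy 16x16 grid standing in for the calorimeter cells:

```python
# Minimal GAN training step; a toy 16x16 grid stands in for calorimeter cells.
import tensorflow as tf

LATENT = 64
IMG = (16, 16, 1)

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(LATENT,)),
    tf.keras.layers.Dense(16 * 16, activation="sigmoid"),
    tf.keras.layers.Reshape(IMG),
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=IMG),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),   # real-vs-fake logit
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal((tf.shape(real_images)[0], LATENT))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake, training=True)
        # Discriminator: call real images real and generated images fake.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: fool the discriminator into calling fakes real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```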
After training the GAN, the team found strong agreement between the images it generated and those produced by the simulation-based Monte Carlo approach. They reviewed both high-level qualities like energy shower shapes, and detailed calorimeter responses at a single-cell level and found that the agreement was incredibly high. This opens a promising avenue for further investigation for machine-learning-generated distributions in place of costly physics-based simulations.
Getting started with AI applications
When taking your first steps towards converged AI and HPC, it is important to understand different AI capabilities and how they can help solve the particular problems your organization is working on. The next step is to find AI frameworks that support your use case. During framework selection, it is best to look for ones that are already optimized for your current HPC infrastructure. For companies wanting to run AI on existing Intel® technology-based infrastructure we’ve created this overview of resources optimized for popular AI frameworks.
The next step is to run an AI workload pilot on your existing HPC infrastructure. At Intel, we work with customers across academia, government and enterprise to help them scope, plan and implement AI capabilities into their HPC environments. To find out more about how to optimize HPC architectures for AI convergence read this solution brief.
For organizations wanting to optimize their existing infrastructure for specific workloads such as professional visualization or simulation and modeling, Intel® Select Solutions for HPC offer easy and quick-to-deploy infrastructure. Optimized for specific HPC applications, Intel® Select Solutions help to accelerate time to breakthrough, actionable insight, and new product design.
[1] 20x claim based on 21.7x speed up achieved by scaling from single node system to 8-socket cluster. 8-socket cluster node configuration, CPU: Intel® Xeon® 6148 Processor @ 2.4GHz, Cores: 40, Sockets: 2, Hyper-threading: Enabled, Memory/node: 192GB, 2666MHz, NIC: Intel® Omni-Path Host Fabric Interface (Intel® OP HFI), TensorFlow: v1.7.0, Horovod: 0.12.1, OpenMPI: 3.0.0, Cluster: ToR Switch: Intel® Omni-Path Switch. Single node configuration: CPU: Intel® Xeon® Phi Processor 7290F, 192GB DDR4 RAM, 1x 1.6TB Intel® SSD DC S3610 Series SC2BX016T4, 1x 480GB Intel® SSD DC S3520 Series SC2BB480G7, Intel® MKL 2017/DAAL/Intel Caffe.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase.  For more complete information about performance and benchmark results, visit www.intel.com/benchmarks
Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at www.intel.com.
The post Three Approaches to HPC and AI Convergence appeared first on IT Peer Network.
0 notes
arxt1 · 7 years ago
Text
PIC Simulations of Velocity-Space Instabilities in a Decreasing Magnetic Field: Viscosity and Thermal Conduction. (arXiv:1708.03926v2 [astro-ph.HE] UPDATED)
We use particle-in-cell (PIC) simulations of a collisionless, electron-ion plasma with a decreasing background magnetic field, $B$, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If $B$ decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with $p_{||,j} > p_{\perp,j}$ ($p_{||,j}$ and $p_{\perp,j}$ represent the pressure of species $j$ ($=i$ or $e$) parallel and perpendicular to the magnetic field). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities, which grow on scales comparable to the electron and ion Larmor radii, in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our PIC simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of the magnetic field by an imposed shear in the plasma. Our results show that, in the regime $2 \lesssim \beta_j \lesssim 20$ ($\beta_j \equiv 8\pi p_j/B^2$), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose (OIF) and the fast magnetosonic/whistler (FM/W) instabilities. These instabilities grow preferentially on the ion Larmor radius scale, and make the ion and electron pressure anisotropies nearly equal: $\Delta p_e/p_{||,e} \approx \Delta p_i/p_{||,i}$ (where $\Delta p_j=p_{\perp,j} - p_{||,j}$). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons along the mean magnetic field, which we find strongly depends on whether $B$ decreases or increases. Our results can be applied in studies of low collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes.
0 notes
scepticaladventure · 8 years ago
Text
13  Relativity à la Lorentz 24Aug17
Introduction
In my last four blogs I suggested a model of light that seems to me a better fit to the experimental evidence than the inherently self-contradictory wave-particle duality interpretation. That was a lot of fun to write, and I hope it will spark some fresh thinking by persons more talented than myself.

In the adventure through the experiments on light and the foundations of Special Relativity it struck me that the concept of a luminiferous aether might have been killed off too soon, and that Special Relativity rests on some key assumptions or postulates that are not proven beyond doubt in every aspect. In fact there are some troublesome paradoxes that hint at the same thing.
So, with due regard for the many remarkable predictions of Special Relativity that are well proven experimentally, the next part of “Heretical Adventures” is going to develop a modified version of Special Relativity in the way that Lorentz might have done if he had lived longer and had not been overshadowed by Einstein’s approach.  It is written for fun and in the hope of being a stimulus to thought.
The approach starts from where Lorentz left off. It comes up with a slightly different version of Special Relativity and in the process suggests solutions to the Ehrenfest paradox and to the Symmetrical Twin Paradox. The new theory is open to experimental verification. Who knows, it might even be a step forward.
Background
Throughout the 19th century, astronomers and other physicists struggled to decipher the nature of light. A lot was discovered, but problems remained. Of particular difficulty was the result of the Michelson-Morley aether-drift experiment in 1887. The round-trip travel time of light going to and fro parallel to the aether stream was supposed to be longer than the travel time back and forth over the same distance but across the aether stream, but it was not.
In developing his Theory of Special Relativity, Einstein leapfrogged the whole issue. He just assumed that the speed of light in vacuo, when measured in an inertial reference frame, was always the same. Combined with a few other postulates, Einstein logically developed a new approach to the description of physics and came up with some marvellous new insights.
A hundred years later, I for one do not think that Einstein's approach has solved everything. We still can't properly account for Newton's pail of water, the Foucault pendulum or Ehrenfest's rapidly rotating disc, let alone the physics of rotating spiral galaxies. Nor can we work out why the expansion of the Universe is accelerating, if indeed that is the case.
Hence this essay goes back about a hundred years and re-litigates some old issues. What if the aether theorists were right? What does all the modern evidence suggest? What are facts and what are assumptions?
Approach:  I am going to try to stick to the evidence revealed by experiments and will be wary of simply repeating everyone else's assumptions and interpretations. If I followed the same path as everyone else I would just end up in the same place.
I am also going to try to maintain a distinction between linear accelerations, rotations and rotational accelerations because I think they are fundamentally different.
And I am going to use the old fashioned spelling of ‘aether’ because ‘ether’ is a class of organic chemicals that was sometimes used as an anaesthetic.
The Aether
In 1913 Georges Sagnac thought that his closed path interferometer had proved the existence of the aether. I am going to agree with him.
I am going to imagine the aether to be everywhere and that it is the medium in which light travels. I will think of light as being made up of disturbances in this aether, travelling as fast as anything can travel, and by the quickest route possible. But I reject the description of light as being a wriggly little wave. I think of it as being made up of phots, which are basically two-dimensional electromagnetic disturbances. See an earlier blog for more details.
What Sagnac discovered was that you can always detect rotations simply by using a closed path interferometer. You can detect whether you are rotating or not, and if the rate of rotating is accelerating or not. Light will take longer to go around one way than the other.
Mach’s Galactic Reference Frame
I’m not going to agree with Ernst Mach that the distant stars provide a universal reference frame. When Mach was thinking about the origins of inertia no-one had any idea that there were other galaxies other than the Milky Way. So when Mach referred to the “fixed stars” as providing the self evident reference frame for rotational and linear inertial effects, he did not realise that many of the so-called “fixed stars” are in fact other galaxies and that our Milky Way is a rotating spiral galaxy amongst an infinite multitude of others.
What I am going to do is to link the concept of an aether with the concept of a more localised inertial reference frame.  I am going to assume that an inertial reference frame is one which is not rotating or accelerating with respect to the aether.
This would mean that, instead of being undetectable, the aether is extremely detectable in many respects. You do not even need a Sagnac interferometer. You can use a spinning bucket of water. In fact you do not need any device at all. Use your eyes and ears. If you do not feel giddy, and the stars in the night sky are not wheeling around before your eyes, then you are not rotating with respect to the aether.
Now make use of a linear accelerometer. A bucket of water will do if your research grant cannot sponsor anything more sophisticated. If the surface of the water is level and flat then you have detected that the aether and you are not accelerating or rotating with respect to each other.
But maybe you are moving in a smooth constant linear fashion relative to the aether? Or maybe you are at rest in the aether? This is harder to detect. A localized physical experiment is not going to give you the answer. Which is good in a way because it makes day to day physics in real life a lot simpler.
The only way I can think of to detect smooth linear motion relative to the aether is to observe the heavens. If the light from distant stars, or the cosmic microwave background, does not show a higher degree of redshift in one direction or another then you can more or less infer that you are stationary in the aether. Which, by the way, almost nothing in the Universe ever is.
Is the Aether Stationary?
I do not consider it to be self evident or even likely that the aether is everywhere uniform, even and at rest. It is certainly not safe to assume this. In fact I am inclined to think that it would be disturbed by massive objects and may even form giant whirlpools in the vicinity of spiral galaxies. Or maybe spiral galaxies form because there are whirlpools in the aether. But that is a topic for a later blog.
However, I am going to make a working assumption that our own solar system is moving through a three dimensional aether field that is more or less aligned to the gross distribution of galaxies in the Universe, though possibly with some drag effects within the Milky Way.
Time, Distance and the Speed of Light
If nothing else, Einstein managed to highlight that our ordinary everyday notions of measuring time durations, lengths and masses are naïve and do not work well at high speeds. The speed of light affects everything, and even the simple task of coordinating clocks in order to be able to measure the speed of a travelling object is fraught with difficulty.
So what do we know? Let's start with time. We have developed atomic clocks that are extremely precise and stable. For example, the modern definition of a second is 9,192,631,770 periods of the radiation associated with the hyperfine transition of Cesium 133.
If we put an atomic clock into a centrifuge and spin it up we find that it runs more slowly (see the experiment of Hay et al in 1960 as described in the Wikipedia article on the Ives-Stilwell experiment). If we shake a clock back and forth then it also runs more slowly (see the experiment of Chou in the same article and again in Science magazine, 2010).
We know that atomic clocks aboard Global Positioning System satellites have to be adjusted for at least 3 separate relativistic effects in order to stay in step with exactly the same clocks back on Earth. So time is anything but straightforward.
The modern definition of a meter is 1/(299,792,458) of the distance that light travels in one standard second. So the modern definition of a meter is derived from the standard definition for one second and is stable only if one standard second is stable.
The modern value for the speed of light is 299,792,458 meters per second. This is clearly dependent on the atomic clock definition for both for the meaning of a meter and the meaning of a second.
You can see that these definitions are interrelated, circular even.
In 1905 Einstein simply cut through all the confusion and postulated that the speed of light in vacuo was an invariant for all inertial observers. It then followed from this and other postulates that time durations and measurements of lengths must vary according to how they were observed.
However the forerunner of Special Relativity, Hendrik Lorentz, had a subtle but fundamentally different perspective. He thought that moving clocks, when viewed from a non-moving reference point, would appear to run slowly. In fact he came up with the concept of time dilation in the first place. But he also thought that a physical system moving against the aether would contract in the direction of motion by the same factor, now called the Lorentz factor γ = 1/√(1 − v²/c²), where v is the speed of movement and c is the speed of light.
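In modern notation (standard textbook algebra, nothing peculiar to Lorentz's own papers), with Δτ an interval ticked off on the moving clock, Δt the corresponding interval measured at rest in the aether, L₀ a rest length and L its contracted, measured length:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t = \gamma\,\Delta\tau, \qquad
L = \frac{L_{0}}{\gamma}
```

At v = c/2 this gives γ ≈ 1.1547, the value used in the very fast train example later in this essay.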
Observers moving with the experiment would be unable to detect the shrinkage because all their measuring instruments would be affected by it. However, non-moving observers standing nearby would measure the shrinkage if they did the measurements properly (which requires that they measure both ends at the same synchronised time in their own reference frame).
Lorentz’ idea would explain the null result of Michelson-Morley experiment. This experiment was designed to try to detect the effects of the earth’s movement through the aether but it failed to find any effects.
In this blog essay I am going to agree with Lorentz. I will assert that the Michelson-Morley experiment did not prove that the aether drift does not exist; it just showed that the result of moving through the aether is to affect time durations and lengths in such a way that these types of round-trip interference experiments will always give a null result.
Einstein decided that if the aether was not going to show any effects then it was just a metaphysical concept and served no purpose. His concept of light was (I suppose) that it was a particle travelling through nothing at all. Or maybe he thought of spacetime as being the fabric of the Universe.
Inertial Frame Physics
If moving through the aether causes time to slow down and lengths to contract then we have to distinguish between two types of inertial systems – those that are moving in a uniform straight line through the aether and those that are at rest in the aether. Let us describe the latter as being stationary.
If we set up an arbitrary reference frame then it could be stationary, moving, accelerating, rotating, increasingly rotating or any combinations of these. So we have to be very careful in our choice of reference frame or things could get very messy.
Physics can be described from the viewpoint of observers who are moving the same way as their system, or one type of system can be described from the reference frame of another type of system.
The easiest situation is where the reference frame is stationary. This is pretty much the same as the starting point in standard Special Relativity but in this essay the interpretation is a little different. I will assume that there will be an absence of non-inertial effects because the reference frame is either stationary in the local aether or it is moving in a smooth constant linear fashion within the local aether.
Fast moving objects display relativistic effects – time dilation, length contraction, mass increase and so on – not because they are fast moving objects per se, but because they are fast moving objects in the aether.
The second easiest situation is a moving but still inertial reference frame. Same physics as the previous case but again there is a twist. On top of the usual physics we have now introduced a locally undetectable, relativistic Lorentz contraction of length, and also a Lorentz dilation of time.
What happens if we wish to switch the description of physics from the viewpoint of a moving observer across to the viewpoint of a stationary observer? When the observer was based in the moving system their clock was slow and their rulers were short in the dimension parallel to the direction of travel.
The answer is that there is not a problem.
Let us start with the principle of classical relativity. If the moving system sees the stationary system moving past at velocity v, then the stationary system must see the moving system moving past at velocity –v. This remains true because the moving system calculates v with a short ruler and a slow clock, and the stationary system calculates v with a ‘normal’ clock and a ‘normal’ ruler. Since v is distance divided by time, the Lorentzian effects cancel each other and both sets of observers get the same answer for v.
The One Way Speed of Light
Most of the experiments that try to detect aether wind effects, and most experiments measuring the speed of light, involve mirrors and interference effects and can be labeled as two-way path experiments.
One way speed of light measurement experiments are hard to find.
Suppose a group of phots is coming towards you through the aether and you are moving through the aether towards them in the opposite direction. You would expect to meet them sooner than if you remained standing still. But what if your clock slowed down as soon as you started moving? What would the time durations be then? It is not trivial.
And when you meet up and try to measure how fast the phots are travelling in your reference frame you face the further problem that your ruler is contracted. Plus you now have a further issue in getting your measurements done correctly – you have to get your clocks properly synchronised. You need one clock to be able to record the time at which the phots passed Point A and you need another clock to be able to record the time at which the phots passed Point B. The duration of the travel time is the difference between the two times, but this is only meaningful if the two clocks are synchronised. Getting the two clocks to run at the same rate is not the problem. It is getting a meaningful synchronisation that is hard.
Let us use an analogy. Suppose there is a lighthouse and a pilot station set N kilometres apart and the light house keeper and the pilot master try to coordinate their clocks by the use of foghorns. Suppose sound takes 10 seconds to travel between then when the wind is not blowing. They will eventually figure out that if they send a signal and the other one replies instantly then the return signal will take 20 seconds to be heard. They can use that information to synchronise their clocks.
Now suppose there is a constant onshore breeze such that the signal from the lighthouse to the shore takes 9 seconds and the signal from the shore to the lighthouse takes 12 seconds. They both get a return signal in 21 seconds, but if they each set their clock to be 10.5 seconds ahead of when they heard the other’s signal they would not be properly synchronised.
If they take turns to send a signal on the hour, and keep on adjusting their clock so that it records the other’s signal as being heard at 10.5 seconds past the hour, then they will never be able to synchronise properly.
In other words, if they keep on using the Einstein-Poincaré method of clock synchronisation they will fail.
Eventually the lighthouse keeper will have to put his clock into his rowboat and take it to the pilot on shore so that they can both get back on track. Or they will have to realize that the speed of their clock signals is not the same in both directions.
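The arithmetic of the failure is worth making explicit. A couple of lines of code, using nothing but the numbers from the analogy above:

```python
# Lighthouse-and-pilot-station numbers from the analogy above.
def einstein_sync_offset(t_ab, t_ba):
    """Residual clock error if each side assumes the one-way time
    is half the round trip (the Einstein-Poincare convention)."""
    assumed_one_way = (t_ab + t_ba) / 2.0
    return assumed_one_way - t_ab  # how far off the assumed A->B time is

print(einstein_sync_offset(10.0, 10.0))  # 0.0 -> no wind: the method works
print(einstein_sync_offset(9.0, 12.0))   # 1.5 -> onshore breeze: 1.5 s error
```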
Back to our phots story.
We postulate that if some phots are recorded at A at a certain time and the rest are recorded at B at a later time then, even with the slow clocks and the short ruler, the time of travel from A to B will be less than the time of travel for phots approaching from the other direction and travelling from B to A.
In other words, the one way speed of light is affected by moving through the aether.
This would be very difficult to check by experiment but in principle it could be done. I don’t know if such an experiment has ever been done. There are a lot of round trip experiments but the difference would not show up in these.
Using the Earth as a test bed has problems. The speed of the earth through the aether is about 1/1000 the speed of light, which is hardly what you would call relativistic.
Altered Foundations for a Theory of Relativity
In the above conjecture, the key foundations of Special Relativity would have to be split into two parts to become:
The one-way and two-way speed of light in vacuo is always the same in any stationary inertial reference frame, irrespective of how that frame is oriented.
The measured two-way speed of light in vacuo is always the same in any inertial reference frame, irrespective of how that frame is oriented or moving.
(An inertial reference frame is of course one that is not rotating, accelerating or decelerating or in a gravitational field).
Another postulate underlying Special Relativity is going to be dropped altogether – the Principle of Classical Relativity. I do not know why it was ever included in the first place. It might appear to be obvious, but Special Relativity teaches us that a lot of naively obvious things are not true at relativistic speeds. Nor am I aware of any experimental tests of this postulate at relativistic speeds, let alone verifications.
I think that the effect of these changes in the foundation will be a theory which is consistent with Special Relativity in most respects, but which opens up an easier way to understand accelerated and rotating reference frames.
Let us make a start by means of some thought experiments.
A Spaceship in Uniform Linear Acceleration
Consider a long empty rocket undergoing constant linear acceleration towards some distant galaxy. It is moving through the aether at an ever increasing speed. On account of this movement relative to the aether, clocks aboard the spaceship will be slowed by the usual gamma factor and lengths in the direction of the movement will be shortened by the same factor.
Observers aboard the rocket will be unable to detect these Lorentzian effects by most normal experiments. If they had a reliable way to measure the one-way speed of light in their rocket they might be able to tell that the duration for a phot to travel from tail to tip is longer than the duration for a phot to travel from tip to tail. The two-way speed of light however, is the same in any orientation. 
Any attempts to measure the speed of light, or to detect its path, would be significantly affected by the acceleration of the rocket.
If the rocket has parallel mirrors down each side and phots are directed across the rocket at right angles to the axis of acceleration, their path would be a series of arcs, forming a curved zig-zag pattern drifting down towards the rear of the rocket.
As the speed of the rocket increases, phots directed from an emitter at the rear of the rocket towards an absorber at the front would take longer and longer to arrive. Furthermore, they would be increasingly Doppler red-shifted upon detection. For phots directed from the front of the rocket to the back there would be an ever-increasing blue shift.
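The travel-time claim can be sketched with simple aether-frame kinematics: a phot leaving the tail must satisfy c·t = L + v·t + ½a·t² to catch the accelerating tip. This is only a rough Galilean sketch of the picture above (contraction and clock effects are ignored), and the length and acceleration values are assumed purely for illustration:

```python
import math

c = 299_792_458.0  # m/s

def tail_to_tip_time(L, v, a):
    """Time for a phot (speed c in the aether) to catch the accelerating tip.

    Solves c*t = L + v*t + 0.5*a*t**2 for the smaller positive root.
    Galilean aether-frame sketch only: contraction and clock rates ignored.
    """
    disc = (c - v) ** 2 - 2 * a * L
    return ((c - v) - math.sqrt(disc)) / a

L = 1000.0  # rocket length in metres (assumed)
a = 10.0    # acceleration in m/s^2 (assumed)
for v in (0.0, 0.5 * c, 0.9 * c):
    print(f"v = {v / c:.1f} c: tail-to-tip time = {tail_to_tip_time(L, v, a):.3e} s")
```

The catch-up time grows as v grows, in line with the claim above.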
Would there be a difference in the rate of clocks at the front and at the back? I don’t think so. They are moving as fast as each other, they are at rest relative to each other, and they are both experiencing the same degree of acceleration.
What about other physics? I expect that the inertial resistance of matter to forces would be increasing. You can think of this as the inertia of the matter increasing, or you might prefer to think of it as the matter acquiring extra relativistic mass.
The Very Fast Train Revisited
Let us revisit the very fast train thought experiment from an earlier blog essay, but this time using the above modified postulates. What is different?
Remember, there is a straight station platform just under 300 million metres long (about one light-second), with observers every million metres who have well-synchronised clocks equipped with cameras. We will imagine this platform to be stationary in the aether.
Hurtling down a long straight track comes a train that is the same length as the station when it is at rest, but it is travelling at half the speed of light. The train has a driver at one end and a guard at the other, and they also have clocks that are synchronised in their reference frame. If they measure the two-way speed of light it comes out to be c. If they do a Michelson-Morley experiment they do not detect any aether wind effects.
The set of clocks on the train and the set of clocks on the platform have been zeroed so that the driver’s clock shows zero at the precise moment (Event 1) he passed the stationmaster at the start of the platform, and the latter’s clock also reads zero at that Event, as confirmed in everyone’s photos.
The interpretation of what happens is relatively simple. Due to its movement through the aether, the train shrinks by the factor g (= 1.1547 in this example) and its clocks slow down by the same factor.
When the guard passes the stationmaster, the station cameras show the driver has only reached 259.81 million metres up the platform. When the driver reaches the end of the platform (Event 2), a station camera back down the platform (Event 3) shows the guard still 259.81 million metres behind him, at the 40.19 million metre mark. The train is definitely shorter, by the usual factor of g.
The driver reaches the end of the platform in 2 seconds on the station clocks, confirming the train is travelling at c/2. The photograph taken by the station clock at that event also shows the driver’s clock, which reads 1.732 seconds. The driver’s clock is running slow by a factor of g. The platform clock and camera at Event 3 show the guard’s clock running slow by the same factor.
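These figures are easy to verify with a few lines (a sketch using the idealised round value c = 300,000 km/s):

```python
import math

c = 3.0e8  # idealised speed of light, m/s
v = 0.5 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma = {gamma:.4f}")  # 1.1547

platform = 3.0e8  # metres: the platform, and the train at rest
train_contracted = platform / gamma
print(f"contracted train: {train_contracted / 1e6:.2f} million metres")  # 259.81

t_station = platform / v  # driver's end-to-end trip on station clocks
print(f"station clocks: {t_station:.3f} s")          # 2.000
print(f"driver's clock: {t_station / gamma:.3f} s")  # 1.732
```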
So far so good, and all consistent with normal Special Relativity.
But our alternative theory is not as symmetrical as Special Relativity. It does not postulate that classical relativity has to hold true at relativistic speeds. In the case above it is the train that is moving through the aether and not the platform.
What difference does this make? Nothing at all from the point of view of the stationary observers, but what about from the point of view of a reference frame co-moving with the train?
Let us start with time. As the driver (and then the guard) move past the platform they notice that the platform clocks are ahead of theirs. By the time the driver gets to the end, his clock says 1.732 seconds and the platform clock says 2 seconds. The driver can take a photograph to prove it. It tells the same story as the photograph taken from the station.
There are only two possibilities: either the train’s clocks must be running slow or the station clocks must be running fast. The driver and the guard consult their revised textbooks and decide that it must be that their clocks are running slow by the factor g.
What about lengths? The textbooks also tell them that their train will have shrunk by the factor g. The observers on the platform tell them the same thing.
So how fast is the outside world passing by according to their calculations? The best way for them to measure this is to choose a marker in the outside world and derive the time duration for this marker to go from being abreast of the driver to being abreast of the guard. 
So the driver asks the guard to photograph the time on his clock when the back of the train passes the stationmaster standing at the leading edge of the platform (call this Event 1a). At the same simultaneous instant (in the train frame) the driver is further up the platform (call this Event 1b).
The stationary observers photograph Event 1b as occurring at location 259.81 million metres up the platform at their time of 1.732 seconds. They have no doubt that the driver is travelling at half the speed of light. Their photograph reveals the driver’s clock to be reading 1.5 seconds (= 2 seconds/g²) and the driver agrees with this, as his camera shows the same data. But it is Event 1a we are most interested in.
The stationmaster’s clock and camera show that Event 1a happened at time 1.732 seconds in the stationary (platform) frame and at 1.5 seconds on the guard’s clock. So the driver and guard are confident that the stationmaster has gone from the tip of the train to the back of the train in 1.5 of their seconds. They divide what they now know to be the length of their train (259.81 million metres) by this duration and come up with the answer of g.c/2. In other words, they measure and calculate that the outside world is passing them by at a factor of gamma more than the outside world thinks they are passing by.
The platform-based observers see the train going past at c/2, but the train-based observers see the platform going past at g.c/2.
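The asymmetry is just arithmetic once the numbers above are in hand; a minimal sketch:

```python
import math

c = 3.0e8  # idealised speed of light, m/s
v = 0.5 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

train_contracted = 3.0e8 / gamma  # what the crew accept as their length, m
crew_duration = 2.0 / gamma ** 2  # 1.5 s: stationmaster tip-to-tail, train clocks

platform_speed_seen = train_contracted / crew_duration
print(f"platform sees train at {v:.4e} m/s (c/2)")
print(f"train sees platform at {platform_speed_seen:.4e} m/s (g.c/2)")
print(f"ratio = {platform_speed_seen / v:.4f} (= gamma)")
```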
The postulate of Classical Relativity has been broken, basically because the train shrinks but the platform doesn’t. And this is because the train is moving very fast relative to the aether but the platform isn’t.
Or you can interpret the result as follows. The moving system has clocks which are running slow by the gamma factor, so they see the outside world moving past at a rate which is increased by the gamma factor.
Surely this would have shown up in some experimental results by now! But has it? Where has there been an experiment based on a reference frame which is moving at relativistic speeds?
Lorentzian Relativity and the Symmetrical Twin Paradox
The new theory resolves the symmetrical twin paradox (see earlier blog) in a simple way. Both twins are moving through the aether, so both experience time dilation and both age at the same slower rate. They also age less than their colleagues aboard the stationary mother ship, by the same amount.
Rapidly Rotating Disc
Return to the earlier discussion on the Ehrenfest Paradox. This involves a rigid disc spinning so fast that points on its circumference are moving at an appreciable proportion of the speed of light. Suppose the external observers see points on the rim moving at X million metres per second in an anti-clockwise direction. The principle of classical relativity says that observers on the rim should measure the territory immediately outside the rim moving at the same speed in the opposite direction. But will this in fact be true?
I am going to argue that this will not happen. I am going to suggest that the passage of time for the rim-based observers will slow down, and so they will see the outside world going past faster than X million metres per second.
My argument is simple. External observers see one rim circumference go by in T seconds. A rim based observer sees one rim circumference go by in T’ seconds where T’ is the duration on their local clock and T’<T because their local clock is slow.
The external inertial observers are measuring the circumference properly: they can just run a tape measure around it, and the same observer can see both ends of the tape in the same place at the same instant. The rim-based observers cannot dispute that this is in fact the length of their rim.
Dividing the rim circumference by the smaller value T’ gives a bigger answer. The rim-based observers see the outside world going past faster by a factor of g, and this is simply because their clocks are running slow by a factor of g.
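A minimal sketch of the rim argument, with an assumed rim speed and disc size purely for illustration:

```python
import math

c = 3.0e8          # idealised speed of light, m/s
rim_speed = 1.0e8  # X = 100 million m/s (assumed example value)
gamma = 1.0 / math.sqrt(1.0 - (rim_speed / c) ** 2)

circumference = 1.0e9          # metres (assumed disc size)
T = circumference / rim_speed  # one circumference, external clocks
T_local = T / gamma            # the same passage on the slowed rim clock

print(f"external observers: rim passes at      {circumference / T:.3e} m/s")
print(f"rim observers: outside world passes at {circumference / T_local:.3e} m/s")
print(f"ratio = gamma = {gamma:.4f}")
```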
The so-called principle of Classical Relativity is broken, but it makes the situation a lot less paradoxical.
Experimental Verification
The new theory makes the same predictions as Special Relativity for all the well-known time dilation and relativistic mass increase experiments that I can think of. The best place to look for different predictions is in situations where some observers are likely to be travelling very fast relative to the conjectured aether and others are not. And I think a good place to try this out is satellite systems orbiting the Earth. See the next blog.