This glossary of engineering terms is a list of definitions about the major concepts of engineering. Please see the bottom of the page for glossaries of specific fields of engineering.
Absolute electrode potential
In electrochemistry, according to an IUPAC definition,[1] the absolute electrode potential is the electrode potential of a metal measured with respect to a universal reference system (without any additional metal–solution interface).
Absolute pressure
Is zero-referenced against a perfect vacuum, using an absolute scale, so it is equal to gauge pressure plus atmospheric pressure.
Absolute zero
Is the lower limit of the thermodynamic temperature scale, a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value, taken as 0. Absolute zero is the point at which the fundamental particles of nature have minimal vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as −273.15° on the Celsius scale (International System of Units),[2][3] which equals −459.67° on the Fahrenheit scale (United States customary units or Imperial units).[4] The corresponding Kelvin and Rankine temperature scales set their zero points at absolute zero by definition.
Absorbance
Absorbance or decadic absorbance is the common logarithm of the ratio of incident to transmitted radiant power through a material, and spectral absorbance or spectral decadic absorbance is the common logarithm of the ratio of incident to transmitted spectral radiant power through a material.[5]
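Written as a formula, this definition reads [math]\displaystyle{ A = \log_{10}\left(\Phi_\text{i}/\Phi_\text{t}\right) }[/math], where [math]\displaystyle{ \Phi_\text{i} }[/math] is the incident and [math]\displaystyle{ \Phi_\text{t} }[/math] the transmitted radiant power; a sample transmitting 10% of the incident power therefore has an absorbance of 1.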
Activated sludge
A type of wastewater treatment process for treating sewage or industrial wastewaters using aeration and a biological floc composed of bacteria and protozoa.
Active transport
In cellular biology, active transport is the movement of molecules across a membrane from a region of their lower concentration to a region of their higher concentration—against the concentration gradient. Active transport requires cellular energy to achieve this movement. There are two types of active transport: primary active transport that uses ATP, and secondary active transport that uses an electrochemical gradient. An example of active transport in human physiology is the uptake of glucose in the intestines.
Actuator
A device that accepts 2 inputs (control signal, energy source) and outputs kinetic energy in the form of physical movement (linear, rotary, or oscillatory). The control signal input specifies which motion should be taken. The energy source input is typically either an electric current, hydraulic pressure, or pneumatic pressure. An actuator can be the final element of a control loop.
Adenosine triphosphate
A complex organic chemical that provides energy to drive many processes in living cells, e.g. muscle contraction, nerve impulse propagation, chemical synthesis. Found in all forms of life, ATP is often referred to as the "molecular unit of currency" of intracellular energy transfer.[7]
Adhesion
The tendency of dissimilar particles or surfaces to cling to one another (cohesion refers to the tendency of similar or identical particles/surfaces to cling to one another).
Aerodynamics
The study of the motion of air, particularly its interaction with a solid object, such as an airplane wing. It is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields.
Aerospace engineering
Is the primary field of engineering concerned with the development of aircraft and spacecraft.[10] It has two major and overlapping branches: aeronautical engineering and astronautical engineering. Avionics engineering is similar, but deals with the electronics side of aerospace engineering.
Alpha particle
Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium-4 nucleus. They are generally produced in the process of alpha decay, but may also be produced in other ways. Alpha particles are named after the first letter in the Greek alphabet, α.
Amorphous solid
An amorphous (from the Greek a, without, morphé, shape, form) or non-crystalline solid is a solid that lacks the long-range order that is characteristic of a crystal.
Amphoterism
In chemistry, an amphoteric compound is a molecule or ion that can react both as an acid and as a base.[20] Many metals (such as copper, zinc, tin, lead, aluminium, and beryllium) form amphoteric oxides or hydroxides. Amphoterism depends on the oxidation states of the oxide. Al₂O₃ is an example of an amphoteric oxide.
Amplitude
The amplitude of a periodic variable is a measure of its change over a single period (such as time or spatial period). There are various definitions of amplitude, which are all functions of the magnitude of the difference between the variable's extreme values. In older texts the phase is sometimes called the amplitude.[21]
Anaerobic digestion
Is a collection of processes by which microorganisms break down biodegradable material in the absence of oxygen.[22] The process is used for industrial or domestic purposes to manage waste or to produce fuels. Much of the fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
Angular acceleration
Is the rate of change of angular velocity. In three dimensions, it is a pseudovector. In SI units, it is measured in radians per second squared (rad/s²), and is usually denoted by the Greek letter alpha (α).[23]
Angular momentum
In physics, angular momentum (rarely, moment of momentum or rotational momentum) is the rotational equivalent of linear momentum. It is an important quantity in physics because it is a conserved quantity—the total angular momentum of a system remains constant unless acted on by an external torque.
Angular velocity
In physics, the angular velocity of a particle is the rate at which it rotates around a chosen center point: that is, the time rate of change of its angular displacement relative to the origin (in layman's terms: how quickly an object goes around something over a period of time, e.g. how fast the Earth orbits the Sun). It is measured in angle per unit time, radians per second in SI units, and is usually represented by the symbol omega (ω, sometimes Ω). By convention, positive angular velocity indicates counter-clockwise rotation, while negative is clockwise.
Anion
Is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged).[24]
Annihilation
In particle physics, annihilation is the process that occurs when a subatomic particle collides with its respective antiparticle to produce other particles, such as an electron colliding with a positron to produce two photons.[25] The total energy and momentum of the initial pair are conserved in the process and distributed among a set of other particles in the final state. Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all quantum numbers of such an original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as conservation of energy and conservation of momentum are obeyed.[26]
Anode
The electrode at which current enters a device such as an electrochemical cell or vacuum tube.
ANSI
The American National Standards Institute is a private non-profit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States.[27] The organization also coordinates U.S. standards with international standards so that American products can be used worldwide.
Anti-gravity
Anti-gravity (also known as non-gravitational field) is a theory of creating a place or object that is free from the force of gravity. It does not refer to the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift.
Applied engineering
Is the field concerned with the application of management, design, and technical skills for the design and integration of systems, the execution of new product designs, the improvement of manufacturing processes, and the management and direction of physical and/or technical functions of a firm or organization. Applied-engineering degree programs typically include instruction in basic engineering principles, project management, industrial processes, production and operations management, systems integration and control, quality control, and statistics.[28]
Arc length
Determining the length of an irregular arc segment is also called rectification of a curve. Historically, many methods were used for specific curves. The advent of infinitesimal calculus led to a general formula that provides closed-form solutions in some cases.
Archimedes' principle
States that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially submerged, is equal to the weight of the fluid that the body displaces and acts in the upward direction at the center of mass of the displaced fluid.[29] Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse.[30]
Area moment of inertia
The 2nd moment of area, also known as moment of inertia of plane area, area moment of inertia, or second area moment, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an [math]\displaystyle{ I }[/math] for an axis that lies in the plane or with a [math]\displaystyle{ J }[/math] for an axis perpendicular to the plane. In both cases, it is calculated with a multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its unit of dimension when working with the International System of Units is meters to the fourth power, m⁴.
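As a standard worked example, for a rectangular cross-section of width [math]\displaystyle{ b }[/math] and height [math]\displaystyle{ h }[/math], the second moment of area about the horizontal axis through the centroid is [math]\displaystyle{ I_x = \frac{b h^3}{12} }[/math]; this cubic dependence on depth is why doubling a beam's depth increases its bending stiffness eightfold, while doubling its width only doubles it.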
Arithmetic mean
In mathematics and statistics, the arithmetic mean, or simply the mean or average when the context is clear, is the sum of a collection of numbers divided by the number of numbers in the collection.[31]
Arithmetic progression
In mathematics, an arithmetic progression (AP) or arithmetic sequence is a sequence of numbers such that the difference between the consecutive terms is constant. Difference here means the second minus the first. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with common difference of 2.
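In general, if the first term is [math]\displaystyle{ a_1 }[/math] and the common difference is [math]\displaystyle{ d }[/math], the [math]\displaystyle{ n }[/math]-th term is [math]\displaystyle{ a_n = a_1 + (n-1)d }[/math]; for the sequence above, [math]\displaystyle{ a_4 = 5 + (4-1)\cdot 2 = 11 }[/math].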
Aromatic hydrocarbon
An aromatic hydrocarbon or arene[32] (or sometimes aryl hydrocarbon)[33] is a hydrocarbon with sigma bonds and delocalized pi electrons between carbon atoms forming a circle. In contrast, aliphatic hydrocarbons lack this delocalization. The term "aromatic" was assigned before the physical mechanism determining aromaticity was discovered; the term was coined as such simply because many of the compounds have a sweet or pleasant odour. The configuration of six carbon atoms in aromatic compounds is known as a benzene ring, after the simplest possible such hydrocarbon, benzene. Aromatic hydrocarbons can be monocyclic (MAH) or polycyclic (PAH).
Arrhenius equation
The Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that Van 't Hoff's equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula.[34][35][36] Currently, it is best seen as an empirical relationship.[37]:188 It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally-induced processes/reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy.
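As a minimal sketch of the equation [math]\displaystyle{ k = A e^{-E_a/RT} }[/math] in code (the numerical values of the pre-exponential factor and activation energy below are assumed, purely illustrative choices):

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def arrhenius_rate(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)).

    A  : pre-exponential factor (same units as k)
    Ea : activation energy, J/mol
    T  : absolute temperature, K
    """
    return A * math.exp(-Ea / (R * T))

# Illustrative (assumed) values: A = 1e13 s^-1, Ea = 50 kJ/mol.
k_300 = arrhenius_rate(1e13, 50e3, 300.0)
k_310 = arrhenius_rate(1e13, 50e3, 310.0)

# For Ea near 50 kJ/mol, a 10 K rise roughly doubles the rate,
# the familiar rule of thumb for reactions near room temperature.
print(k_310 / k_300)  # ≈ 1.9
```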
Artificial intelligence
(AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[40] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[41]
Atomic orbital
In atomic theory and quantum mechanics, an atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[42] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as defined by the particular mathematical form of the orbital.[43]
Audio frequency
An audio frequency (abbreviation: AF), or audible frequency, is characterized as a periodic vibration whose frequency is audible to the average human. The SI unit of audio frequency is the hertz (Hz). It is the property of sound that most determines pitch.[44]
Austenitization
Austenitization means to heat the iron, iron-based metal, or steel to a temperature at which it changes crystal structure from ferrite to austenite.[45] The more open structure of the austenite is then able to absorb carbon from the iron-carbides in carbon steel. An incomplete initial austenitization can leave undissolved carbides in the matrix.[46] For some irons, iron-based metals, and steels, the presence of carbides may occur during the austenitization step. The term commonly used for this is two-phase austenitization.[47]
Automation
Is the technology by which a process or procedure is performed with minimum human assistance.[48] Automation[49] or automatic control is the use of various control systems for operating equipment such as machinery, processes in factories, boilers and heat treating ovens, switching on telephone networks, steering and stabilization of ships, aircraft and other applications and vehicles with minimal or reduced human intervention. Some processes have been completely automated.
Autonomous vehicle
A vehicle capable of driving from one point to another without input from a human operator.
Base (chemistry)
In chemistry, bases are substances that, in aqueous solution, release hydroxide (OH−) ions, are slippery to the touch, can taste bitter if an alkali,[50] change the color of indicators (e.g., turn red litmus paper blue), react with acids to form salts, promote certain chemical reactions (base catalysis), accept protons from any proton donor, and/or contain completely or partially displaceable OH− ions.
Beer–Lambert law
The Beer–Lambert law, also known as Beer's law, the Lambert–Beer law, or the Beer–Lambert–Bouguer law relates the attenuation of light to the properties of the material through which the light is travelling. The law is commonly applied to chemical analysis measurements and used in understanding attenuation in physical optics, for photons, neutrons or rarefied gases. In mathematical physics, this law arises as a solution of the BGK equation.
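For a uniform absorbing medium the law is commonly written [math]\displaystyle{ A = \varepsilon \ell c }[/math], where [math]\displaystyle{ \varepsilon }[/math] is the molar attenuation coefficient, [math]\displaystyle{ \ell }[/math] the optical path length, and [math]\displaystyle{ c }[/math] the concentration of the attenuating species; absorbance is thus directly proportional to concentration, which is the basis of its use in chemical analysis.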
Belt friction
Is a term describing the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When one end of the belt is being pulled, only part of this force is transmitted to the other end wrapped about a surface. The friction force increases with the amount of wrap about a surface, so the tension in the belt can differ between its two ends. Belt friction can be modeled by the belt friction equation.[51]
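The belt friction equation (often called the capstan equation) takes the form [math]\displaystyle{ T_1 = T_2 e^{\mu \varphi} }[/math], where [math]\displaystyle{ T_1 }[/math] is the tension on the pulling side, [math]\displaystyle{ T_2 }[/math] the tension on the resisting side, [math]\displaystyle{ \mu }[/math] the coefficient of friction, and [math]\displaystyle{ \varphi }[/math] the wrap angle in radians; with [math]\displaystyle{ \mu = 0.3 }[/math] and one full turn ([math]\displaystyle{ \varphi = 2\pi }[/math]) the holding force is amplified by a factor of about 6.6.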
Bending
In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two.[52]
Bending moment
In solid mechanics, a bending moment is the reaction induced in a structural element when an external force or moment is applied to the element, causing the element to bend.[53][54] The most common or simplest structural element subjected to bending moments is the beam.
Benefit–cost analysis
Cost–benefit analysis (CBA), sometimes called benefit–cost analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives (for example in transactions, activities, functional business requirements); it is used to determine options that provide the best approach to achieve benefits while preserving savings.[55] It may be used to compare potential (or completed) courses of action, or to estimate (or evaluate) the value against costs of a single decision, project, or policy.
Bernoulli differential equation
An ordinary differential equation of the form [math]\displaystyle{ y' + P(x)y = Q(x)y^n }[/math] is called a Bernoulli differential equation, where [math]\displaystyle{ n }[/math] is any real number with [math]\displaystyle{ n \ne 0 }[/math] and [math]\displaystyle{ n \ne 1 }[/math].[56] It is named after Jacob Bernoulli, who discussed it in 1695. Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A famous special case of the Bernoulli equation is the logistic differential equation.
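The known exact solutions come from the standard substitution [math]\displaystyle{ v = y^{1-n} }[/math], which transforms the equation into the linear equation [math]\displaystyle{ v' + (1-n)P(x)v = (1-n)Q(x) }[/math], solvable by an integrating factor.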
Bernoulli's equation
An equation for relating several measurements within a fluid flow, such as velocity, pressure, and potential energy.
Bernoulli's principle
In fluid dynamics, Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy.[57](Ch.3)[58](§ 3.5) The principle is named after Daniel Bernoulli who published it in his book Hydrodynamica in 1738.[59] Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler who derived Bernoulli's equation in its usual form in 1752.[60][61] The principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. heat radiation) are small and can be neglected.
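For steady, incompressible flow along a streamline the principle takes the familiar form [math]\displaystyle{ p + \tfrac{1}{2}\rho v^2 + \rho g z = \text{constant} }[/math], where [math]\displaystyle{ p }[/math] is pressure, [math]\displaystyle{ \rho }[/math] density, [math]\displaystyle{ v }[/math] flow speed, and [math]\displaystyle{ z }[/math] elevation; if the speed rises along a level streamline, the pressure must fall.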
Beta particle
Also called beta ray or beta radiation (symbol β), a beta particle is a high-energy, high-speed electron or positron emitted by the radioactive decay of an atomic nucleus during the process of beta decay. There are two forms of beta decay, β− decay and β+ decay, which produce electrons and positrons respectively.[62]
Biomedical engineering
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes (e.g. diagnostic or therapeutic). This field seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy.[63]
Biomimetic
Biomimetics or biomimicry is the imitation of the models, systems, and elements of nature for the purpose of solving complex human problems.[64]
Biot number
The Biot number (Bi) is a dimensionless quantity used in heat transfer calculations. It is named after the eighteenth-century French physicist Jean-Baptiste Biot (1774–1862), and gives a simple index of the ratio of the heat transfer resistances inside of and at the surface of a body. This ratio determines whether or not the temperatures inside a body will vary significantly in space, while the body heats or cools over time, from a thermal gradient applied to its surface.
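It is defined as [math]\displaystyle{ \text{Bi} = hL_c/k }[/math], where [math]\displaystyle{ h }[/math] is the surface heat-transfer coefficient, [math]\displaystyle{ L_c }[/math] a characteristic length of the body, and [math]\displaystyle{ k }[/math] the body's thermal conductivity; when [math]\displaystyle{ \text{Bi} \ll 1 }[/math], internal temperature gradients are negligible and the simple lumped-capacitance model applies.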
Boiling point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid[70][71] and the liquid changes into a vapor.
Boiling-point elevation
Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.
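For dilute solutions the elevation is proportional to the amount of dissolved solute: [math]\displaystyle{ \Delta T_b = K_b \cdot b }[/math], where [math]\displaystyle{ K_b }[/math] is the ebullioscopic constant of the solvent and [math]\displaystyle{ b }[/math] the molality of the solution (multiplied by the van 't Hoff factor for dissociating solutes); for water, [math]\displaystyle{ K_b \approx 0.512 }[/math] K·kg/mol.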
Boyle's law
Boyle's law (sometimes referred to as the Boyle–Mariotte law, or Mariotte's law[81]) is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. A modern statement of Boyle's law is:
The absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it occupies if the temperature and amount of gas remain unchanged within a closed system.[82][83]
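Equivalently, [math]\displaystyle{ pV = k }[/math] (a constant) for a fixed amount of gas at constant temperature, so [math]\displaystyle{ p_1 V_1 = p_2 V_2 }[/math]: halving the volume of a sealed, isothermal gas sample doubles its pressure.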
Bravais lattice
In geometry and crystallography, a Bravais lattice, named after Auguste Bravais (1850),[84] is an infinite array (or a finite array, if the edges are considered) of discrete points generated by a set of discrete translation operations described in three dimensional space by [math]\displaystyle{ \mathbf{R} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3, }[/math]
where [math]\displaystyle{ n_i }[/math] are any integers and [math]\displaystyle{ \mathbf{a}_i }[/math] are known as the primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. This discrete set of vectors must be closed under vector addition and subtraction. For any choice of position vector R, the lattice looks exactly the same.
Break-even point
The break-even point (BEP) in economics, business—and specifically cost accounting—is the point at which total cost and total revenue are equal, i.e. "even". There is no net loss or gain, and one has "broken even", though opportunity costs have been paid and capital has received the risk-adjusted, expected return. In short, all costs that must be paid are paid, and there is neither profit nor loss.[85][86]
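In the simplest single-product model, the break-even quantity equals fixed costs divided by the contribution margin per unit: [math]\displaystyle{ \text{BEP} = \frac{\text{fixed costs}}{\text{unit price} - \text{unit variable cost}} }[/math]. With assumed figures of $10,000 in fixed costs, a $25 selling price, and $15 of variable cost per unit, 1,000 units must be sold to break even.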
Brewster's angle
Brewster's angle (also known as the polarization angle) is an angle of incidence at which light with a particular polarization is perfectly transmitted through a transparent dielectric surface, with no reflection. When unpolarized light is incident at this angle, the light that is reflected from the surface is therefore perfectly polarized. This special angle of incidence is named after the Scottish physicist Sir David Brewster (1781–1868).[87][88]
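The angle satisfies [math]\displaystyle{ \theta_B = \arctan(n_2/n_1) }[/math], where [math]\displaystyle{ n_1 }[/math] and [math]\displaystyle{ n_2 }[/math] are the refractive indices of the initial and transmitting media; for light passing from air ([math]\displaystyle{ n_1 \approx 1 }[/math]) into common glass ([math]\displaystyle{ n_2 \approx 1.5 }[/math]), [math]\displaystyle{ \theta_B \approx 56^\circ }[/math].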
Brittleness
A material is brittle if, when subjected to stress, it breaks without significant plastic deformation. Brittle materials absorb relatively little energy prior to fracture, even those of high strength. Breaking is often accompanied by a snapping sound. Brittle materials include most ceramics and glasses (which do not deform plastically) and some polymers, such as PMMA and polystyrene. Many steels become brittle at low temperatures (see ductile-brittle transition temperature), depending on their composition and processing.
Brønsted–Lowry acid–base theory
Is an acid–base reaction theory which was proposed independently by Johannes Nicolaus Brønsted and Thomas Martin Lowry in 1923.[89][90] The fundamental concept of this theory is that when an acid and a base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid by exchange of a proton (the hydrogen cation, or H+). This theory is a generalization of the Arrhenius theory.
Brownian motion
Brownian motion or pedesis is the random motion of particles suspended in a fluid (a liquid or a gas) resulting from their collision with the fast-moving molecules in the fluid.[91]
Buffer solution
A buffer solution (more precisely, pH buffer or hydrogen ion buffer) is an aqueous solution consisting of a mixture of a weak acid and its conjugate base, or vice versa. Its pH changes very little when a small amount of strong acid or base is added to it. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many systems that use buffering for pH regulation.
Bulk modulus
The bulk modulus ([math]\displaystyle{ K }[/math] or [math]\displaystyle{ B }[/math]) of a substance is a measure of how resistant to compression that substance is. It is defined as the ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume.[92]
Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear, and Young's modulus describes the response to linear stress. For a fluid, only the bulk modulus is meaningful. For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law.
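Formally, [math]\displaystyle{ K = -V \frac{\mathrm{d}P}{\mathrm{d}V} }[/math], so a stiff substance (large [math]\displaystyle{ K }[/math]) needs a large pressure increase to produce a given fractional reduction in volume; for water, [math]\displaystyle{ K \approx 2.2 }[/math] GPa, so raising the pressure by 22 MPa compresses it by only about 1%.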
Capillary action
Capillary action (sometimes capillarity, capillary motion, capillary effect, or wicking) is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. The effect can be seen in the drawing up of liquids between the hairs of a paint-brush, in a thin tube, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied carbon fiber, or in a cell. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.[93]
Carnot cycle
A hypothetical thermodynamic cycle for a heat engine; no thermodynamic cycle can be more efficient than a Carnot cycle operating between the same two temperature limits.
Castigliano's method
Named for Carlo Alberto Castigliano, is a method for determining the displacements of a linear-elastic system based on the partial derivatives of the energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
Casting
Forming of an object by pouring molten metal (or other substances) into a mold.
Cell membrane
The cell membrane (also known as the plasma membrane or cytoplasmic membrane, and historically referred to as the plasmalemma) is a biological membrane, consisting of a lipid bilayer with embedded proteins, that separates the interior of all cells from the outside environment (the extracellular space) and protects the cell from its environment.[94][95]
Cell nucleus
In cell biology, the nucleus (pl. nuclei; from Latin nucleus or nuculeus, meaning kernel or seed) is a membrane-enclosed organelle found in eukaryotic cells. Eukaryotes usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have no nuclei, and a few others including osteoclasts have many.
Cell theory
In biology, cell theory is the historic scientific theory, now universally accepted, that living organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
Center of gravity
The center of mass of an object, its balance point.
Center of pressure
Is the point where the total sum of a pressure field acts on a body, causing a force to act through that point. The total force vector acting at the center of pressure is the value of the integrated vectorial pressure field. The resultant force and center of pressure location produce equivalent force and moment on the body as the original pressure field.
Central limit theorem
In probability theory, the central limit theorem (CLT) establishes that, in some situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
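A minimal simulation sketch of this effect, using only the Python standard library (the die model and the sample sizes are arbitrary illustrative choices):

```python
import random
import statistics

# Averaging n draws from a decidedly non-normal distribution
# (a fair six-sided die) and repeating many times: by the central
# limit theorem the sample means cluster around the true mean (3.5)
# in an approximately normal bell shape.
def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

means = [sample_mean(100) for _ in range(10_000)]
print(statistics.mean(means))   # ≈ 3.5
print(statistics.stdev(means))  # ≈ sigma/sqrt(n) = 1.708/10 ≈ 0.17
```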
Central processing unit
A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s.[96] Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.[97]
Centrosome
In cell biology, the centrosome is an organelle that serves as the main microtubule organizing center (MTOC) of the animal cell, as well as a regulator of cell-cycle progression. The centrosome is thought to have evolved only in the metazoan lineage of eukaryotic cells.[98] Fungi and plants lack centrosomes and therefore use structures other than MTOCs to organize their microtubules.[99][100]
Chain reaction
Is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events.
Charles's law
Charles's law (also known as the law of volumes) is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is:
When the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be in direct proportion.[101]
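Symbolically, [math]\displaystyle{ V/T = k }[/math] (a constant) at fixed pressure, so [math]\displaystyle{ V_1/T_1 = V_2/T_2 }[/math]: warming a gas sample from 300 K to 600 K at constant pressure doubles its volume.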
Chemical bond
Is a lasting attraction between atoms, ions or molecules that enables the formation of chemical compounds. The bond may result from the electrostatic force of attraction between oppositely charged ions as in ionic bonds or through the sharing of electrons as in covalent bonds. The strength of chemical bonds varies considerably; there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force and hydrogen bonding.
Chemical equilibrium
In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system.[102] Usually, this state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as dynamic equilibrium.[103][104]
Chemical kinetics
Chemical kinetics, also known as reaction kinetics, is the study of rates of chemical processes. Chemical kinetics includes investigations of how different experimental conditions can influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can describe the characteristics of a chemical reaction.
Chemical reaction
A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another.[105] Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
Chromate and dichromate
Chromate salts contain the chromate anion, CrO₄²⁻. Dichromate salts contain the dichromate anion, Cr₂O₇²⁻. They are oxoanions of chromium in the 6+ oxidation state. They are moderately strong oxidizing agents. In an aqueous solution, chromate and dichromate ions are interconvertible.
Circular motion
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular path. It can be uniform, with constant angular rate of rotation and constant speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. The equations of motion describe the movement of the center of mass of a body.
Clausius–Clapeyron relation
The Clausius–Clapeyron relation, named after Rudolf Clausius[110] and Benoît Paul Émile Clapeyron,[111] is a way of characterizing a discontinuous phase transition between two phases of matter of a single constituent. On a pressure–temperature (P–T) diagram, the line separating the two phases is known as the coexistence curve. The Clausius–Clapeyron relation gives the slope of the tangents to this curve. Mathematically, [math]\displaystyle{ \frac{\mathrm{d}P}{\mathrm{d}T} = \frac{L}{T\,\Delta v} = \frac{\Delta s}{\Delta v}, }[/math]
where [math]\displaystyle{ \mathrm{d}P/\mathrm{d}T }[/math] is the slope of the tangent to the coexistence curve at any point, [math]\displaystyle{ L }[/math] is the specific latent heat, [math]\displaystyle{ T }[/math] is the temperature, [math]\displaystyle{ \Delta v }[/math] is the specific volume change of the phase transition, and [math]\displaystyle{ \Delta s }[/math] is the specific entropy change of the phase transition.
Clausius theorem
The Clausius theorem (1855) states that a system exchanging heat with external reservoirs and undergoing a cyclic process (one that ultimately returns the system to its original state) satisfies [math]\displaystyle{ \oint \frac{\delta Q}{T_{surr}} \leq 0, }[/math]
where [math]\displaystyle{ \delta Q }[/math] is the infinitesimal amount of heat absorbed by the system from the reservoir and [math]\displaystyle{ T_{surr} }[/math] is the temperature of the external reservoir (surroundings) at a particular instant in time; this is the generalized "inequality of Clausius".[115] In the special case of a reversible process, the equality holds.[112] The reversible case is used to introduce the entropy state function, because in a cyclic process the variation of a state function is zero. In words, the Clausius statement states that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir.[113] Equivalently, heat spontaneously flows from a hot body to a cooler one, not the other way around.[114]
Coefficient of performance
The coefficient of performance or COP (sometimes CP or CoP) of a heat pump, refrigerator or air conditioning system is a ratio of useful heating or cooling provided to work required.[116][117] Higher COPs equate to lower operating costs. The COP usually exceeds 1, especially in heat pumps, because, instead of just converting work to heat (which, if 100% efficient, would be a COP_hp of 1), it pumps additional heat from a heat source to where the heat is required. For complete systems, COP calculations should include energy consumption of all power consuming auxiliaries. COP is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions.[118]
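For heating, [math]\displaystyle{ \text{COP}_\text{heating} = Q_\text{h}/W }[/math], the heat delivered per unit of work input; a heat pump that delivers 3 kW of heat while drawing 1 kW of electrical power has a COP of 3.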
Coherence
In physics, two wave sources are perfectly coherent if they have a constant phase difference and the same frequency, and the same waveform. Coherence is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. It contains several distinct concepts, which are limiting cases that never quite occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets.
Cohesion
Or cohesive attraction or cohesive force is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of orbiting electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a microscopic structure such as a water drop. In other words, cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Cold forming
Or cold working, any metal-working procedure (such as hammering, rolling, shearing, bending, milling, etc.) carried out below the metal's recrystallization temperature.
Combustion
Or burning,[119] is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke.
Compensation
Is planning for side effects or other unintended issues in a design. In simpler terms, it is a "counter-procedure" plan for an expected side effect, performed to produce more efficient and useful results. The design of an invention can itself also be to compensate for some other existing issue or exception.
Compressive strength
Compressive strength or compression strength is the capacity of a material or structure to withstand loads tending to reduce size, as opposed to tensile strength, which withstands loads tending to elongate. In other words, compressive strength resists compression (being pushed together), whereas tensile strength resists tension (being pulled apart). In the study of strength of materials, tensile strength, compressive strength, and shear strength can be analyzed independently.
Computer
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks.
Computer-aided design
Computer-aided design (CAD) is the use of computer systems (or workstations) to aid in the creation, modification, analysis, or optimization of a design.[120] CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.[121] CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (for Computer Aided Design and Drafting) is also used.[122]
Computer-aided engineering
Computer-aided engineering (CAE) is the broad usage of computer software to aid in engineering analysis tasks. It includes finite element analysis (FEA), computational fluid dynamics (CFD), multibody dynamics (MBD), durability and optimization.
Computer-aided manufacturing
Computer-aided manufacturing (CAM) is the use of software to control machine tools and related machinery in the manufacturing of workpieces.[123][124][125][126][127] This is not the only definition for CAM, but it is the most common;[123] CAM may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage.[128][129]
Computer engineering
Computer engineering is a discipline that integrates several fields of computer science and electronics engineering required to develop computer hardware and software.[130]
Computer science
Is the theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems.[131]
Concave lens
Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus.
Condensed matter physics
Is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong.
Confidence interval
In statistics, a confidence interval or compatibility interval (CI) is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter. The interval has an associated confidence level that, loosely speaking, quantifies the level of confidence that the parameter lies in the interval. More strictly speaking, the confidence level represents the frequency (i.e. the proportion) of possible confidence intervals that contain the true value of the unknown population parameter. In other words, if confidence intervals are constructed using a given confidence level from an infinite number of independent sample statistics, the proportion of those intervals that contain the true value of the parameter will be equal to the confidence level.[132][133][134]
Conjugate acid
A conjugate acid, within the Brønsted–Lowry acid–base theory, is a species formed by the reception of a proton (H+) by a base—in other words, it is a base with a hydrogen ion added to it. On the other hand, a conjugate base is what is left over after an acid has donated a proton during a chemical reaction. Hence, a conjugate base is a species formed by the removal of a proton from an acid.[135] Because some acids are capable of releasing multiple protons, the conjugate base of an acid may itself be acidic.
Conjugate base
A conjugate acid, within the Brønsted–Lowry acid–base theory, is a species formed by the reception of a proton (H+) by a base—in other words, it is a base with a hydrogen ion added to it. On the other hand, a conjugate base is what is left over after an acid has donated a proton during a chemical reaction. Hence, a conjugate base is a species formed by the removal of a proton from an acid.[135] Because some acids are capable of releasing multiple protons, the conjugate base of an acid may itself be acidic.
Conservation of energy
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time.[136] This law means that energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another.
Conservation of mass
The law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time: since the system's mass cannot change, quantity can neither be added nor removed. Hence, the quantity of mass is conserved over time.
Continuity equation
A continuity equation in physics is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Continuum mechanics
Is a branch of mechanics that deals with the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Control engineering
Control engineering or control systems engineering is an engineering discipline that applies automatic control theory to design systems with desired behaviors in control environments.[137] The discipline of controls overlaps and is usually taught along with electrical engineering at many institutions around the world.[137]
Convex lens
Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus.
Corrosion
Is a natural process, which converts a refined metal to a more chemically-stable form, such as its oxide, hydroxide, or sulfide. It is the gradual destruction of materials (usually metals) by chemical and/or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and stopping corrosion.
Coulomb
The coulomb is equivalent to the charge of approximately 6.242×10¹⁸ (1.036×10⁻⁵ mol) protons, and −1 C is equivalent to the charge of approximately 6.242×10¹⁸ electrons.
A new definition, in terms of the elementary charge, took effect on 20 May 2019.[139] The new definition fixes the elementary charge (the charge of the proton) at exactly 1.602176634×10⁻¹⁹ coulombs, which implicitly defines the coulomb as 1/(1.602176634×10⁻¹⁹) elementary charges.
Coulomb's law
Coulomb's law, or Coulomb's inverse-square law, is a law of physics for quantifying Coulomb's force, or electrostatic force. Electrostatic force is the amount of force with which stationary, electrically charged particles either repel, or attract each other. This force and the law for quantifying it, represent one of the most basic forms of force used in the physical sciences, and were an essential basis to the study and development of the theory and field of classical electromagnetism. The law was first published in 1785 by French physicist Charles-Augustin de Coulomb.[140]
In its scalar form, the law is [math]\displaystyle{ F = k_e \frac{q_1 q_2}{r^2}, }[/math]
where [math]\displaystyle{ k_e }[/math] is Coulomb's constant ([math]\displaystyle{ k_e \approx 9 \times 10^9\ \mathrm{N\,m^2\,C^{-2}} }[/math]), [math]\displaystyle{ q_1 }[/math] and [math]\displaystyle{ q_2 }[/math] are the signed magnitudes of the charges, and the scalar [math]\displaystyle{ r }[/math] is the distance between the charges. The force of the interaction between the charges is attractive if the charges have opposite signs (i.e., F is negative) and repulsive if like-signed (i.e., F is positive).
Being an inverse-square law, the law is analogous to Isaac Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa.
Crystallization
Crystallization is the (natural or artificial) process by which a solid forms, where the atoms or molecules are highly organized into a structure known as a crystal. Some of the ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and in the case of liquid crystals, time of fluid evaporation.
Curvilinear motion
Describes the motion of a moving particle that conforms to a known or fixed curve. The study of such motion involves the use of two co-ordinate systems, the first being planar motion and the second being cylindrical motion.
Dalton's law
In chemistry and physics, Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total pressure exerted is equal to the sum of the partial pressures of the individual gases.[148]
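In symbols, for a mixture of [math]\displaystyle{ n }[/math] non-reacting gases, [math]\displaystyle{ p_\text{total} = p_1 + p_2 + \cdots + p_n }[/math]; in dry air at 101.3 kPa, for example, nitrogen contributes a partial pressure of roughly 79 kPa and oxygen roughly 21 kPa.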
Damped vibration
Any vibration with a force acting against it to lessen the vibration over time.
Deformation (engineering)
In materials science, deformation refers to any changes in the shape or size of an object due to
an applied force (the deformation energy in this case is transferred through work) or
a change in temperature (the deformation energy in this case is transferred through heat).
The first case can be a result of tensile (pulling) forces, compressive (pushing) forces, shear, bending or torsion (twisting).
In the second case, the most significant factor, which is determined by the temperature, is the mobility of the structural defects such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline solids. The movement or displacement of such mobile defects is thermally activated, and thus limited by the rate of atomic diffusion.[150][151]
Deformation (mechanics)
Deformation in continuum mechanics is the transformation of a body from a reference configuration to a current configuration.[152] A configuration is a set containing the positions of all particles of the body.
A deformation may be caused by external loads,[153] body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
De Moivre–Laplace theorem
In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. In particular, the theorem shows that the probability mass function of the random number of "successes" observed in a series of [math]\displaystyle{ n }[/math] independent Bernoulli trials, each having probability [math]\displaystyle{ p }[/math] of success (a binomial distribution with [math]\displaystyle{ n }[/math] trials), converges to the probability density function of the normal distribution with mean [math]\displaystyle{ np }[/math] and standard deviation [math]\displaystyle{ \sqrt{np(1-p)} }[/math], as [math]\displaystyle{ n }[/math] grows large, assuming [math]\displaystyle{ p }[/math] is not [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ 1 }[/math].
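For example, for [math]\displaystyle{ n = 100 }[/math] fair coin flips ([math]\displaystyle{ p = 1/2 }[/math]), the number of heads is approximately normally distributed with mean [math]\displaystyle{ np = 50 }[/math] and standard deviation [math]\displaystyle{ \sqrt{np(1-p)} = 5 }[/math].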
Density
The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:[154]
[math]\displaystyle{ \rho = \frac{m}{V} }[/math]
where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its weight per unit volume,[155] although this is scientifically inaccurate – this quantity is more specifically called specific weight.
Derivative
The derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.
Diamagnetism
Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than μ0, the permeability of vacuum. In most materials diamagnetism is a weak effect which can only be detected by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it repels a magnetic field entirely from its interior.
Differential pulley
A differential pulley, also called Weston differential pulley, or colloquially chain fall, is used to manually lift very heavy objects like car engines. It is operated by pulling upon the slack section of a continuous chain that wraps around pulleys. The relative size of two connected pulleys determines the maximum weight that can be lifted by hand. The load will remain in place (and not lower under the force of gravity) until the chain is pulled.[156]
Diffusion
Is the net movement of molecules or atoms from a region of higher concentration (or high chemical potential) to a region of lower concentration (or low chemical potential).
Dimensional analysis
Is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric charge) and units of measure (such as miles vs. kilometers, or pounds vs. kilograms) and tracking these dimensions as calculations or comparisons are performed. The conversion of units from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra.[157][158][159]
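A minimal factor-label sketch in code (the CONVERSIONS table and convert helper are hypothetical names used for illustration; the two factors shown are exact definitions):

```python
# Factor-label (unit-factor) idea: each conversion multiplies a
# quantity by a factor equal to 1, e.g. (1.609344 km / 1 mile),
# so only the units change, not the physical quantity.
CONVERSIONS = {
    ("mile", "km"): 1.609344,     # exact by definition
    ("lb", "kg"): 0.45359237,     # exact by definition
}

def convert(value, src, dst):
    return value * CONVERSIONS[(src, dst)]

print(convert(26.2, "mile", "km"))  # a marathon: ≈ 42.16 km
print(convert(150.0, "lb", "kg"))   # ≈ 68.04 kg
```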
Direct integration
Direct integration is a structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam.
For a beam with an applied weight [math]\displaystyle{ w(x) }[/math], taking downward to be positive, the internal shear force is given by taking the negative integral of the weight: [math]\displaystyle{ V(x) = -\int w(x)\,\mathrm{d}x. }[/math]
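Continuing the method (a standard worked case): the internal moment is the integral of the shear, [math]\displaystyle{ M(x) = \int V(x)\,\mathrm{d}x }[/math]. For a uniformly distributed load [math]\displaystyle{ w(x) = w_0 }[/math] on a simply supported span of length [math]\displaystyle{ L }[/math], integrating twice and applying the boundary conditions [math]\displaystyle{ M(0) = M(L) = 0 }[/math] gives [math]\displaystyle{ V(x) = w_0(L/2 - x) }[/math] and [math]\displaystyle{ M(x) = \tfrac{w_0 x}{2}(L - x) }[/math], with the familiar maximum moment [math]\displaystyle{ w_0 L^2/8 }[/math] at midspan.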
Dispersion (optics)
In optics, dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency.[160]
Media having this common property may be termed dispersive media. Sometimes the term chromatic dispersion is used for specificity.
Although the term is used in the field of optics to describe light and other electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as acoustic dispersion in the case of sound and seismic waves, in gravity waves (ocean waves), and for telecommunication signals along transmission lines (such as coaxial cable) or optical fiber.
Displacement (fluid)
In fluid mechanics, displacement occurs when an object is immersed in a fluid, pushing it out of the way and taking its place. The volume of the fluid displaced can then be measured, and from this, the volume of the immersed object can be deduced (the volume of the immersed object will be exactly equal to the volume of the displaced fluid).
Displacement (vector)
Is a vector whose length is the shortest distance from the initial to the final position of a point P.[161] It quantifies both the distance and direction of an imaginary motion along a straight line from the initial position to the final position of the point. A displacement may be identified with the translation that maps the initial position to the final position.
Doppler effect
The Doppler effect (or the Doppler shift) is the change in frequency or wavelength of a wave in relation to an observer who is moving relative to the wave source.[162] It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
Dose–response relationship
The dose–response relationship, or exposure–response relationship, describes the magnitude of the response of an organism, as a function of exposure (or doses) to a stimulus or stressor (usually a chemical) after a certain exposure time.[163] Dose–response relationships can be described by dose–response curves. A stimulus response function or stimulus response curve is defined more broadly as the response from any type of stimulus, not limited to chemicals.
Drag
In fluid dynamics, drag (sometimes called air resistance, a type of friction, or fluid resistance, another type of friction or fluid friction) is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid.[164] This can exist between two fluid layers (or surfaces) or a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, drag forces depend on velocity.[165][166]
Drag force is proportional to the velocity for a laminar flow and the squared velocity for a turbulent flow. Even though the ultimate cause of a drag is viscous friction, the turbulent drag is independent of viscosity.[167] Drag forces always decrease fluid velocity relative to the solid object in the fluid's path.
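In the turbulent regime the drag force is usually modeled by the drag equation [math]\displaystyle{ F_D = \tfrac{1}{2} \rho v^2 C_d A }[/math], where [math]\displaystyle{ \rho }[/math] is the fluid density, [math]\displaystyle{ v }[/math] the relative speed, [math]\displaystyle{ C_d }[/math] the drag coefficient, and [math]\displaystyle{ A }[/math] the reference area; the [math]\displaystyle{ v^2 }[/math] dependence helps explain why aerodynamic drag dominates fuel consumption at highway speeds.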
Ductility
Is a measure of a material's ability to undergo significant plastic deformation before rupture, which may be expressed as percent elongation or percent area reduction from a tensile test.
Effusion
In physics and chemistry, effusion is the process in which a gas escapes from a container through a hole of diameter considerably smaller than the mean free path of the molecules.[168]
Elasticity
In physics, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed.
Electric charge
Is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charge: positive and negative (commonly carried by protons and electrons respectively). Like charges repel and unlike charges attract. An object with an absence of net charge is referred to as neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric circuit
Is an electrical network consisting of a closed loop, giving a return path for the current.
Electric current
Is a flow of electric charge.[169]:2 In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons such as in an ionised gas (plasma).[170]
The SI unit for measuring an electric current is the ampere, which is the flow of electric charge across a surface at the rate of one coulomb per second. Electric current is measured using a device called an ammeter.[171]
Surrounds an electric charge, and exerts force on other charges in the field, attracting or repelling them.[172][173] Electric field is sometimes abbreviated as E-field.
Is an electrical machine that converts electrical energy into mechanical energy. Most electric motors operate through the interaction between the motor's magnetic field and winding currents to generate force in the form of rotation. Electric motors can be powered by direct current (DC) sources, such as from batteries, motor vehicles or rectifiers, or by alternating current (AC) sources, such as a power grid, inverters or electrical generators. An electric generator is mechanically identical to an electric motor, but operates in the reverse direction, accepting mechanical energy (such as from flowing water) and converting this mechanical energy into electrical energy.
(Also called the electric field potential, potential drop or the electrostatic potential) is the amount of work needed to move a unit of positive charge from a reference point to a specific point inside the field without producing an acceleration. Typically, the reference point is the Earth or a point at infinity, although any point beyond the influence of the electric field charge can be used.
Electrical potential energy
Electric potential energy, or electrostatic potential energy, is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may have electric potential energy by virtue of two key elements: its own electric charge and its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields.
Is a technical discipline concerned with the study, design and application of equipment, devices and systems which use electricity, electronics, and electromagnetism. It emerged as an identified activity in the latter half of the 19th century after commercialization of the electric telegraph, the telephone, and electrical power generation, distribution and use.
Is an object or type of material that allows the flow of charge (electrical current) in one or more directions. Materials made of metal are common electrical conductors. Electrical current is generated by the flow of negatively charged electrons, positively charged holes, and positive or negative ions in some cases.
Is the measure of the opposition that a circuit presents to a current when a voltage is applied. The term complex impedance may be used interchangeably.
Electrical insulator
Is a material whose internal electric charges do not flow freely; very little electric current will flow through it under the influence of an electric field. This contrasts with other materials, semiconductors and conductors, which conduct electric current more easily. The property that distinguishes an insulator is its resistivity; insulators have higher resistivity than semiconductors or conductors.
The electrical resistance of an object is a measure of its opposition to the flow of electric current. The inverse quantity is electrical conductance, and is the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with the notion of mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S).
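For many conductors the resistance is, to a good approximation, constant over a wide range of currents; this proportionality is Ohm's law,
[math]\displaystyle{ R = \frac{V}{I}, \qquad G = \frac{1}{R}, }[/math]
where [math]\displaystyle{ V }[/math] is the voltage across the object and [math]\displaystyle{ I }[/math] the current through it.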
Is a type of magnet in which the magnetic field is produced by an electric current. Electromagnets usually consist of wire wound into a coil. A current through the wire creates a magnetic field which is concentrated in the hole, denoting the centre of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
Electromechanics[178][179][180][181] combines processes and procedures drawn from electrical engineering and mechanical engineering. Electromechanics focuses on the interaction of electrical and mechanical systems as a whole and how the two systems interact with each other. This process is especially prominent in systems such as those of DC or AC rotating electrical machines which can be designed and operated to generate power from a mechanical process (generator) or used to power a mechanical effect (motor). Electrical engineering in this context also encompasses electronics engineering.
Is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge.[182] Electrons belong to the first generation of the lepton particle family,[183] and are generally thought to be elementary particles because they have no known components or substructure.[184] The electron has a mass that is approximately 1/1836 that of the proton.[185] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[183] Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
In chemistry, an electron pair, or Lewis pair, consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.[187]
Symbolized as χ, is the measurement of the tendency of an atom to attract a shared pair of electrons (or electron density).[188] An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity, the more an atom or a substituent group attracts electrons.
Is any process with an increase in the enthalpy H (or internal energy U) of the system.[190] In such a process, a closed system usually absorbs thermal energy from its surroundings, which is heat transfer into the system. It may be a chemical process, such as dissolving ammonium nitrate in water, or a physical process, such as the melting of ice cubes.
Is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[193] The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise".[194]
Engineering economics, previously known as engineering economy, is a subset of economics concerned with the use and "...application of economic principles"[195] in the analysis of engineering decisions.[196] As a discipline, it is focused on the branch of economics known as microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision making process, its context and environment.[195] It is pragmatic by nature, integrating economic theory with engineering practice.[195] But, it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources.[195] As a discipline though, it is closely related to others such as statistics, mathematics and cost accounting.[195] It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics.[195]
Engineering ethics
Is the field of applied ethics and system of moral principles that apply to the practice of engineering. The field examines and sets the obligations of engineers to society, to their clients, and to the profession. As a scholarly discipline, it is closely related to subjects such as the philosophy of science, the philosophy of engineering, and the ethics of technology.
Or engineering science, refers to the study of the combined disciplines of physics, mathematics, chemistry, biology, and engineering, particularly computer, nuclear, electrical, electronic, aerospace, materials or mechanical engineering. By focusing on the scientific method as a rigorous basis, it seeks ways to apply, design, and develop new solutions in engineering.[199][200][201][202]
Enzymes are proteins that act as biological catalysts (biocatalysts). Catalysts accelerate chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life.[203]:8.1
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.[204] For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators. The point estimators yield single-valued results, although this includes the possibility of single vector-valued results and results that can be expressed as a single function. This is in contrast to an interval estimator, where the result would be a range of plausible values (or vectors or functions).
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory)[205] is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case for small deflections of a beam that are subjected to lateral loads only. It is thus a special case of Timoshenko beam theory. It was first enunciated circa 1750,[206] but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution. Additional mathematical models have been developed such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering.
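For a beam of constant flexural rigidity [math]\displaystyle{ EI }[/math] under a distributed lateral load [math]\displaystyle{ q(x) }[/math], the theory reduces to a single fourth-order equation for the deflection [math]\displaystyle{ w(x) }[/math]:
[math]\displaystyle{ EI \frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x). }[/math]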
In thermodynamics, the term exothermic process (exo- : "outside") describes a process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in a form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). Its etymology stems from the Greek prefix έξω (exō, which means "outwards") and the Greek word θερμικός (thermikόs, which means "thermal").[207]
(FoS), also known as (and used interchangeably with) safety factor (SF), expresses how much stronger a system is than it needs to be for an intended load.
The farad (symbol: F) is the SI derived unit of electrical capacitance, the ability of a body to store an electrical charge.[208] It is named after the English physicist Michael Faraday.
The Faraday constant, denoted F, is the magnitude of electric charge per mole of electrons; it equals the product of the Avogadro constant NA and the elementary charge e. Both of these values have exact defined values, and hence F has a known exact value. NA is the Avogadro constant (the ratio of the number of particles, N, which is unitless, to the amount of substance, n, in units of moles), and e is the elementary charge or the magnitude of the charge of an electron. This relation holds because the amount of charge of a mole of electrons is equal to the amount of charge in one electron multiplied by the number of electrons in a mole.
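With the exactly defined SI values of NA and e, the constant can be evaluated directly:
[math]\displaystyle{ F = N_A e = (6.02214076\times10^{23}\ \mathrm{mol}^{-1}) \times (1.602176634\times10^{-19}\ \mathrm{C}) \approx 96\,485\ \mathrm{C/mol}. }[/math]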
In optics, Fermat's principle, or the principle of least time, named after French mathematician Pierre de Fermat, is the principle that the path taken between two points by a ray of light is the path that can be traversed in the least time. This principle is sometimes taken as the definition of a ray of light.[213] However, this version of the principle is not general; a more modern statement of the principle is that rays of light traverse the path of stationary optical length with respect to variations of the path.[214] In other words, a ray of light prefers the path such that there are other paths, arbitrarily nearby on either side, along which the ray would take almost exactly the same time to traverse.
Describe diffusion and were derived by Adolf Fick in 1855. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law which in turn is identical to the diffusion equation.
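In one dimension, Fick's first and second laws take the form
[math]\displaystyle{ J = -D \frac{\partial \varphi}{\partial x}, \qquad \frac{\partial \varphi}{\partial t} = D \frac{\partial^2 \varphi}{\partial x^2}, }[/math]
where [math]\displaystyle{ J }[/math] is the diffusion flux and [math]\displaystyle{ \varphi }[/math] the concentration.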
(FEM), is the most widely used method for solving problems of engineering and mathematical models. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential.
The FEM is a particular numerical method for solving partial differential equations in two or three space variables (i.e., some boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts that are called finite elements. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points.
The finite element method formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain.[215]
The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function.
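As an illustration of the assembly-and-solve workflow described above, the following is a minimal sketch (a hypothetical example, not a production solver) of linear finite elements applied to the 1-D Poisson problem −u″ = f on (0, 1) with u(0) = u(1) = 0:

```python
# Minimal 1-D FEM sketch: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# linear (hat) elements on a uniform mesh. Hypothetical example code.
import numpy as np

def fem_poisson_1d(f, n_elements=8):
    n_nodes = n_elements + 1
    h = 1.0 / n_elements                      # element length
    nodes = np.linspace(0.0, 1.0, n_nodes)

    K = np.zeros((n_nodes, n_nodes))          # global stiffness matrix
    F = np.zeros(n_nodes)                     # global load vector

    # Element stiffness matrix for linear basis functions.
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    for e in range(n_elements):
        idx = [e, e + 1]                      # global node numbers of element e
        K[np.ix_(idx, idx)] += ke
        # Midpoint quadrature for the element load vector.
        xm = 0.5 * (nodes[e] + nodes[e + 1])
        F[idx] += f(xm) * h / 2.0

    # Apply u(0) = u(1) = 0 by restricting the system to interior nodes.
    interior = slice(1, -1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[interior, interior], F[interior])
    return nodes, u

# Example: f(x) = pi^2 sin(pi x) has the exact solution u(x) = sin(pi x).
nodes, u = fem_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * nodes))))  # small discretization error
```

Refining the mesh (increasing `n_elements`) shrinks the error, illustrating how the finite elements approximate the unknown function over the domain.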
FIRST
For Inspiration and Recognition of Science and Technology – is an organization founded by inventor Dean Kamen in 1989 to develop ways to inspire students in engineering and technology fields.
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids—liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion).
Fluid statics, or hydrostatics, is the branch of fluid mechanics that studies "fluids at rest and the pressure in a fluid or exerted by a fluid on an immersed body".[219]
Flywheel
Is a mechanical device specifically designed to use the conservation of angular momentum so as to efficiently store rotational energy; a form of kinetic energy proportional to the product of its moment of inertia and the square of its rotational speed. In particular, if we assume the flywheel's moment of inertia to be constant (i.e., a flywheel with fixed mass and second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed.
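For a flywheel with moment of inertia [math]\displaystyle{ I }[/math] rotating at angular velocity [math]\displaystyle{ \omega }[/math], the stored rotational energy is
[math]\displaystyle{ E = \tfrac{1}{2} I \omega^2, }[/math]
so doubling the rotational speed quadruples the stored energy.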
In geometrical optics, a focus, also called an image point, is the point where light rays originating from a point on the object converge.[220] Although the focus is conceptually a point, physically the focus has a spatial extent, called the blur circle. This non-ideal focusing may be caused by aberrations of the imaging optics. In the absence of significant aberrations, the smallest possible blur circle is the Airy disc, which is caused by diffraction from the optical system's aperture. Aberrations tend to worsen as the aperture diameter increases, while the Airy circle is smallest for large apertures.
Foot-pound
The foot-pound force (symbol: ft⋅lbf[221][222] or ft⋅lb[223]) is a unit of work or energy in the engineering and gravitational systems in United States customary and imperial units of measure. It is the energy transferred upon applying a force of one pound-force (lbf) through a linear displacement of one foot. The corresponding SI unit is the joule.
In materials science, fracture toughness is the critical stress intensity factor of a sharp crack where propagation of the crack suddenly becomes rapid and unlimited. A component's thickness affects the constraint conditions at the tip of a crack with thin components having plane stress conditions and thick components having plane strain conditions. Plane strain conditions give the lowest fracture toughness value which is a material property. The critical value of stress intensity factor in mode I loading measured under plane strain conditions is known as the plane strain fracture toughness, denoted [math]\displaystyle{ K_\text{Ic} }[/math].[224] When a test fails to meet the thickness and other test requirements that are in place to ensure plane strain conditions, the fracture toughness value produced is given the designation [math]\displaystyle{ K_\text{c} }[/math]. Fracture toughness is a quantitative way of expressing a material's resistance to crack propagation and standard values for a given material are generally available.
In physics and optics, the Fraunhofer lines are a set of spectral absorption lines named after the German physicist Joseph von Fraunhofer (1787–1826). The lines were originally observed as dark features (absorption lines) in the optical spectrum of the Sun.
In Newtonian physics, free fall is any motion of a body where gravity is the only force acting upon it. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no force acting on it.
The melting point (or, rarely, liquefaction point) of a substance is the temperature at which it changes state from solid to liquid. At the melting point the solid and liquid phase exist in equilibrium. The melting point of a substance depends on pressure and is usually specified at a standard pressure such as 1 atmosphere or 100 kPa.
When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of substances to supercool, the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice", that is, the melting point.[225]
Is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other.[226] There are several types of friction:
Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as asperities.
Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other.[227][228]
Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces.[229][230][231]
Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body.
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.[228]
In mathematics, a function[note 2] is a binary relation between two sets that associates every element of the first set to exactly one element of the second set. Typical examples are functions from integers to integers, or from the real numbers to real numbers.
The fundamental frequency, often referred to simply as the fundamental, is defined as the lowest frequency of a periodic waveform. In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. In terms of a superposition of sinusoids, the fundamental frequency is the lowest-frequency sinusoid in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as f₀, indicating the lowest frequency counting from zero.[232][233][234] In other contexts, it is more common to abbreviate it as f₁, the first harmonic.[235][236][237][238][239] (The second harmonic is then f₂ = 2⋅f₁, etc. In this context, the zeroth harmonic would be 0 Hz.)
In physics, the fundamental interactions, also known as fundamental forces, are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist: the gravitational and electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the strong and weak interactions, which produce forces at minuscule, subatomic distances and govern nuclear interactions. Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative.[240][241][242]
The Fundamentals of Engineering (FE) exam, also referred to as the Engineer in Training (EIT) exam, and formerly in some states as the Engineering Intern (EI) exam, is the first of two examinations that engineers must pass in order to be licensed as a Professional Engineer in the United States. The second examination is the Principles and Practice of Engineering Examination. The FE exam is open to anyone with a degree in engineering or a related field, or currently enrolled in the last year of an ABET-accredited engineering degree program. Some state licensure boards permit students to take it prior to their final year, and numerous states allow those who have never attended an approved program to take the exam if they have a state-determined number of years of work experience in engineering. Some states allow those with ABET-accredited "Engineering Technology" or "ETAC" degrees to take the examination. The state of Michigan has no admission pre-requisites for the FE.[243] The exam is administered by the National Council of Examiners for Engineering and Surveying (NCEES).
A galvanic cell or voltaic cell, named after Luigi Galvani or Alessandro Volta, respectively, is an electrochemical cell that derives electrical energy from spontaneous redox reactions taking place within the cell. It generally consists of two different metals immersed in electrolytes, or of individual half-cells with different metals and their ions in solution connected by a salt bridge or separated by a porous membrane. Volta was the inventor of the voltaic pile, the first electrical battery. In common usage, the word "battery" has come to include a single galvanic cell, but a battery properly consists of multiple cells.[244]
Gamma rays
A gamma ray, or gamma radiation (symbol γ or [math]\displaystyle{ \gamma }[/math]), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves and so imparts the highest photon energy.
Is one of the four fundamental states of matter (the others being solid, liquid, and plasma). A pure gas may be made up of individual atoms (e.g. a noble gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles.
Is an instrument used for detecting and measuring ionizing radiation. Also known as a Geiger–Müller counter, it is widely used in applications such as radiation dosimetry, radiological protection, experimental physics, and the nuclear industry.
In mathematics, the geometric mean is a mean or average, which indicates the central tendency or typical value of a set of numbers by using the product of their values (as opposed to the arithmetic mean which uses their sum). The geometric mean is defined as the nth root of the product of n numbers, i.e., for a set of numbers x1, x2, ..., xn, the geometric mean is defined as
[math]\displaystyle{ \left(\prod_{i=1}^{n} x_i\right)^{1/n} = \sqrt[n]{x_1 x_2 \cdots x_n}. }[/math]
Is, with arithmetic, one of the oldest branches of mathematics. It is concerned with properties of space that are related with distance, shape, size, and relative position of figures.[245] A mathematician who works in the field of geometry is called a geometer.
Also known as geotechnics, is the branch of civil engineering concerned with the engineering behavior of earth materials. It uses the principles and methods of soil mechanics and rock mechanics for the solution of engineering problems and the design of engineering works. It also relies on knowledge of geology, hydrology, geophysics, and other related sciences.
Graham's law of effusion (also called Graham's law of diffusion) was formulated by Scottish physical chemist Thomas Graham in 1848.[252] Graham found experimentally that the rate of effusion of a gas is inversely proportional to the square root of the mass of its particles.[252] This formula can be written as:
[math]\displaystyle{ \frac{\text{Rate}_1}{\text{Rate}_2} = \sqrt{\frac{M_2}{M_1}}, }[/math]
where Rate1 and Rate2 are the rates of effusion of the two gases and M1 and M2 are their molar masses.
Gravitational energy or gravitational potential energy is the potential energy a massive object has in relation to another massive object due to gravity. It is the potential energy associated with the gravitational field, which is released (converted into kinetic energy) when the objects fall towards each other. Gravitational potential energy increases when two objects are brought further apart.
For two pairwise interacting point particles, the gravitational potential energy [math]\displaystyle{ U }[/math] is given by
[math]\displaystyle{ U = -\frac{GMm}{R}, }[/math]
where [math]\displaystyle{ M }[/math] and [math]\displaystyle{ m }[/math] are the masses of the two particles, [math]\displaystyle{ R }[/math] is the distance between them, and [math]\displaystyle{ G }[/math] is the gravitational constant.[253]
Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object reduces to
[math]\displaystyle{ U = mgh }[/math]
where [math]\displaystyle{ m }[/math] is the object's mass, [math]\displaystyle{ g = GM_E/R_E^2 }[/math] is the gravity of Earth, and [math]\displaystyle{ h }[/math] is the height of the object's center of mass above a chosen reference level.[253]
In physics, a gravitational field is a model used to explain the influences that a massive body extends into the space around itself, producing a force on another massive body.[254] Thus, a gravitational field is used to explain gravitational phenomena, and is measured in newtons per kilogram (N/kg). In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity have usually been taught in terms of a field model, rather than a point attraction.
In a field model, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force".[citation needed] In such a model one states that matter moves in certain ways in response to the curvature of spacetime,[255] and that there is either no gravitational force,[256] or that gravity is a fictitious force.[257]
Gravity is distinguished from other forces by its obedience to the equivalence principle.
In classical mechanics, the gravitational potential at a location is equal to the work (energy transferred) per unit mass that would be needed to move an object to that location from a fixed reference location. It is analogous to the electric potential with mass playing the role of charge. The reference location, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance.
In mathematics, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.[258]
Or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light[265]—are brought toward (or gravitate toward) one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing and forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get further away.
The time required for one-half of a quantity of an unstable isotope to decay into other elements; also, the time at which half of a substance has diffused out of, or otherwise reacted within, a system.
Is a measure of the resistance to localized plastic deformation induced by either mechanical indentation or abrasion. Some materials (e.g. metals) are harder than others (e.g. plastics, wood). Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, there are different measurements of hardness: scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity.
In mathematics, the harmonic mean (sometimes called the subcontrary mean) is one of several kinds of average, and in particular, one of the Pythagorean means. Typically, it is appropriate for situations when the average of rates is desired.
The harmonic mean can be expressed as the reciprocal of the arithmetic mean of the reciprocals of the given set of observations. As a simple example, the harmonic mean of 1, 4, and 4 is
[math]\displaystyle{ \left(\frac{1^{-1} + 4^{-1} + 4^{-1}}{3}\right)^{-1} = \frac{3}{\frac{1}{1} + \frac{1}{4} + \frac{1}{4}} = \frac{3}{1.5} = 2. }[/math]
Is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature and volume (isothermal, isochoric). The negative of the change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which volume is held constant. If the volume were not held constant, part of this work would be performed as boundary work. This makes the Helmholtz energy useful for systems held at constant volume. Furthermore, at constant temperature, the Helmholtz free energy is minimized at equilibrium.
Can be used to estimate the pH of a buffer solution. The numerical value of the acid dissociation constant, Ka, of the acid is known or assumed. The pH is calculated for given values of the concentrations of the acid, HA and of a salt, MA, of its conjugate base, A−; for example, the solution may contain acetic acid and sodium acetate.
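In its usual form the Henderson–Hasselbalch equation reads
[math]\displaystyle{ \mathrm{pH} = \mathrm{p}K_\text{a} + \log_{10}\!\left(\frac{[\mathrm{A}^-]}{[\mathrm{HA}]}\right). }[/math]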
In physical chemistry, Henry's law is a gas law that states that the amount of dissolved gas in a liquid is proportional to its partial pressure above the liquid. The proportionality factor is called Henry's law constant. It was formulated by the English chemist William Henry, who studied the topic in the early 19th century.
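In one common convention (Henry's law constants are tabulated in several equivalent forms), the law is written
[math]\displaystyle{ p = k_\text{H}\, c, }[/math]
where [math]\displaystyle{ p }[/math] is the partial pressure of the gas above the liquid, [math]\displaystyle{ c }[/math] its concentration in the liquid, and [math]\displaystyle{ k_\text{H} }[/math] the Henry's law constant.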
Is a device used for lifting or lowering a load by means of a drum or lift-wheel around which rope or chain wraps. It may be manually operated, electrically or pneumatically driven and may use chain, fiber or wire rope as its lifting medium. The most familiar form is an elevator, the car of which is raised and lowered by a hoist mechanism. Most hoists couple to their loads using a lifting hook. Today, there are a few governing bodies for the North American overhead hoist industry which include the Hoist Manufacturers Institute (HMI), ASME, and the Occupational Safety and Health Administration (OSHA). HMI is a product council of the Material Handling Industry of America consisting of hoist manufacturers promoting safe use of their products.
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) is a method of analysis applied to problems of wave propagation both in the far-field limit and in near-field diffraction and also reflection. It states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere.[273] The sum of these spherical wavelets forms the wavefront.
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is commonly stated as
[math]\displaystyle{ PV = nRT, }[/math]
where [math]\displaystyle{ P }[/math], [math]\displaystyle{ V }[/math] and [math]\displaystyle{ T }[/math] are the pressure, volume and temperature; [math]\displaystyle{ n }[/math] is the amount of substance; and [math]\displaystyle{ R }[/math] is the ideal gas constant. It is the same for all gases.
It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856[276] and Rudolf Clausius in 1857.[277]
In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain range of validity.[278][279] In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, [math]\displaystyle{ (a+b)^2 = a^2 + 2ab + b^2 }[/math] and [math]\displaystyle{ \cos^2\theta + \sin^2\theta =1 }[/math] are identities.[280] Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign.[281]
Also known as a ramp, is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load.[282][283][284] The inclined plane is one of the six classical simple machines defined by Renaissance scientists. Inclined planes are widely used to move heavy loads over vertical obstacles; examples vary from a ramp used to load goods into a truck, to a person walking up a pedestrian ramp, to an automobile or railroad train climbing a grade.[284]
In electromagnetism and electronics, inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The flow of electric current creates a magnetic field around the conductor. The field strength depends on the magnitude of the current, and follows any changes in current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF. Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality factor that depends on the geometry of circuit conductors and the magnetic permeability of nearby materials.[286] An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire.
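This definition makes the induced back EMF across an inductor of inductance [math]\displaystyle{ L }[/math] proportional to the rate of change of current:
[math]\displaystyle{ v(t) = L \frac{\mathrm{d}i}{\mathrm{d}t}. }[/math]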
An inductor, also called a coil, choke, or reactor, is a passive two-terminal electrical component that stores energy in a magnetic field when electric current flows through it.[287] An inductor typically consists of an insulated wire wound into a coil.
Is an engineering profession that is concerned with the optimization of complex processes, systems, or organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineers use specialized knowledge and skills in the mathematical, physical and social sciences, together with the principles and methods of engineering analysis and design, to specify, predict, and evaluate the results obtained from systems and processes.[288] From these results, they are able to create new systems, processes or situations for the useful coordination of labour, materials and machines and also improve the quality and productivity of systems, physical or social.[289]
Is the resistance of any physical object to any change in its velocity. This includes changes to the object's speed, or direction of motion.
An aspect of this property is the tendency of objects to keep moving in a straight line at a constant speed, when no forces act upon them.
Infrasound, sometimes referred to as low-frequency sound, describes sound waves with a frequency below the lower limit of audibility (generally 20 Hz). Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the sound pressure must be sufficiently high. The ear is the primary organ for sensing low sound, but at higher intensities it is possible to feel infrasound vibrations in various parts of the body.
In mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data. The process of finding integrals is called integration. Along with differentiation, integration is a fundamental operation of calculus, and serves as a tool to solve problems in mathematics and physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others.
In mathematics, an integral transform maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.
The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[Note 1] Twenty-two derived units have been provided with special names and symbols.[Note 2] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[Note 3] which are adopted to facilitate measurement of diverse quantities. The SI system also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
In statistics, interval estimation is the use of sample data to calculate an interval of possible values of an unknown population parameter; this is in contrast to point estimation, which gives a single value. Jerzy Neyman (1937) identified interval estimation ("estimation by interval") as distinct from point estimation ("estimation by unique estimate"). In doing so, he recognized that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.
Is a particle, atom or molecule with a net electrical charge. By convention, the charge of the electron is negative and the equal-magnitude charge of the proton is positive. The net charge of an ion is non-zero because its total number of electrons is unequal to its total number of protons.
Is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions, or between two atoms with sharply different electronegativities,[291] and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding along with covalent bonding and metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called anions). Atoms that lose electrons make positively charged ions (called cations). This transfer of electrons is known as electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complex nature, e.g. molecular ions like NH₄⁺ or SO₄²⁻. In simpler words, an ionic bond results from the transfer of electrons from a metal to a non-metal in order to obtain a full valence shell for both atoms.
Ionization or ionisation is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected.
Isotopes are variants of a particular chemical element which differ in neutron number, and consequently in nucleon number. All isotopes of a given element have the same number of protons but different numbers of neutrons in each atom.[292]
J
J/psi meson
The J/ψ (J/psi) meson (/ˌdʒeɪ ˈsaɪ ˈmiːzɒn/) or psion[293] is a subatomic particle, a flavor-neutral meson consisting of a charm quark and a charm antiquark. Mesons formed by a bound state of a charm quark and a charm anti-quark are generally known as "charmonium". The J/ψ is the most common form of charmonium, due to its spin of 1 and its low rest mass. The J/ψ has a rest mass of 3.0969 GeV/c², just above that of the ηc (2.9836 GeV/c²), and a mean lifetime of 7.2×10⁻²¹ s. This lifetime was about a thousand times longer than expected.[294]
The joule (symbol: J) is the SI derived unit of energy in the International System of Units.[295] It is equal to the energy transferred to (or work done on) an object when a force of one newton acts on that object in the direction of the force's motion through a distance of one metre (1 newton metre or N⋅m). It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule (1818–1889).[296][297][298]
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The Kalman filter has numerous applications in technology.
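As a concrete illustration, the following is a minimal sketch (hypothetical example values, not drawn from any particular source) of a one-dimensional Kalman filter estimating a constant from noisy measurements:

```python
# Minimal 1-D Kalman filter: estimate a constant value observed through
# noisy measurements, using a random-walk state model. Hypothetical example.
import random

def kalman_1d(measurements, r=0.5, q=1e-4):
    """r: measurement noise variance, q: process noise variance."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain weighs measurement vs. prediction
        x += k * (z - x)     # update estimate with the measurement residual
        p *= (1.0 - k)       # update (shrink) the estimate variance
        estimates.append(x)
    return estimates

true_value = 5.0
zs = [true_value + random.gauss(0.0, 0.7) for _ in range(50)]
print(kalman_1d(zs)[-1])     # converges toward 5.0
```

Each iteration blends the prediction with the new measurement in proportion to their uncertainties, which is why the filtered estimate tends to be more accurate than any single measurement alone.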
(Or the Heat Engine Statement), of the second law of thermodynamics states that it is impossible to devise a cyclically operating heat engine, the effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work.[299]
This implies that it is impossible to build a heat engine that has 100% thermal efficiency.[300]
Is a branch of classical mechanics that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that caused the motion.[301][302][303]
In fluid dynamics, laminar flow is characterized by fluid particles following smooth paths in layers, with each layer moving smoothly past the adjacent layers with little or no mixing.[304] At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another like playing cards. There are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids.[305] In laminar flow, the motion of the particles of the fluid is very orderly with particles close to a solid surface moving in straight lines parallel to that surface.[306]
Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection.
In mathematics, the Laplace transform, named after its inventor Pierre-Simon Laplace (/ləˈplɑːs/), is an integral transform that converts a function of a real variable [math]\displaystyle{ t }[/math] (often time) to a function of a complex variable[math]\displaystyle{ s }[/math] (complex frequency). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms differential equations into algebraic equations and convolution into multiplication.[307][308][309]
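For suitable functions [math]\displaystyle{ f }[/math], the (unilateral) Laplace transform is defined by the integral
[math]\displaystyle{ F(s) = \mathcal{L}\{f\}(s) = \int_0^{\infty} f(t)\, e^{-st}\, \mathrm{d}t. }[/math]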
Le Chatelier's principle, also called Chatelier's principle, is a principle of chemistry used to predict the effect of a change in conditions on chemical equilibria. The principle is named after French chemist Henry Louis Le Chatelier, and sometimes also credited to Karl Ferdinand Braun, who discovered it independently. It can be stated as:
When any system at equilibrium for a long period of time is subjected to a change in concentration, temperature, volume, or pressure, (1) the system changes to a new equilibrium, and (2) this change partly counteracts the applied change.
It is common to treat the principle as a more general observation of systems,[310] such as
When a settled system is disturbed, it will adjust to diminish the change that has been made to it
Lenz's law, named after the physicist Emil Lenz who formulated it in 1834,[311] states that the direction of the electric current which is induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field.
It is a qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law explains the direction of many effects in electromagnetism, such as the direction of voltage induced in an inductor or wire loop by a changing current, or the drag force of eddy currents exerted on moving objects in a magnetic field.
Lenz's law may be seen as analogous to Newton's third law in classical mechanics.[312]
In particle physics, a lepton is an elementary particle of half-integer spin (spin 1⁄2) that does not undergo strong interactions.[313] Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron.
Is a simple machine consisting of a beam or rigid rod pivoted at a fixed hinge, or fulcrum. A lever is a rigid body capable of rotating on a point on itself. On the basis of the locations of fulcrum, load and effort, the lever is divided into three types. Also, leverage is mechanical advantage gained in a system. It is one of the six simple machines identified by Renaissance scientists. A lever amplifies an input force to provide a greater output force, which is said to provide leverage. The ratio of the output force to the input force is the mechanical advantage of the lever. As such, the lever is a mechanical advantage device, trading off force against movement.
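For an ideal (rigid, frictionless) lever, balancing moments about the fulcrum gives the law of the lever,
[math]\displaystyle{ F_a a = F_b b \quad\Rightarrow\quad MA = \frac{F_b}{F_a} = \frac{a}{b}, }[/math]
where [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are the distances from the fulcrum to the points where the input force [math]\displaystyle{ F_a }[/math] and the output (load) force [math]\displaystyle{ F_b }[/math] act.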
In mathematics, more specifically calculus, L'Hôpital's rule or L'Hospital's rule (French: [lopital], English: /ˌloʊpiːˈtɑːl/, loh-pee-TAHL) provides a technique to evaluate limits of indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to L'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are differentiable on an open interval I except possibly at a point c contained in I, if [math]\displaystyle{ \lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0 \text{ or } \pm\infty, }[/math] and [math]\displaystyle{ g'(x)\ne 0 }[/math] for all x in I with x ≠ c, and [math]\displaystyle{ \lim_{x\to c}\frac{f'(x)}{g'(x)} }[/math] exists, then
[math]\displaystyle{ \lim_{x\to c}\frac{f(x)}{g(x)} = \lim_{x\to c}\frac{f'(x)}{g'(x)}. }[/math]
Is an actuator that creates motion in a straight line, in contrast to the circular motion of a conventional electric motor. Linear actuators are used in machine tools and industrial machinery, in computer peripherals such as disk drives and printers, in valves and dampers, and in many other places where linear motion is required. Hydraulic or pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.
Is a mathematical model of how solid objects deform and become internally stressed due to prescribed loading conditions. It is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics.
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a (nearly) constant volume independent of pressure. As such, it is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container, and maintains a fairly constant density. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. Water is, by far, the most common liquid on Earth.
In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = 10³, the "logarithm base 10" of 1000 is 3, or log10(1000) = 3. The logarithm of x to base b is denoted as logb(x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter such as in big O notation.
More generally, exponentiation allows any positive real number as base to be raised to any real power, always producing a positive result, so logb(x) for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is:
[math]\displaystyle{ \log_b(x) = y \ }[/math] exactly if [math]\displaystyle{ \ b^y = x\ }[/math] and [math]\displaystyle{ \ x \gt 0 }[/math] and [math]\displaystyle{ \ b \gt 0 }[/math] and [math]\displaystyle{ \ b \ne 1 }[/math].
For example, log2 64 = 6, as 2⁶ = 64.
The logarithm base 10 (that is b = 10) is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e (that is b ≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base 2 (that is b = 2) and is frequently used in computer science. Logarithms are examples of concave functions.[317]
(Also known as log mean temperature difference, LMTD) is used to determine the temperature driving force for heat transfer in flow systems, most notably in heat exchangers. The LMTD is a logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties.
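For a heat exchanger whose two ends have stream-to-stream temperature differences [math]\displaystyle{ \Delta T_A }[/math] and [math]\displaystyle{ \Delta T_B }[/math], the log mean temperature difference is
[math]\displaystyle{ \mathrm{LMTD} = \frac{\Delta T_A - \Delta T_B}{\ln\left(\Delta T_A / \Delta T_B\right)}. }[/math]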
A lumped-capacitance model, also called lumped system analysis,[319] reduces a thermal system to a number of discrete “lumps” and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance, although it also includes thermal analogs of electrical resistance.
The lumped-element model (also called lumped-parameter model, or lumped-component model) simplifies the description of the behaviour of spatially distributed physical systems into a topology consisting of discrete entities that approximate the behaviour of the distributed system under certain assumptions. It is useful in electrical systems (including electronics), mechanical multibody systems, heat transfer, acoustics, etc. Mathematically speaking, the simplification reduces the state space of the system to a finite dimension, and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters.
(The double integration method) is a technique used in structural analysis to determine the deflection of Euler-Bernoulli beams. Use of Macaulay’s technique is very convenient for cases of discontinuous and/or discrete loading. Typically partial uniformly distributed loads (u.d.l.) and uniformly varying loads (u.v.l.) over the span and a number of concentrated loads are conveniently handled using this technique.
The ratio of the speed of an object to the speed of sound.
Machine
A machine (or mechanical device) is a mechanical structure that uses power to apply forces and control movement to perform an intended action. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems.
Machine element
Machine elements are basic mechanical parts and features used as the building blocks of most machines.[321] Most are standardized to common sizes, but custom parts are also common for specialized applications.[322] They include control components such as buttons, switches, indicators, sensors, actuators and computer controllers.[320] While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
(ML), is the study of computer algorithms that improve automatically through experience and by the use of data.[323] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[324] Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.[325]
In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. If zero is the point where the derivatives are considered, a Taylor series is also called a Maclaurin series, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.
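A truncated Maclaurin series already gives a practical numerical approximation. The sketch below sums the first terms of the series for exp, whose n-th Maclaurin coefficient is 1/n!:

```python
import math

def exp_maclaurin(x: float, terms: int = 10) -> float:
    """Partial sum of the Maclaurin series exp(x) = sum of x**n / n! for n >= 0."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_maclaurin(1.0))  # ≈ 2.71828..., close to e
print(math.exp(1.0))       # reference value from the math library
```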
A magnetic field is a vector field that describes the magnetic influence on moving electric charges, electric currents,[326]:ch1[327] and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field.[328]:ch13[329] A permanent magnet's magnetic field pulls on ferromagnetic materials such as iron, and attracts or repels other magnets. In addition, a magnetic field that varies with location will exert a force on a range of non-magnetic materials by affecting the motion of their outer atomic electrons. Magnetic fields surround magnetized materials, and are created by electric currents such as those used in electromagnets, and by electric fields varying in time. Since both strength and direction of a magnetic field may vary with location, they are described as a map assigning a vector to each point of space or, more precisely—because of the way the magnetic field transforms under mirror reflection—as a field of pseudovectors.
In electromagnetics, the term "magnetic field" is used for two distinct but closely related vector fields denoted by the symbols B and H. In the International System of Units, H, magnetic field strength, is measured in the SI base units of ampere per meter (A/m).[330] B, magnetic flux density, is measured in tesla (in SI base units: kilogram per second squared per ampere),[331] which is equivalent to newton per meter per ampere. H and B differ in how they account for magnetization. In a vacuum, the two fields are related through the vacuum permeability, [math]\displaystyle{ \mathbf{B}/\mu_0 = \mathbf{H} }[/math]; but in a magnetized material, the terms differ by the material's magnetization at each point.
Is a class of physical attributes that are mediated by magnetic fields. Electric currents and the magnetic moments of elementary particles give rise to a magnetic field, which acts on other currents and magnetic moments. Magnetism is one aspect of the combined phenomenon of electromagnetism. The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt and nickel and their alloys. The rare-earth metals neodymium and samarium are less common examples. The prefix ferro- refers to iron, because permanent magnetism was first observed in lodestone, a form of natural iron ore called magnetite, Fe₃O₄.
Is a branch of professional engineering that shares many common concepts and ideas with other fields of engineering such as mechanical, chemical, electrical, and industrial engineering.
Manufacturing engineering requires the ability to plan the practices of manufacturing; to research and to develop tools, processes, machines and equipment; and to integrate the facilities and systems for producing quality products with the optimum expenditure of capital.[332]
The manufacturing or production engineer's primary focus is to turn raw material into an updated or new product in the most effective, efficient and economical way possible.
A mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. By accounting for material entering and leaving a system, mass flows can be identified which might have been unknown, or difficult to measure without this technique. The exact conservation law used in the analysis of the system depends on the context of the problem, but all revolve around mass conservation, i.e., that matter cannot disappear or be created spontaneously.[333]:59–62
The density (more precisely, the volumetric mass density; also known as specific mass) of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:[334]
[math]\displaystyle{ \rho = \frac{m}{V} }[/math]
where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as weight per unit volume,[335] although this is scientifically inaccurate – this quantity is more specifically called specific weight.
Mass moment of inertia
The moment of inertia, otherwise known as the mass moment of inertia, angular mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is a quantity that determines the torque needed for a desired angular acceleration about a rotational axis, akin to how mass determines the force needed for a desired acceleration. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rate of rotation.
The mass number (symbol A, from the German word Atomgewicht [atomic weight]),[336] also called atomic mass number or nucleon number, is the total number of protons and neutrons (together known as nucleons) in an atomic nucleus. It is approximately equal to the atomic (also known as isotopic) mass of the atom expressed in atomic mass units. Since protons and neutrons are both baryons, the mass number A is identical with the baryon number B of the nucleus (and also of the whole atom or ion). The mass number is different for each different isotope of a chemical element. Hence, the difference between the mass number and the atomic number Z gives the number of neutrons (N) in a given nucleus: N = A − Z.[337]
The mass number is written either after the element name or as a superscript to the left of an element's symbol. For example, the most common isotope of carbon is carbon-12, or [math]\displaystyle{ ^{12}\mathrm{C} }[/math], which has 6 protons and 6 neutrons. The full isotope symbol would also have the atomic number (Z) as a subscript to the left of the element symbol directly below the mass number: [math]\displaystyle{ ^{12}_{6}\mathrm{C} }[/math].[338]
(MS), is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are typically presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures.
Is an interdisciplinary field of materials science and solid mechanics which attempts to predict the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions (such as temperature, state of stress, loading rate) most materials can fail in a brittle or ductile manner or both. However, for most practical situations, a material may be classified as either brittle or ductile.
In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate "failed" states from "unfailed" states. A precise physical definition of a "failed" state is not easily quantified and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of the same form are used to predict both brittle fracture and ductile yielding.
Material properties
A materials property is an intensive property of some material, i.e., a physical property that does not depend on the amount of the material. These quantitative properties may be used as a metric by which the benefits of one material versus another can be compared, thereby aiding in materials selection.
The interdisciplinary field of materials science, also commonly termed materials science and engineering, covers the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy.[339][340] Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.[341] Optimization problems of sorts arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.[342]
In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains.
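As a toy instance of this "simplest case", the sketch below minimizes a one-dimensional quadratic by plain gradient descent; the function, starting point, and step size are all assumptions chosen for illustration:

```python
def f(x: float) -> float:
    return (x - 3.0) ** 2 + 1.0  # known minimum: f(3) = 1

def df(x: float) -> float:
    return 2.0 * (x - 3.0)       # derivative of f

x, step = 0.0, 0.1               # assumed starting point and step size
for _ in range(200):             # fixed number of gradient-descent iterations
    x -= step * df(x)

print(x, f(x))                   # ≈ 3.0 and ≈ 1.0
```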
Refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories".[343]
In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, [math]\displaystyle{ \begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix} }[/math] is a matrix with two rows and three columns; one often says a "two-by-three matrix", a "2×3-matrix", or a matrix of dimension 2×3.
Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents composition of linear maps.
Not all matrices are related to linear algebra. This is in particular the case, in graph theory, of incidence matrices and adjacency matrices.[353]
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume.[354] All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, "matter" generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light.[354]:21[355] Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.[356]
Are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields.[note 3] The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon.
An important consequence of Maxwell's equations is that they demonstrate how fluctuating electric and magnetic fields propagate at a constant speed (c) in a vacuum. Known as electromagnetic radiation, these waves may occur at various wavelengths to produce a spectrum of light from radio waves to gamma rays.
There are several kinds of mean in mathematics, especially in statistics:
For a data set, the arithmetic mean, also known as average or arithmetic average, is a central value of a finite set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted by [math]\displaystyle{ \bar{x} }[/math][note 4]. If the data set were based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is the sample mean (denoted [math]\displaystyle{ \bar{x} }[/math]) to distinguish it from the mean, or expected value, of the underlying distribution, the population mean (denoted [math]\displaystyle{ \mu }[/math] or [math]\displaystyle{ \mu_x }[/math][note 5]).[328][357]
In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of a random variable characterized by that distribution.[358] In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving [math]\displaystyle{ \mu = \sum x p(x) }[/math].[359][360] An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions.
For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual—divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.[361]
Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis.
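A minimal sketch of the sample mean, computed both from its definition and with Python's statistics module (the data values are arbitrary):

```python
import statistics

sample = [2.0, 4.0, 4.0, 5.0, 10.0]  # arbitrary example observations

mean = sum(sample) / len(sample)     # sum of the values / number of values
print(mean)                          # 5.0
print(statistics.mean(sample))       # same result via the standard library
```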
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.[362] It may also be called a center or location of the distribution. Colloquially, measures of central tendency are often called averages. The term central tendency dates from the late 1920s.[363]
The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value."[363][364]
The central tendency of a distribution is typically contrasted with its dispersion or variability; dispersion and central tendency are the most commonly characterized properties of distributions. Analysis may judge whether data has a strong or a weak central tendency based on its dispersion.
Is a measure of the force amplification achieved by using a tool, mechanical device or machine system. The device trades off input forces against movement to obtain a desired amplification in the output force. The model for this is the law of the lever. Machine components designed to manage forces and movement in this way are called mechanisms.[365]
An ideal mechanism transmits power without adding to or subtracting from it. This means the ideal mechanism does not include a power source, is frictionless, and is constructed from rigid bodies that do not deflect or wear. The performance of a real system relative to this ideal is expressed in terms of efficiency factors that take into account departures from the ideal.
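For an ideal lever, the law of the lever gives a mechanical advantage equal to the ratio of the effort arm to the load arm. A minimal sketch, with assumed arm lengths and input force:

```python
# Assumed ideal (frictionless, rigid) lever
effort_arm, load_arm = 0.8, 0.2  # distances from the fulcrum (m)
effort_force = 50.0              # applied input force (N)

mechanical_advantage = effort_arm / load_arm       # 4.0 for these arms
load_force = effort_force * mechanical_advantage   # output force (N)

print(mechanical_advantage, load_force)            # 4.0, 200.0
```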
Is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations.
Is a wave that is an oscillation of matter, and therefore transfers energy through a medium.[367] While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves transport energy. This energy propagates in the same direction as the wave. Any kind of wave (mechanical or electromagnetic) has a certain energy. Mechanical waves can be produced only in media which possess elasticity and inertia.
Is the area of physics concerned with the motions of physical objects, more specifically the relationships among force, matter, and motion.[368] Forces applied to objects result in displacements, or changes of an object's position relative to its environment.
This branch of physics has its origins in Ancient Greece with the writings of Aristotle and Archimedes[369][370][371] (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics.
It is a branch of classical physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light.
It can also be defined as a branch of science which deals with the motion of, and forces on, bodies not in the quantum realm; phenomena at that scale are instead described by quantum theory.
Is a device that transforms input forces and movement into a desired set of output forces and movement. Mechanisms generally consist of moving components that may include gears and gear trains, belts and chain drives, cams and followers, and linkages, as well as friction devices such as brakes and clutches.
In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of a "typical" value. Median income, for example, may be a better way to suggest what a "typical" income is, because income distribution can be very skewed. The median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median is not an arbitrarily large or small result.
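The resistance to outliers described above is easy to demonstrate numerically; in this sketch (arbitrary data), one extreme value shifts the mean dramatically while barely moving the median:

```python
import statistics

incomes = [30_000, 35_000, 40_000, 45_000, 50_000]
print(statistics.mean(incomes), statistics.median(incomes))  # 40000, 40000

incomes.append(10_000_000)  # a single extreme outlier
print(statistics.mean(incomes), statistics.median(incomes))  # 1700000, 42500.0
```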
Melting, or fusion, is a physical process that results in the phase transition of a substance from a solid to a liquid. This occurs when the internal energy of the solid increases, typically by the application of heat or pressure, which increases the substance's temperature to the melting point. At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid.
The melting point (or, rarely, liquefaction point) of a substance is the temperature at which it changes state from solid to liquid. At the melting point the solid and liquid phase exist in equilibrium. The melting point of a substance depends on pressure and is usually specified at a standard pressure such as 1 atmosphere or 100 kPa.
When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of substances to supercool, the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, in fact the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice, that is, the melting point."[372]
In particle physics, mesons are hadronic subatomic particles composed of an equal number of quarks and antiquarks, usually one of each, bound together by strong interactions. Because mesons are composed of quark subparticles, they have a meaningful physical size, a diameter of roughly one femtometer (1×10⁻¹⁵ m),[373] which is about 0.6 times the size of a proton or neutron. All mesons are unstable, with the longest-lived lasting for only a few hundredths of a microsecond. Heavier mesons decay to lighter mesons and ultimately to stable electrons, neutrinos and photons.
In statistics, the mid-range or mid-extreme of a set of data values is the arithmetic mean of the maximum and minimum values in the set:
[math]\displaystyle{ M=\frac{\max x + \min x}{2}. }[/math]
The mid-range is closely related to the range, a measure of statistical dispersion defined as the difference between maximum and minimum values.
The two measures are complementary in the sense that if one knows the mid-range and the range, one can find the sample maximum and minimum values.
The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, it is one of the least efficient and least robust statistics. However, it finds some use in special cases: it is the maximally efficient estimator for the center of a uniform distribution, trimmed mid-ranges address robustness, and as an L-estimator, it is simple to understand and compute.
In statistics, the midhinge is the average of the first and third quartiles, [math]\displaystyle{ MH = \frac{Q_1 + Q_3}{2} }[/math]. The midhinge is related to the interquartile range (IQR), the difference of the third and first quartiles (i.e. [math]\displaystyle{ IQR = Q_3 - Q_1 }[/math]), which is a measure of statistical dispersion. The two are complementary in the sense that if one knows the midhinge and the IQR, one can find the first and third quartiles.
The use of the term "hinge" for the lower or upper quartiles derives from John Tukey's work on exploratory data analysis in the late 1970s,[380] and "midhinge" is a fairly modern term dating from around that time. The midhinge is slightly simpler to calculate than the trimean ([math]\displaystyle{ TM }[/math]), which originated in the same context and equals the average of the median ([math]\displaystyle{ \tilde{X} = Q_2 = P_{50} }[/math]) and the midhinge.
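A minimal sketch computing the mid-range, the midhinge, and the IQR; quartiles here come from the standard library's statistics.quantiles (its default "exclusive" convention), and the data are arbitrary:

```python
import statistics

data = [1, 3, 5, 7, 9, 11, 13, 15]

midrange = (max(data) + min(data)) / 2        # (15 + 1) / 2 = 8.0

q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
midhinge = (q1 + q3) / 2                      # mean of first and third quartiles
iqr = q3 - q1                                 # interquartile range

print(midrange, midhinge, iqr)
```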
Mining in the engineering discipline is the extraction of minerals from underneath, above or on the ground. Mining engineering is associated with many other disciplines, such as mineral processing, exploration, excavation, geology, metallurgy, geotechnical engineering and surveying. A mining engineer may manage any phase of mining operations, from exploration and discovery of the mineral resources, through feasibility study, mine design, development of plans, production and operations, to mine closure.
Miller indices
Miller indices form a notation system in crystallography for planes in crystal (Bravais) lattices.
In particular, a family of lattice planes is determined by three integers h, k, and ℓ, the Miller indices. They are written (hkℓ), and denote the family of planes orthogonal to [math]\displaystyle{ h\mathbf{b_1} + k\mathbf{b_2} + \ell\mathbf{b_3} }[/math], where [math]\displaystyle{ \mathbf{b_i} }[/math] are the basis of the reciprocal lattice vectors (note that the plane is not always orthogonal to the linear combination of direct lattice vectors [math]\displaystyle{ h\mathbf{a_1} + k\mathbf{a_2} + \ell\mathbf{a_3} }[/math] because the lattice vectors need not be mutually orthogonal). By convention, negative integers are written with a bar, as in [math]\displaystyle{ \overline{3} }[/math] for −3. The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1. Miller indices are also used to designate reflections in X-ray crystallography. In this case the integers are not necessarily in lowest terms, and can be thought of as corresponding to planes spaced such that the reflections from adjacent planes would have a phase difference of exactly one wavelength (2π), regardless of whether there are atoms on all these planes or not.
There are also several related notations:[381]
the notation {hkℓ} denotes the set of all planes that are equivalent to (hkℓ) by the symmetry of the lattice.
In the context of crystal directions (not planes), the corresponding notations are:
[hkℓ], with square instead of round brackets, denotes a direction in the basis of the direct lattice vectors instead of the reciprocal lattice; and
similarly, the notation <hkℓ> denotes the set of all directions that are equivalent to [hkℓ] by symmetry.
Is a robot that is capable of moving around in its surroundings (locomotion).[382] Mobile robotics is usually considered to be a subfield of robotics and information engineering.[383]
Mobile robots have the capability to move around in their environment and are not fixed to one physical location. Mobile robots can be "autonomous" (AMR - autonomous mobile robot) which means they are capable of navigating an uncontrolled environment without the need for physical or electro-mechanical guidance devices.[384] Alternatively, mobile robots can rely on guidance devices that allow them to travel a pre-defined navigation route in relatively controlled space.[385] By contrast, industrial robots are usually more-or-less stationary, consisting of a jointed arm (multi-linked manipulator) and gripper assembly (or end effector), attached to a fixed surface. The jointed arm is controlled by linear actuators, servo motors, or stepper motors.
The mode is the value that appears most often in a set of data values.[386] If X is a discrete random variable, the mode is the value x (i.e., X = x) at which the probability mass function takes its maximum value. In other words, it is the value that is most likely to be sampled.
Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions.
Modulus of elasticity
An elastic modulus (also known as modulus of elasticity) is a quantity that measures an object or substance's resistance to being deformed elastically (i.e., non-permanently) when a stress is applied to it. The elastic modulus of an object is defined as the slope of its stress–strain curve in the elastic deformation region:[387] A stiffer material will have a higher elastic modulus. An elastic modulus has the form:
[math]\displaystyle{ \delta \ \stackrel{\text{def}}{=}\ \frac{\text{stress}}{\text{strain}} }[/math]
where stress is the force causing the deformation divided by the area to which the force is applied and strain is the ratio of the change in some parameter caused by the deformation to the original value of the parameter. Since strain is a dimensionless quantity, the units of [math]\displaystyle{ \delta }[/math] will be the same as the units of stress.[388]
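A minimal sketch of this slope definition, estimating Young's modulus from a single measurement assumed to lie in the linear-elastic region (all values are made up, roughly representative of steel):

```python
# Assumed tensile-test measurement in the linear-elastic region
force = 50_000.0      # applied axial force (N)
area = 2.5e-4         # cross-sectional area (m²)
length0 = 2.0         # original gauge length (m)
elongation = 0.002    # measured change in length (m)

stress = force / area              # Pa
strain = elongation / length0      # dimensionless
young_modulus = stress / strain    # slope of the stress–strain line (Pa)

print(young_modulus / 1e9, "GPa")  # 200.0 GPa
```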
Is a measure of the number of moles of solute in a solution corresponding to 1 kg or 1000 g of solvent. This contrasts with the definition of molarity which is based on a specified volume of solution.
A commonly used unit for molality in chemistry is mol/kg. A solution of concentration 1 mol/kg is also sometimes denoted as 1 molal. The unit mol/kg requires that molar mass be expressed in kg/mol, instead of the usual g/mol or kg/kmol.
Is a measurement of how strongly a chemical species attenuates light at a given wavelength. It is an intrinsic property of the species. The SI unit of molar attenuation coefficient is the square metre per mole (m²/mol), but in practice, quantities are usually expressed in terms of M⁻¹⋅cm⁻¹ or L⋅mol⁻¹⋅cm⁻¹ (the latter two units are both equal to 0.1 m²/mol). In older literature, the cm²/mol is sometimes used; 1 M⁻¹⋅cm⁻¹ equals 1000 cm²/mol. The molar attenuation coefficient is also known as the molar extinction coefficient and molar absorptivity, but the use of these alternative terms has been discouraged by the IUPAC.[389][390]
Molar concentration (also called molarity, amount concentration or substance concentration) is a measure of the concentration of a chemical species, in particular of a solute in a solution, in terms of amount of substance per unit volume of solution. In chemistry, the most commonly used unit for molarity is the number of moles per liter, having the unit symbol mol/L or mol⋅dm⁻³ in SI unit. A solution with a concentration of 1 mol/L is said to be 1 molar, commonly designated as 1 M. To avoid confusion with the SI prefix mega, which has the same abbreviation, small caps ᴍ or italicized M are also used in journals and textbooks.[391]
In chemistry, the molar mass of a chemical compound is defined as the mass of a sample of that compound divided by the amount of substance in that sample, measured in moles.[392] It is the mass of 1 mole of the substance or 6.022×10²³ particles, expressed in grams. The molar mass is a bulk, not molecular, property of a substance. The molar mass is an average of many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The molar mass is appropriate for converting between the mass of a substance and the amount of a substance for bulk quantities.
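A minimal sketch of such a conversion for water, using rounded standard atomic weights (exact tabulated values differ slightly):

```python
# Rounded standard atomic weights (g/mol)
atomic_weight = {"H": 1.008, "O": 15.999}

molar_mass_water = 2 * atomic_weight["H"] + atomic_weight["O"]  # g/mol
print(molar_mass_water)            # ≈ 18.015 g/mol

mass_g = 100.0                     # assumed sample mass of water (g)
moles = mass_g / molar_mass_water  # amount of substance (mol)
print(moles)                       # ≈ 5.55 mol
```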
Molding (American English) or moulding (British and Commonwealth English; see spelling differences) is the process of manufacturing by shaping liquid or pliable raw material using a rigid frame called a mold or matrix.[393] This itself may have been made using a pattern or model of the final object.
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds.[394][395][396][397][398] Molecules are distinguished from ions by their lack of electrical charge.
In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and molecule is often used when referring to polyatomic ions.
In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This violates the definition that a molecule contain two or more atoms, since the noble gases are individual atoms.[399]
A molecule may be homonuclear, that is, it consists of atoms of one chemical element, as with two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, as with water (two hydrogen atoms and one oxygen atom; H2O).
Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules.[400]
The moment of inertia, otherwise known as the mass moment of inertia, angular mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is a quantity that determines the torque needed for a desired angular acceleration about a rotational axis, akin to how mass determines the force needed for a desired acceleration. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rate of rotation.
Is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements.
(MDO), is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO).
MDO allows designers to incorporate all relevant disciplines simultaneously. The optimum of the simultaneous problem is superior to the design found by optimizing each discipline sequentially, since it can exploit the interactions between the disciplines. However, including all disciplines simultaneously significantly increases the complexity of the problem.
Mutual inductance
Is the ratio of the electromotive force induced in one loop or coil to the rate of change of current in another loop or coil. Mutual inductance is given the symbol M.
The muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with an electric charge of −1 e and a spin of 1/2, but with a much greater mass. It is classified as a lepton. As with other leptons, the muon is not known to have any sub-structure – that is, it is not thought to be composed of any simpler particles.
The muon is an unstable subatomic particle with a mean lifetime of 2.2 μs, much longer than many other subatomic particles. As with the decay of the non-elementary neutron (with a lifetime around 15 minutes), muon decay is slow (by subatomic standards) because the decay is mediated only by the weak interaction (rather than the more powerful strong interaction or electromagnetic interaction), and because the mass difference between the muon and the set of its decay products is small, providing few kinetic degrees of freedom for decay. Muon decay almost always produces at least three particles, which must include an electron of the same charge as the muon and two types of neutrinos.
Is the practice of engineering on the nanoscale. It derives its name from the nanometre, a unit of measurement equalling one billionth of a meter. Nanoengineering is largely a synonym for nanotechnology, but emphasizes the engineering rather than the pure science aspects of the field.
A neutrino (denoted by the Greek letter ν) is a fermion (an elementary particle with spin of 1/2) that interacts only via the weak subatomic force and gravity.[402][403] The neutrino is so named because it is electrically neutral and because its rest mass is so small (-ino) that it was long thought to be zero. The mass of the neutrino is much smaller than that of the other known elementary particles.[404] The weak force has a very short range, the gravitational interaction is extremely weak, and neutrinos do not participate in the strong interaction.[405] Thus, neutrinos typically pass through normal matter unimpeded and undetected.[406][403]
Is a fluid in which the viscous stresses arising from its flow, at every point, are linearly[407] correlated to the local strain rate—the rate of change of its deformation over time.[408][409][410] That is equivalent to saying those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. More precisely, a fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (that is, its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively.
In direct-current circuit theory, Norton's theorem (aka Mayer–Norton theorem) is a simplification that can be applied to networks made of linear time-invariant resistances, voltage sources, and current sources. At a pair of terminals of the network, it can be replaced by a current source and a single resistor in parallel. For alternating current (AC) systems the theorem can be applied to reactive impedances as well as resistances.
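As a toy illustration (all component values assumed): for a source V in series with resistance R1, feeding terminals that are shunted by R2, the Norton current is the short-circuit terminal current V/R1 and the Norton resistance is R1 in parallel with R2:

```python
# Assumed example network: V in series with R1; R2 across the output terminals
V, R1, R2 = 10.0, 100.0, 300.0    # volts and ohms

i_norton = V / R1                 # short-circuit current at the terminals (A)
r_norton = R1 * R2 / (R1 + R2)    # with the source zeroed: R1 parallel R2 (ohms)

print(i_norton, r_norton)         # 0.1 A, 75.0 ohms
```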
Is a device designed to control the direction or characteristics of a fluid flow (especially to increase velocity) as it exits (or enters) an enclosed chamber or pipe. A nozzle is often a pipe or tube of varying cross-sectional area, and it can be used to direct or modify the flow of a fluid (liquid or gas). Nozzles are frequently used to control the rate of flow, speed, direction, mass, shape, and/or the pressure of the stream that emerges from them. In a nozzle, the velocity of fluid increases at the expense of its pressure energy.
Is a reaction in which two or more atomic nuclei are combined to form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or the absorption of energy. This difference in mass arises due to the difference in atomic binding energy between the nuclei before and after the reaction. Fusion is the process that powers active or main-sequence stars and other high-magnitude stars, where large amounts of energy are released.
In mathematics, parity is the property of an integer of whether it is even or odd. An integer's parity is even if it is divisible by two with no remainder and odd if its remainder is 1.[411] For example, −4, 0, 82, and 178 are even because there is no remainder when dividing them by 2. By contrast, −3, 5, 7, and 21 are odd numbers as they leave a remainder of 1 when divided by 2.
In quantum mechanics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):
[math]\displaystyle{ \mathbf{P}: \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} -x \\ -y \\ -z \end{pmatrix} }[/math]
It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics. In interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180°-rotation.
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
Paraffin wax
A hydrocarbon compound that is solid at room temperature.
Is a form of magnetism whereby some materials are weakly attracted by an externally applied magnetic field, and form internal, induced magnetic fields in the direction of the applied magnetic field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field.[412] Paramagnetic materials include most chemical elements and some compounds;[413] they have a relative magnetic permeability slightly greater than 1 (i.e., a small positive magnetic susceptibility) and hence are attracted to magnetic fields. The magnetic moment induced by the applied field is linear in the field strength and rather weak. It typically requires a sensitive analytical balance to detect the effect and modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer.
Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.[415]
The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling.[416]
Particle physics (also known as high energy physics) is a branch of physics that studies the nature of the particles that constitute matter and radiation. Although the word particle can refer to various types of very small objects (e.g. protons, gas particles, or even household dust), particle physics usually investigates the irreducibly smallest detectable particles and the fundamental interactions necessary to explain their behaviour. In current understanding, these elementary particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. Thus, modern particle physics generally investigates the Standard Model and its various possible extensions, e.g. to the newest "known" particle, the Higgs boson, or even to the oldest known force field, gravity.[417][418]
Pascal's law (also Pascal's principle[419][420][421] or the principle of transmission of fluid-pressure) is a principle in fluid mechanics that states that a pressure change occurring anywhere in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere.[422] The law was established by French mathematician Blaise Pascal[30] in 1647–48.[423]
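Pascal's law underlies the hydraulic press: the same pressure acts on two pistons of different areas, so the forces scale with the areas. A minimal sketch with assumed piston areas and input force:

```python
# Assumed hydraulic-press geometry
f_small = 100.0  # force on the small piston (N)
a_small = 0.001  # small piston area (m²)
a_large = 0.1    # large piston area (m²)

pressure = f_small / a_small  # transmitted unchanged through the fluid (Pa)
f_large = pressure * a_large  # force delivered by the large piston (N)

print(pressure, f_large)      # 100000.0 Pa, 10000.0 N
```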
Is a weight suspended from a pivot so that it can swing freely.[424] When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum and also to a slight degree on the amplitude, the width of the pendulum's swing.
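For small swings, the period is approximately T = 2π√(L/g), independent of amplitude to first order; a minimal sketch with an assumed pendulum length:

```python
import math

g = 9.81      # gravitational acceleration (m/s²)
length = 1.0  # assumed pendulum length (m)

period = 2 * math.pi * math.sqrt(length / g)  # small-angle approximation
print(period)  # ≈ 2.006 s for a 1 m pendulum
```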
Is a field of engineering concerned with the activities related to the production of hydrocarbons, which can be either crude oil or natural gas.[425] Exploration and production are deemed to fall within the upstream sector of the oil and gas industry. Exploration, by earth scientists, and petroleum engineering are the oil and gas industry's two main subsurface disciplines, which focus on maximizing economic recovery of hydrocarbons from subsurface reservoirs. Petroleum geology and geophysics focus on provision of a static description of the hydrocarbon reservoir rock, while petroleum engineering focuses on estimation of the recoverable volume of this resource using a detailed understanding of the physical behavior of oil, water and gas within porous rock at very high pressure.
In the physical sciences, a phase is a region of space (a thermodynamic system), throughout which all physical properties of a material are essentially uniform.[426][427]:86[428]:3 Examples of physical properties include density, index of refraction, magnetization and chemical composition. A simple description is that a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is another separate phase. (See state of matter § Glass)
In physics and mathematics, the phase of a periodic function [math]\displaystyle{ F }[/math] of some real variable [math]\displaystyle{ t }[/math] (such as time) is an angle-like quantity representing the fraction of the cycle covered up to [math]\displaystyle{ t }[/math]. It is denoted [math]\displaystyle{ \phi(t) }[/math] and expressed in such a scale that it varies by one full turn as the variable [math]\displaystyle{ t }[/math] goes through each period (and [math]\displaystyle{ F(t) }[/math] goes through each complete cycle). It may be measured in any angular unit such as degrees or radians, thus increasing by 360° or [math]\displaystyle{ 2\pi }[/math] as the variable [math]\displaystyle{ t }[/math] completes a full period.[429]
A physical quantity is a property of a material or system that can be quantified by measurement. A physical quantity can be expressed as a value, which is the algebraic multiplication of a numerical value and a unit. For example, the physical quantity mass can be quantified as n kg, where n is the numerical value and kg is the unit. Every physical quantity has at least two characteristics: a numerical magnitude and the unit in which it is measured.
The Planck constant, or Planck's constant, is a fundamental physical constant denoted [math]\displaystyle{ h }[/math], and is of fundamental importance in quantum mechanics. A photon's energy is equal to its frequency multiplied by the Planck constant. Due to mass–energy equivalence, the Planck constant also relates mass to frequency.
In metrology it is used, together with other constants, to define the kilogram, an SI unit.[437] The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value [math]\displaystyle{ h }[/math] = 6.62607015×10⁻³⁴ J⋅s.[438][439]
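A minimal sketch of the relation E = hf, evaluated for an assumed wavelength of 550 nm (green light):

```python
h = 6.62607015e-34   # Planck constant (J·s), exact by definition
c = 299_792_458.0    # speed of light in vacuum (m/s), exact by definition

wavelength = 550e-9  # assumed wavelength (m)
frequency = c / wavelength
energy = h * frequency  # photon energy E = h·f (J)

print(frequency, energy)  # ≈ 5.45e14 Hz, ≈ 3.61e-19 J
```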
In physics and materials science, plasticity, also known as plastic deformation, is the ability of a solid material to undergo permanent deformation, a non-reversible change of shape in response to applied forces.[442][443] For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding.
In statistics, point estimation involves the use of sample data to calculate a single value (known as a point estimate since it identifies a point in some parameter space) which is to serve as a "best guess" or "best estimate" of an unknown population parameter (for example, the population mean). More formally, it is the application of a point estimator to the data to obtain a point estimate.
Point estimation can be contrasted with interval estimation: such interval estimates are typically either confidence intervals, in the case of frequentist inference, or credible intervals, in the case of Bayesian inference. More generally, a point estimator can be contrasted with a set estimator. Examples are given by confidence sets or credible sets. A point estimator can also be contrasted with a distribution estimator. Examples are given by confidence distributions, randomized estimators, and Bayesian posteriors.
Polyphase system
An electrical system that uses a set of alternating currents at different phases.
In physics, power is the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second. In older works, power is sometimes called activity.[445][446][447] Power is a scalar quantity.
In electrical engineering, the power factor of an AC power system is defined as the ratio of the real power absorbed by the load to the apparent power flowing in the circuit, and is a dimensionless number in the closed interval of −1 to 1. A power factor of less than one indicates the voltage and current are not in phase, reducing the average product of the two. Real power is the average of the instantaneous product of voltage and current and represents the capacity of the electricity for performing work. Apparent power is the product of RMS current and voltage. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power may be greater than the real power. A negative power factor occurs when the device (which is normally the load) generates power, which then flows back towards the source.
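For sinusoidal voltage and current separated by a phase angle φ, the power factor equals cos φ = P/S; a minimal sketch with assumed RMS values and angle:

```python
import math

v_rms, i_rms = 230.0, 5.0        # assumed RMS voltage (V) and current (A)
phi = math.radians(30.0)         # assumed phase angle between voltage and current

apparent = v_rms * i_rms         # apparent power S (VA)
real = apparent * math.cos(phi)  # real power P (W) for sinusoidal waveforms
power_factor = real / apparent   # equals cos(phi) here

print(apparent, round(real, 1), round(power_factor, 3))  # 1150.0 995.8 0.866
```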
Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed.[448]:445 Gauge pressure (also spelled gage pressure)[lower-alpha 6] is the pressure relative to the ambient pressure.
Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m²); similarly, the pound-force per square inch (psi) is the traditional unit of pressure in the imperial and U.S. customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of column of a particular fluid in a manometer.
Is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty.[note 6][449][450] The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment.[451][452] It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space).[360]
For instance, if X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 (1 in 2 or 1/2) for X = heads, and 0.5 for X = tails (assuming that the coin is fair). Examples of random phenomena include the weather condition in a future date, the height of a randomly selected person, the fraction of male students in a school, the results of a survey to be conducted, etc.[453]
Is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion.
Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data.[454] Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.[455]
Is a wheel on an axle or shaft that is designed to support movement and change of direction of a taut cable or belt, or transfer of power between the shaft and cable or belt. In the case of a pulley supported by a frame or shell that does not transfer power to a shaft, but is used to guide the cable or exert a force, the supporting shell is called a block, and the pulley may be called a sheave.
A pulley may have a groove or grooves between flanges around its circumference to locate the cable or belt. The drive element of a pulley system can be a rope, cable, belt, or chain.
Is a device that moves fluids (liquids or gases), or sometimes slurries, by mechanical action, typically converted from electrical energy into hydraulic energy. Pumps can be classified into three major groups according to the method they use to move the fluid: direct lift, displacement, and gravity pumps.[456]
Pumps operate by some mechanism (typically reciprocating or rotary), and consume energy to perform mechanical work moving the fluid. Pumps operate via many energy sources, including manual operation, electricity, engines, or wind power, and come in many sizes, from microscopic for use in medical applications, to large industrial pumps.
Relative density, or specific gravity,[459][460] is the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity for liquids is nearly always measured with respect to water at its densest (at 4 °C or 39.2 °F); for gases, the reference is air at room temperature (20 °C or 68 °F). The term "relative density" is often preferred in scientific usage.
The relative velocity [math]\displaystyle{ \vec{v}_{B\mid A} }[/math] (also [math]\displaystyle{ \vec{v}_{BA} }[/math] or [math]\displaystyle{ \vec{v}_{B \operatorname{rel} A} }[/math]) is the velocity of an object or observer B in the rest frame of another object or observer A.
Is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time.[461] Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
Electrical resistivity (also called specific electrical resistance or volume resistivity) and its inverse, electrical conductivity, are fundamental properties of a material that quantify how strongly it resists or conducts electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-meter (Ω⋅m).[462][463][464] For example, if a 1 m × 1 m × 1 m solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
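The cube example above generalizes to R = ρL/A for a uniform conductor; a minimal sketch with illustrative values (the copper resistivity used is a commonly quoted room-temperature figure):

```python
# Resistance of a uniform conductor from its resistivity: R = rho * L / A.
def resistance_ohm(resistivity_ohm_m: float, length_m: float, area_m2: float) -> float:
    return resistivity_ohm_m * length_m / area_m2

# 10 m of copper wire (rho ~ 1.68e-8 ohm*m) with a 1 mm^2 cross-section:
print(f"{resistance_ohm(1.68e-8, 10.0, 1e-6):.3f} ohm")  # ~0.168 ohm
```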
Is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators.
Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity.
The Reynolds number (Re) helps predict flow patterns in different fluid flow situations. At low Reynolds numbers, flows tend to be dominated by laminar (sheet-like) flow, while at high Reynolds numbers flows tend to be turbulent. The turbulence results from differences in the fluid's speed and direction, which may sometimes intersect or even move counter to the overall direction of the flow (eddy currents). These eddy currents begin to churn the flow, using up energy in the process, which for liquids increases the chances of cavitation. Reynolds numbers are an important dimensionless quantity in fluid mechanics.
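The entry above does not state the defining formula; the Reynolds number is commonly defined as Re = ρuL/μ, where ρ is the fluid density, u a characteristic flow speed, L a characteristic length, and μ the dynamic viscosity. A sketch under that standard definition, with illustrative pipe-flow values:

```python
# Reynolds number: Re = rho * u * L / mu (dimensionless).
def reynolds_number(density_kg_m3: float, velocity_m_s: float,
                    length_m: float, dynamic_viscosity_pa_s: float) -> float:
    return density_kg_m3 * velocity_m_s * length_m / dynamic_viscosity_pa_s

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) at 1 m/s in a 25 mm diameter pipe:
re = reynolds_number(1000.0, 1.0, 0.025, 1e-3)
print(f"Re = {re:.0f}")  # ~25000, well into the turbulent regime for pipe flow
```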
Is the study of the flow of matter, primarily in a liquid or gas state, but also as "soft solids" or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. Rheology is a branch of physics, and it is the science that deals with the deformation and flow of materials, both solids and liquids.[465]
In physics, a rigid body (also known as a rigid object[466]) is a solid body in which deformation is zero or so small it can be neglected. The distance between any two given points on a rigid body remains constant in time regardless of external forces or moments exerted on it. A rigid body is usually considered as a continuous distribution of mass. In the study of special relativity, a perfectly rigid body does not exist; and objects can only be assumed to be rigid if they are not moving near the speed of light. In quantum mechanics, a rigid body is usually thought of as a collection of point masses. For instance, molecules (consisting of the point masses: electrons and nuclei) are often seen as rigid bodies (see classification of molecules as rigid rotors).
A development project conducted by NASA to create humanoid robots capable of using space tools and working in similar environments to suited astronauts.
Robotic surgery refers to surgical procedures performed using robotic systems. Robotically-assisted surgery was developed to try to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.
In the case of robotically-assisted minimally-invasive surgery, instead of directly moving the instruments, the surgeon uses one of two methods to control the instruments: a direct telemanipulator or computer control. A telemanipulator is a remote manipulator that allows the surgeon to perform the normal movements associated with the surgery. The robotic arms carry out those movements using end-effectors and manipulators to perform the actual surgery. In computer-controlled systems, the surgeon uses a computer to control the robotic arms and their end-effectors, though these systems can also still use telemanipulators for their input. One advantage of using the computerized method is that the surgeon does not have to be present, leading to the possibility for remote surgery.
In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared speed. The RMS speed of an ideal gas is calculated using the following equation:
[math]\displaystyle{ v_\text{rms} = \sqrt{\frac{3RT}{M}} }[/math]
where R represents the gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. In physics, speed is defined as the scalar magnitude of velocity. For a stationary gas, the average speed of its molecules can be on the order of thousands of km/h, even though the average velocity of its molecules is zero.
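A short numeric check of the formula above (the nitrogen molar mass is an illustrative rounded value):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rms_speed(temperature_k: float, molar_mass_kg_mol: float) -> float:
    # v_rms = sqrt(3 * R * T / M), as in the equation above
    return math.sqrt(3 * R * temperature_k / molar_mass_kg_mol)

# Nitrogen (M ~ 0.028 kg/mol) at 300 K:
print(f"{rms_speed(300.0, 0.028):.0f} m/s")  # ~517 m/s, i.e. ~1860 km/h
```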
Rotational speed (or speed of revolution) of an object rotating around an axis is the number of turns of the object divided by time, specified as revolutions per minute (rpm), cycles per second (cps), radians per second (rad/s), etc.[471]
The symbol for rotational speed is [math]\displaystyle{ \omega_\text{cyc} }[/math][citation needed] (the Greek lowercase letter "omega").
Tangential speed v, rotational speed [math]\displaystyle{ \omega_\text{cyc} }[/math], and radial distance r are related by the following equation:[472]
[math]\displaystyle{ v = 2\pi r\omega_\text{cyc} }[/math]
[math]\displaystyle{ v = r\omega_\text{rad} }[/math]
An algebraic rearrangement of this equation allows us to solve for rotational speed:
[math]\displaystyle{ \omega_\text{cyc} = \frac{v}{2\pi r} }[/math]
Thus, the tangential speed will be directly proportional to r when all parts of a system simultaneously have the same ω, as for a wheel, disk, or rigid wand. The direct proportionality of v to r is not valid for the planets, because the planets have different rotational speeds (ω).
Rotational speed can measure, for example, how fast a motor is running. Rotational speed and angular speed are sometimes used as synonyms, but typically they are measured with different units. Angular speed gives the change in angle per unit time, which is measured in radians per second in the SI system. Since there are 2π radians per cycle, or 360 degrees per cycle, we can convert angular speed to rotational speed by
[math]\displaystyle{ \omega_\text{cyc} = \frac{\omega_\text{rad}}{2\pi} = \frac{\omega_\text{deg}}{360} }[/math]
where
[math]\displaystyle{ \omega_\text{cyc}\, }[/math] is rotational speed in cycles per second
[math]\displaystyle{ \omega_\text{rad}\, }[/math] is angular speed in radians per second
[math]\displaystyle{ \omega_\text{deg}\, }[/math] is angular speed in degrees per second
For example, a stepper motor might turn exactly one complete revolution each second.
Its angular speed is 360 degrees per second (360°/s), or 2π radians per second (2π rad/s), while the rotational speed is 60 rpm.
Rotational speed is not to be confused with tangential speed, despite some relation between the two concepts. Imagine a rotating merry-go-round. No matter how close or far you stand from the axis of rotation, your rotational speed will remain constant. However, your tangential speed does not remain constant. If you stand two meters from the axis of rotation, your tangential speed will be double what it would be if you were standing only one meter from the axis.
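A small sketch of the conversions and the merry-go-round example above (the 6 rpm figure is invented for illustration):

```python
import math

def rpm_to_rad_per_s(rpm: float) -> float:
    # omega_rad = 2*pi * omega_cyc, with omega_cyc in revolutions per second
    return rpm * 2 * math.pi / 60

def tangential_speed(rpm: float, radius_m: float) -> float:
    # v = r * omega_rad, from the relations above
    return rpm_to_rad_per_s(rpm) * radius_m

# Merry-go-round at 6 rpm: doubling the radius doubles the tangential speed,
# while the rotational speed stays the same everywhere on the platform.
print(f"{tangential_speed(6.0, 1.0):.2f} m/s at r = 1 m")  # ~0.63 m/s
print(f"{tangential_speed(6.0, 2.0):.2f} m/s at r = 2 m")  # ~1.26 m/s
```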
S
Safe failure fraction (SFF)
A term used in functional safety for the proportion of failures that are either non-hazardous or detected automatically. The opposite of SFF is the proportion of undetected, hazardous failures.[473]
A safety data sheet (SDS),[474] material safety data sheet (MSDS), or product safety data sheet (PSDS) are documents that list information relating to occupational safety and health for the use of various substances and products. SDSs are a widely used system for cataloguing information on chemicals, chemical compounds, and chemical mixtures. SDS information may include instructions for the safe use and potential hazards associated with a particular material or product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized.
Sanitary engineering, also known as public health engineering or wastewater engineering, is the application of engineering methods to improve sanitation of human communities, primarily by providing the removal and disposal of human waste, as well as the supply of safe potable water.
Saturated compound
In chemistry, a saturated compound is a chemical compound (or ion) that resists addition reactions, such as hydrogenation, oxidative addition, and binding of a Lewis base. The term is used in many contexts and for many classes of chemical compounds. Overall, saturated compounds are less reactive than unsaturated compounds. The term is derived from the Latin word saturare, meaning 'to fill'.[475]
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra[476][477][478] (or more generally, a module in abstract algebra[479][480]). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector—without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and is to be distinguished from inner product of two vectors (where the product is a scalar).
A screw is a mechanism that converts rotational motion to linear motion, and a torque (rotational force) to a linear force.[481] It is one of the six classical simple machines. The most common form consists of a cylindrical shaft with helical grooves or ridges called threads around the outside.[482][483] The screw passes through a hole in another object or medium, with threads on the inside of the hole that mesh with the screw's threads. When the shaft of the screw is rotated relative to the stationary threads, the screw moves along its axis relative to the medium surrounding it; for example rotating a wood screw forces it into wood. In screw mechanisms, either the screw shaft can rotate through a threaded hole in a stationary object, or a threaded collar such as a nut can rotate around a stationary screw shaft.[484][485] Geometrically, a screw can be viewed as a narrow inclined plane wrapped around a cylinder.[481]
Series circuit
An electrical circuit in which the same current passes through each component, with only one path.
Servo
A motor that moves to and maintains a set position under command, rather than continuously moving.
Is the strength of a material or component against the type of yield or structural failure that occurs when the material or component fails in shear. A shear load is a force that tends to produce a sliding failure on a material along a plane that is parallel to the direction of the force. When paper is cut with scissors, the paper fails in shear.
In structural and mechanical engineering, the shear strength of a component is important for designing the dimensions and materials to be used for the manufacture or construction of the component (e.g. beams, plates, or bolts). In a reinforced concrete beam, the main purpose of reinforcing bar (rebar) stirrups is to increase the shear strength.
Shear stress, often denoted by τ (Greek: tau), is the component of stress coplanar with a material cross section. It arises from the shear force, the component of force vector parallel to the material cross section. Normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts.
Shortwave radiation (SW) is radiant energy with wavelengths in the visible (VIS), near-ultraviolet (UV), and near-infrared (NIR) spectra.
There is no standard cut-off for the near-infrared range; therefore, the shortwave radiation range is also variously defined. It may be broadly defined to include all radiation with a wavelength between 0.1 μm and 5.0 μm, or narrowly defined to include only radiation between 0.2 μm and 3.0 μm.
There is little radiation flux (in terms of W/m2) to the Earth's surface below 0.2 μm or above 3.0 μm, although photon flux remains significant as far as 6.0 μm, compared to shorter-wavelength fluxes. UV-C radiation spans from 0.1 μm to 0.28 μm, UV-B from 0.28 μm to 0.315 μm, UV-A from 0.315 μm to 0.4 μm, the visible spectrum from 0.4 μm to 0.7 μm, and NIR arguably from 0.7 μm to 5.0 μm, beyond which the infrared is thermal.[488]
Shortwave radiation is distinguished from longwave radiation. Downward shortwave radiation is sensitive to solar zenith angle and cloud cover.[489]
SI units
The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system. It is the only system of measurement with an official status in nearly every country in the world. It comprises a coherent system of units of measurement starting with seven base units, which are the second (the unit of time with the symbol s), metre (length, m), kilogram (mass, kg), ampere (electric current, A), kelvin (thermodynamic temperature, K), mole (amount of substance, mol), and candela (luminous intensity, cd). The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units.[lower-alpha 7] Twenty-two derived units have been provided with special names and symbols.[lower-alpha 8] The seven base units and the 22 derived units with special names and symbols may be used in combination to express other derived units,[lower-alpha 9] which are adopted to facilitate measurement of diverse quantities. The SI also provides twenty prefixes to the unit names and unit symbols that may be used when specifying power-of-ten (i.e. decimal) multiples and sub-multiples of SI units. The SI is intended to be an evolving system; units and prefixes are created and unit definitions are modified through international agreement as the technology of measurement progresses and the precision of measurements improves.
Is an electrical engineering subfield that focuses on analysing, modifying, and synthesizing signals such as sound, images, and scientific measurements.[490] Signal processing techniques can be used to improve transmission, storage efficiency and subjective quality and to also emphasize or detect components of interest in a measured signal.[491]
Is a mechanical device that changes the direction or magnitude of a force.[492] In general, they can be defined as the simplest mechanisms that use mechanical advantage (also called leverage) to multiply force.[493] Usually the term refers to the six classical simple machines that were defined by Renaissance scientists: the lever, the wheel and axle, the pulley, the inclined plane, the wedge, and the screw.[494][495][496]
Also known as mechanics of solids, is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents.
Is a type of alloying that can be used to improve the strength of a pure metal.[328] The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution. The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields. In contrast, alloying beyond the solubility limit can form a second phase, leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds).
Is the property of a solid, liquid or gaseous chemical substance called solute to dissolve in a solid, liquid or gaseous solvent. The solubility of a substance fundamentally depends on the physical and chemical properties of the solute and solvent as well as on temperature, pressure and presence of other chemicals (including changes to the pH) of the solution. The extent of the solubility of a substance in a specific solvent is measured as the saturation concentration, where adding more solute does not increase the concentration of the solution and begins to precipitate the excess amount of solute.
Is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios.
In physics, the special theory of relativity, or special relativity for short, is a scientific theory regarding the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates:[497][498][499] first, that the laws of physics are invariant (identical) in all inertial frames of reference; and second, that the speed of light in vacuum is the same for all observers, regardless of the motion of the light source or the observer.
Spontaneous combustion or spontaneous ignition is a type of combustion which occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self heating which rapidly accelerates to high temperatures) and finally, autoignition.[500]
In physics, a state of matter is one of the distinct forms in which matter can exist. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Many intermediate states are known to exist, such as liquid crystal, and some states only exist under extreme conditions, such as Bose–Einstein condensates, neutron-degenerate matter, and quark–gluon plasma, which only occur, respectively, in situations of extreme cold, extreme density, and extremely high energy. For a complete list of all exotic states of matter, see the list of states of matter.
Is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.[503][504][505] In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.[506]
Steam table
Thermodynamic data table containing steam or water properties.[507]
The Stefan–Boltzmann law describes the power radiated from a black body in terms of its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body across all wavelengths per unit time [math]\displaystyle{ j^{\star} }[/math] (also known as the black-body radiant emittance) is directly proportional to the fourth power of the black body's thermodynamic temperature T:
[math]\displaystyle{ j^{\star} = \sigma T^{4}, }[/math]
where σ is the Stefan–Boltzmann constant. The corresponding radiance (power per unit area per unit solid angle) is
[math]\displaystyle{ L = \frac{j^{\star}}{\pi} = \frac{\sigma}{\pi} T^{4}. }[/math]
A body that does not absorb all incident radiation (sometimes known as a grey body) emits less total energy than a black body and is characterized by an emissivity, [math]\displaystyle{ \varepsilon \lt 1 }[/math]:
[math]\displaystyle{ j^{\star} = \varepsilon\sigma T^{4}. }[/math]
The radiant emittance [math]\displaystyle{ j^{\star} }[/math] has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre, or equivalently, watts per square metre. The SI unit for absolute temperature T is the kelvin. [math]\displaystyle{ \varepsilon }[/math] is the emissivity of the grey body; if it is a perfect blackbody, [math]\displaystyle{ \varepsilon=1 }[/math]. In the still more general (and realistic) case, the emissivity depends on the wavelength, [math]\displaystyle{ \varepsilon=\varepsilon(\lambda) }[/math].
To find the total power radiated from an object, multiply by its surface area, [math]\displaystyle{ A }[/math]:
[math]\displaystyle{ P= A j^{\star} = A \varepsilon\sigma T^{4}. }[/math]
Wavelength- and subwavelength-scale particles,[508]metamaterials,[509] and other nanostructures are not subject to ray-optical limits and may be designed to exceed the Stefan–Boltzmann law.
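A minimal sketch of the grey-body power formula P = AεσT⁴ given above (the area, temperature, and emissivity values are illustrative):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_power(area_m2: float, temperature_k: float, emissivity: float = 1.0) -> float:
    # P = A * epsilon * sigma * T^4, from the formulas above
    return area_m2 * emissivity * SIGMA * temperature_k ** 4

# A 1 m^2 grey body with emissivity 0.9 at 500 K:
print(f"{radiated_power(1.0, 500.0, 0.9):.0f} W")  # ~3189 W
```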
Is a type of parallel manipulator that has six prismatic actuators, commonly hydraulic jacks or electric linear actuators, attached in pairs to three positions on the platform's baseplate, crossing over to three mounting points on a top plate. All 12 connections are made via universal joints. Devices placed on the top plate can be moved in the six degrees of freedom in which it is possible for a freely-suspended body to move: three linear movements x, y, z (lateral, longitudinal, and vertical), and the three rotations (pitch, roll, and yaw).
Is the extent to which an object resists deformation in response to an applied force.[510]
The complementary concept is flexibility or pliability: the more flexible an object is, the less stiff it is.[511]
Refers to the relationship between the quantities of reactants and products before, during, and following chemical reactions.
Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products, leading to the insight that the relations among quantities of reactants and products typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated.
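A worked sketch of the reactant-to-product calculation just described, using methane combustion as an assumed example (the molar masses are rounded illustrative values):

```python
# Balanced equation: CH4 + 2 O2 -> CO2 + 2 H2O, so moles of CO2 produced
# equal moles of CH4 burned (a 1:1 mole ratio).
M_CH4, M_CO2 = 16.04, 44.01  # molar masses in g/mol (rounded)

def co2_mass_from_methane(methane_g: float) -> float:
    moles_ch4 = methane_g / M_CH4
    return moles_ch4 * M_CO2  # 1:1 ratio from the balanced equation

print(f"{co2_mass_from_methane(100.0):.1f} g CO2")  # ~274.4 g from 100 g CH4
```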
Work hardening, also known as strain hardening, is the strengthening of a metal or polymer by plastic deformation. Work hardening may be desirable, undesirable, or inconsequential, depending on the context.
This strengthening occurs because of dislocation movements and dislocation generation within the crystal structure of the material.[512] Many non-brittle metals with a reasonably high melting point, as well as several polymers, can be strengthened in this fashion.[513] Alloys not amenable to heat treatment, including low-carbon steel, are often work-hardened. Some materials cannot be work-hardened at low temperatures, such as indium;[514] however, others can be strengthened only via work hardening, such as pure copper and aluminum.[515]
The field of strength of materials, also called mechanics of materials, typically refers to various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes take into account the properties of the materials, such as yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic (geometric) properties, such as its length, width, thickness, boundary constraints, and abrupt changes in geometry such as holes, are considered.
In continuum mechanics, stress is a physical quantity that expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).
Stress–strain analysis (or stress analysis) is an engineering discipline that uses many methods to determine the stresses and strains in materials and structures subjected to forces. In continuum mechanics, stress is a physical quantity that expresses the internal forces that neighboring particles of a continuous material exert on each other, while strain is the measure of the deformation of the material.
In simple terms, we can define stress as the force of resistance per unit area offered by a body against deformation. Stress is the ratio of force over area (S = R/A, where S is the stress, R is the internal resisting force, and A is the cross-sectional area). Strain is the ratio of change in length to the original length when a given body is subjected to some external force (strain = change in length ÷ original length).
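The two ratios above translate directly into code; a minimal sketch with invented load and geometry values:

```python
# Stress = internal resisting force / cross-sectional area (S = R / A);
# strain = change in length / original length (dimensionless).
def stress_pa(force_n: float, area_m2: float) -> float:
    return force_n / area_m2

def strain(delta_length_m: float, original_length_m: float) -> float:
    return delta_length_m / original_length_m

# A 10 kN load on a bar of 100 mm^2 cross-section, stretching 0.5 mm over 1 m:
print(f"{stress_pa(10_000.0, 100e-6) / 1e6:.0f} MPa")  # 100 MPa
print(f"{strain(0.0005, 1.0):.1e}")                    # 5.0e-04
```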
Is the transition of a substance directly from the solid to the gas state,[520] without passing through the liquid state.[521] Sublimation is an endothermic process that occurs at temperatures and pressures below a substance's triple point in its phase diagram, which corresponds to the lowest pressure at which the substance can exist as a liquid. The reverse process of sublimation is deposition or desublimation, in which a substance passes directly from a gas to a solid phase.[522] Sublimation has also been used as a generic term to describe a solid-to-gas transition (sublimation) followed by a gas-to-solid transition (deposition).[523] While vaporization from liquid to gas occurs as evaporation from the surface if it occurs below the boiling point of the liquid, and as boiling with formation of bubbles in the interior of the liquid if it occurs at the boiling point, there is no such distinction for the solid-to-gas transition which always occurs as sublimation from the surface.
Is a reactive robotic architecture heavily associated with behavior-based robotics which was very popular in the 1980s and 90s. The term was introduced by Rodney Brooks and colleagues in 1986.[524][525][526] Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI.
Is the tendency of liquid surfaces at rest to shrink into the minimum surface area possible. Surface tension is what allows objects with a higher density than water to float on a water surface without becoming even partly submerged.
Is a set of physical properties observed in certain materials where electrical resistance vanishes and magnetic flux fields are expelled from the material. Any material exhibiting these properties is a superconductor. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.[527][528][529][530]
Is a material with a hardness value exceeding 40 gigapascals (GPa) when measured by the Vickers hardness test.[531][532][533][534] They are virtually incompressible solids with high electron density and high bond covalency. As a result of their unique properties, these materials are of great interest in many industrial areas including, but not limited to, abrasives, polishing and cutting tools, disc brakes, and wear-resistant and protective coatings.
Supersaturation occurs with a chemical solution when the concentration of a solute exceeds the value specified by its equilibrium solubility. Most commonly the term is applied to a solution of a solid in a liquid. A supersaturated solution is in a metastable state; it may be brought to equilibrium by forcing the excess of solute to separate from the solution. The term can also be applied to a mixture of gases.
In kinematics, the velocity of a particle along a curved path can be written as [math]\displaystyle{ \vec{v} = v(t)\mathbf{u}_\text{t} }[/math], where [math]\displaystyle{ \mathbf{u}_\text{t} }[/math] is a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of ut, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation[535] for the product of two functions of time as:
[math]\displaystyle{ \vec{a} = \frac{\mathrm{d}v}{\mathrm{d}t}\mathbf{u}_\text{t} + \frac{v^2}{r}\mathbf{u}_\text{n}, }[/math]
where un is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. These components are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force).
Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas.[536][537]
A technical standard is an established norm or requirement for a repeatable technical task. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes, and practices. In contrast, a custom, convention, company product, corporate standard, and so forth that becomes generally accepted and dominant is often called a de facto standard.
Is a physical quantity that expresses hot and cold. It is the manifestation of thermal energy, present in all matter, which is the source of the occurrence of heat, a flow of energy, when a body is in contact with another that is colder. Temperature is measured with a thermometer. Thermometers are calibrated in various temperature scales that historically have used various reference points and thermometric substances for definition. The most common scales are the Celsius scale (formerly called centigrade, denoted °C), the Fahrenheit scale (denoted °F), and the Kelvin scale (denoted K), the last of which is predominantly used for scientific purposes by conventions of the International System of Units (SI).
Heat treatment to alter the crystal structure of a metal such as steel.
Tensile force
Pulling force, tending to lengthen an object.
Tensile modulus
Young's modulus [math]\displaystyle{ E }[/math], the Young modulus, or the modulus of elasticity in tension, is a mechanical property that measures the tensile stiffness of a solid material. It quantifies the relationship between tensile stress [math]\displaystyle{ \sigma }[/math] (force per unit area) and axial strain [math]\displaystyle{ \varepsilon }[/math] (proportional deformation) in the linear elastic region of a material, and is determined using the formula:[538]
[math]\displaystyle{ E = \frac{\sigma}{\varepsilon} }[/math]
Young's moduli are typically so large that they are expressed not in pascals but in gigapascals (GPa).
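A one-line application of E = σ/ε (the stress and strain values are illustrative, roughly in the range of structural steel):

```python
# Young's modulus from a single tensile data point: E = sigma / epsilon.
def youngs_modulus(stress_pa: float, axial_strain: float) -> float:
    return stress_pa / axial_strain

# 200 MPa of tensile stress producing 0.1% axial strain:
print(f"E = {youngs_modulus(200e6, 0.001) / 1e9:.0f} GPa")  # 200 GPa
```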
Ultimate tensile strength (UTS), often shortened to tensile strength (TS), ultimate strength, or [math]\displaystyle{ F_\text{tu} }[/math] within equations,[539][540][541] is the maximum stress that a material can withstand while being stretched or pulled before breaking. In brittle materials the ultimate tensile strength is close to the yield point, whereas in ductile materials the ultimate tensile strength can be higher.
Tensile testing, also known as tension testing,[542] is a fundamental materials science and engineering test in which a sample is subjected to a controlled tension until failure. Properties that are directly measured via a tensile test are ultimate tensile strength, breaking strength, maximum elongation and reduction in area.[543] From these measurements the following properties can also be determined: Young's modulus, Poisson's ratio, yield strength, and strain-hardening characteristics.[544] Uniaxial tensile testing is the most commonly used method for obtaining the mechanical characteristics of isotropic materials; biaxial tensile testing is used for some materials. The main difference between these testing machines is how load is applied to the materials.
Tension member
Tension members are structural elements that are subjected to axial tensile forces. Examples of tension members are bracing for buildings and bridges, truss members, and cables in suspended roof systems.
Is the transfer of internal energy by microscopic collisions of particles and movement of electrons within a body. The colliding particles, which include molecules, atoms and electrons, transfer disorganized microscopic kinetic and potential energy, jointly known as internal energy. Conduction takes place in all phases: solid, liquid, and gas.
Two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. Thermal equilibrium obeys the zeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant.
Systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change in internal energy' but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium.
Usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively.[545] Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to other forces of nature.[546] It applies to the cosmological and astrophysical realm, including astronomy.[547]
Ultimate tensile strength (UTS), often shortened to tensile strength (TS), ultimate strength, or Ftu within equations,[539][540][541] is the capacity of a material or structure to withstand loads tending to elongate, as opposed to compressive strength, which withstands loads tending to reduce size. In other words, tensile strength resists tension (being pulled apart), whereas compressive strength resists compression (being pushed together). Ultimate tensile strength is measured by the maximum stress that a material can withstand while being stretched or pulled before breaking. In the study of strength of materials, tensile strength, compressive strength, and shear strength can be analyzed independently.
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat": [math]\displaystyle{ {\hat{\imath}} }[/math] (pronounced "i-hat"). The term direction vector is used to describe a unit vector being used to represent spatial direction, and such quantities are commonly denoted as d.
Unsaturated compound
In chemistry, an unsaturated compound is a chemical compound (or ion) that can undergo addition reactions; contrast saturated compound above.
Upthrust
Buoyancy, or upthrust, is an upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
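A minimal sketch of Archimedes' principle as stated above (the object volume is an invented example value):

```python
G = 9.81  # standard gravitational acceleration, m/s^2

def buoyant_force_n(fluid_density_kg_m3: float, displaced_volume_m3: float) -> float:
    # Upthrust equals the weight of the displaced fluid: F = rho * g * V.
    return fluid_density_kg_m3 * G * displaced_volume_m3

# A fully submerged 0.01 m^3 object in water (rho ~ 1000 kg/m^3):
print(f"{buoyant_force_n(1000.0, 0.01):.1f} N")  # ~98.1 N
```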
The utility frequency, (power) line frequency (American English) or mains frequency (British English) is the nominal frequency of the oscillations of alternating current (AC) in a wide area synchronous grid transmitted from a power station to the end-user. In large parts of the world this is 50 Hz, although in the Americas and parts of Asia it is typically 60 Hz. Current usage by country or region is given in the list of mains electricity by country.
Is a membrane-bound organelle which is present in plant and fungal cells and some protist, animal[550] and bacterial cells.[551] Vacuoles are essentially enclosed compartments which are filled with water containing inorganic and organic molecules including enzymes in solution, though in certain cases they may contain solids which have been engulfed. Vacuoles are formed by the fusion of multiple membrane vesicles and are effectively just larger forms of these.[552] The organelle has no basic shape or size; its structure varies according to the requirements of the cell.
In solid-state physics, the valence band and conduction band are the bands closest to the Fermi level and thus determine the electrical conductivity of the solid. In non-metals, the valence band is the highest range of electron energies in which electrons are normally present at absolute zero temperature, while the conduction band is the lowest range of vacant electronic states. On a graph of the electronic band structure of a material, the valence band is located below the Fermi level, while the conduction band is located above it. The distinction between the valence and conduction bands is meaningless in metals, because conduction occurs in one or more partially filled bands that take on the properties of both the valence and conduction bands.
In chemistry, valence bond (VB) theory is one of the two basic theories, along with molecular orbital (MO) theory, that were developed to use the methods of quantum mechanics to explain chemical bonding. It focuses on how the atomic orbitals of the dissociated atoms combine to give individual chemical bonds when a molecule is formed. In contrast, molecular orbital theory has orbitals that cover the whole molecule.[554]
In chemistry and physics, a valence electron is an outer shell electron that is associated with an atom, and that can participate in the formation of a chemical bond if the outer shell is not closed; in a single covalent bond, both atoms in the bond contribute one valence electron in order to form a shared pair.
Valence shell
The valence shell is the set of orbitals which are energetically accessible for accepting electrons to form chemical bonds. For main-group elements, the valence shell consists of the ns and np orbitals in the outermost electron shell. For transition metals the relevant orbitals are the (n−1)d orbitals, and for lanthanides and actinides the (n−2)f and (n−1)d orbitals, so the orbitals involved can also lie in an inner electron shell. Thus, the shell terminology is a misnomer, as there is no correspondence between the valence shell and any particular electron shell in a given element. A scientifically correct term would be valence orbital, referring to the energetically accessible orbitals of an element.
Is a device or natural object that regulates, directs or controls the flow of a fluid (gases, liquids, fluidized solids, or slurries) by opening, closing, or partially obstructing various passageways. Valves are technically fittings, but are usually discussed as a separate category. In an open valve, fluid flows in a direction from higher pressure to lower pressure. The word is derived from the Latin valva, the moving part of a door, in turn from volvere, to turn, roll.
In molecular physics, the Van der Waals force, named after Dutch physicist Johannes Diderik van der Waals, is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The Van der Waals force quickly vanishes at longer distances between interacting molecules.
The viscosity of a fluid is the measure of its resistance to gradual deformation by shear stress or tensile stress.[555] For liquids, it corresponds to the informal concept of "thickness": for example, honey has a higher viscosity than water.[556]
(VA), is the unit used for the apparent power in an electrical circuit. The apparent power equals the product of root-mean-square (RMS) voltage and RMS current.[557] In direct current (DC) circuits, this product is equal to the real power (active power)[558] in watts. Volt-amperes are useful only in the context of alternating current (AC) circuits. The volt-ampere is dimensionally equivalent to the watt (in SI units, 1 V⋅A = 1 N⋅m⋅A−1⋅s−1⋅A = 1 N⋅m⋅s−1 = 1 J⋅s−1 = 1 W). VA rating is most useful in rating wires and switches (and other power-handling equipment) for inductive loads.
The Volta potential (also called Volta potential difference, contact potential difference, outer potential difference, Δψ, or "delta psi") in electrochemistry, is the electrostatic potential difference between two metals (or one metal and one electrolyte) that are in contact and are in thermodynamic equilibrium. Specifically, it is the potential difference between a point close to the surface of the first metal, and a point close to the surface of the second metal (or electrolyte).[559]
Voltage, electric potential difference, electric pressure or electric tension is the difference in electric potential between two points. The difference in electric potential between two points (i.e., voltage) is defined as the work needed per unit of charge against a static electric field to move a test charge between the two points. In the International System of Units, the derived unit for voltage is named volt.[560] In SI units, work per unit charge is expressed as joules per coulomb, where 1 volt = 1 joule (of work) per 1 coulomb (of charge). The official SI definition for volt uses power and current, where 1 volt = 1 watt (of power) per 1 ampere (of current).[560]
Also known as volume flow rate, rate of fluid flow or volume velocity, is the volume of fluid which passes per unit time; usually represented by the symbol Q (sometimes V̇). The SI unit is m3/s (cubic metres per second).
The von Mises yield criterion (also known as the maximum distortion energy criterion[561]) suggests that yielding of a ductile material begins when the second deviatoric stress invariant [math]\displaystyle{ J_2 }[/math] reaches a critical value.[562] It is part of plasticity theory that applies best to ductile materials, such as some metals. Prior to yield, the material response can be assumed to be nonlinear elastic, viscoelastic, or linear elastic.
In materials science and engineering the von Mises yield criterion can also be formulated in terms of the von Mises stress or equivalent tensile stress, [math]\displaystyle{ \sigma_v }[/math]. This is a scalar value of stress that can be computed from the Cauchy stress tensor. In this case, a material is said to start yielding when the von Mises stress reaches a value known as yield strength, [math]\displaystyle{ \sigma_y }[/math]. The von Mises stress is used to predict yielding of materials under complex loading from the results of uniaxial tensile tests. The von Mises stress satisfies the property where two stress states with equal distortion energy have an equal von Mises stress.
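A common closed form of the von Mises stress, in terms of the three principal stresses, is σv = sqrt(((σ1 − σ2)² + (σ2 − σ3)² + (σ3 − σ1)²) / 2); a sketch under that standard assumption, with an invented load value:

```python
import math

def von_mises_stress(s1: float, s2: float, s3: float) -> float:
    # Equivalent tensile stress from the three principal stresses.
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2)

# Uniaxial tension (s2 = s3 = 0) recovers the applied stress, as expected:
print(f"{von_mises_stress(250e6, 0.0, 0.0) / 1e6:.0f} MPa")  # 250 MPa
# Yielding is predicted when this value reaches the material's yield strength.
```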
Is a disturbance that transfers energy through matter or space, with little or no associated mass transport. Waves consist of oscillations or vibrations of a physical medium or a field, around relatively fixed locations. From the perspective of mathematics, waves, as functions of time and space, are a class of signals.[563]
Is the spatial period of a periodic wave—the distance over which the wave's shape repeats.[564][565]
It is thus the inverse of the spatial frequency. Wavelength is usually determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns.[566][567]
Wavelength is commonly designated by the Greek letter lambda (λ).
The term wavelength is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids.[568]
Wedge
Is a triangular-shaped tool, a portable inclined plane, and one of the six classical simple machines. It can be used to separate two objects or portions of an object, lift up an object, or hold an object in place. It functions by converting a force applied to its blunt end into forces perpendicular (normal) to its inclined surfaces. The mechanical advantage of a wedge is given by the ratio of the length of its slope to its width.[569][570] Although a short wedge with a wide angle may do a job faster, it requires more force than a long wedge with a narrow angle.
The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics. If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox.
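A minimal sketch of the weighted mean (the grades and weights are invented for illustration):

```python
# Weighted arithmetic mean: each value contributes in proportion to its weight.
def weighted_mean(values: list[float], weights: list[float]) -> float:
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

grades = [80.0, 90.0]
weights = [1.0, 3.0]  # the second grade counts three times as much
print(weighted_mean(grades, weights))  # 87.5, vs. an unweighted mean of 85.0
```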
Are one of six simple machines identified by Renaissance scientists drawing from Greek texts on technology.[571] The wheel and axle consists of a wheel attached to a smaller axle so that these two parts rotate together in which a force is transferred from one to the other. A hinge or bearing supports the axle, allowing rotation. It can amplify force; a small force applied to the periphery of the large wheel can move a larger load attached to the axle.
Is a winsorized statistical measure of central tendency, much like the mean and median, and even more similar to the truncated mean. It involves the calculation of the mean after replacing given parts of a probability distribution or sample at the high and low end with the most extreme remaining values,[572] typically doing so for an equal amount of both extremes; often 10 to 25 percent of the ends are replaced. The winsorized mean can equivalently be expressed as a weighted average of the truncated mean and the quantiles at which it is limited, which corresponds to replacing parts with the corresponding quantiles.
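A hedged sketch of a 10% winsorized mean (the dataset is invented; a library routine such as one in SciPy could be used instead, but a direct implementation keeps the definition visible):

```python
# Winsorized mean: clamp the lowest and highest fraction of the sorted sample
# to the nearest remaining values, then take the ordinary arithmetic mean.
def winsorized_mean(data: list[float], proportion: float = 0.10) -> float:
    xs = sorted(data)
    k = int(len(xs) * proportion)   # values replaced at each end
    if k > 0:
        xs[:k] = [xs[k]] * k        # low end -> smallest retained value
        xs[-k:] = [xs[-k - 1]] * k  # high end -> largest retained value
    return sum(xs) / len(xs)

data = [1, 5, 6, 6, 7, 7, 8, 8, 9, 95]  # one extreme outlier
print(winsorized_mean(data, 0.10))       # 7.0, vs. a raw mean of 15.2
```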
Also known as strain hardening, is the strengthening of a metal or polymer by plastic deformation. This strengthening occurs because of dislocation movements and dislocation generation within the crystal structure of the material.[573]
In algebraic geometry, the axis on a graph that is usually drawn from bottom to top and usually shows the range of values of a variable dependent on one other variable, or the second of two independent variables.[575]
In algebraic geometry, the axis on a graph of at least three dimensions that is usually drawn vertically and usually shows the range of values of a variable dependent on two other variables or the third independent variable.[576]
Zero defects
A quality assurance philosophy that aims to reduce the need for inspection of components by improving their quality.
In the field of engineering mechanics, a zero force member is a member (a single truss segment) in a truss which, given a specific load, is at rest: neither in tension, nor in compression. In a truss a zero force member is often found at pins (any connections within the truss) where no external load is applied and three or fewer truss members meet. Recognizing basic zero force members can be accomplished by analyzing the forces acting on an individual pin in a physical system.
NOTE: If the pin has an external force or moment applied to it, then all of the members attached to that pin are not zero force members UNLESS the external force acts in a manner that fulfills one of the rules below:
If two non-collinear members meet in an unloaded joint, both are zero-force members.
If three members meet in an unloaded joint of which two are collinear, then the third member is a zero-force member.
Reasons for Zero-force members in a truss system
These members contribute to the stability of the structure by providing buckling prevention for long slender members under compressive forces.
These members can carry loads in the event that variations are introduced in the normal external loading configuration.
↑Strictly speaking, a probability of 0 indicates that an event almost never takes place, whereas a probability of 1 indicates that an event almost certainly takes place. This is an important distinction when the sample space is infinite. For example, for the continuous uniform distribution on the real interval [5, 10], there are an infinite number of possible outcomes, and the probability of any given outcome being observed — for instance, exactly 7 — is 0. This means that when we make an observation, it will almost surely not be exactly 7. However, it does not mean that exactly 7 is impossible. Ultimately some specific outcome (with probability 0) will be observed, and one possibility for that specific outcome is exactly 7.
↑"Newtonian constant of gravitation" is the name introduced for G by Boys (1894). Use of the term by T.E. Stern (1928) was misquoted as "Newton's constant of gravitation" in Pure Science Reviewed for Profound and Unsophisticated Students (1930), in what is apparently the first use of that term. Use of "Newton's constant" (without specifying "gravitation" or "gravity") is more recent, as "Newton's constant" was also
used for the heat transfer coefficient in Newton's law of cooling, but has by now become quite common, e.g.
Calmet et al, Quantum Black Holes (2013), p. 93; P. de Aquino, Beyond Standard Model Phenomenology at the LHC (2013), p. 3.
The name "Cavendish gravitational constant", sometimes "Newton–Cavendish gravitational constant", appears to have been common in the 1970s to 1980s, especially in (translations from) Soviet-era Russian literature, e.g. Sagitov (1970 [1969]), Soviet Physics: Uspekhi 30 (1987), Issues 1–6, p. 342 [etc.].
"Cavendish constant" and "Cavendish gravitational constant" is also used in Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, "Gravitation", (1973), 1126f.
Colloquial use of "Big G", as opposed to "little g" for gravitational acceleration dates to the 1960s (R.W. Fairbridge, The encyclopedia of atmospheric sciences and astrogeology, 1967, p. 436; note use of "Big G's" vs. "little g's" as early as the 1940s of the Einstein tensorGμν vs. the metric tensorgμν, Scientific, medical, and technical books published in the United States of America: a selected list of titles in print with annotations: supplement of books published 1945–1948, Committee on American Scientific and Technical Bibliography National Research Council, 1950, p. 26).
↑Integral calculus is a very well established mathematical discipline for which there are many sources. See Apostol 1967 and Anton, Bivens & Davis 2016, for example.
↑The photon's invariant mass (also called "rest mass" for massive particles) is believed to be exactly zero. This is the notion of particle mass generally used by modern physicists. The photon does have a nonzero relativistic mass, depending on its energy, but this varies according to the frame of reference.
↑The term "universe" is defined as everything that physically exists: the entirety of space and time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term "universe" may also be used in slightly different contextual senses, denoting concepts such as the cosmos or the philosophical world.
↑The preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the "gauge" spelling.
↑For example, the SI unit of velocity is the metre per second, m⋅s−1; of acceleration is the metre per second squared, m⋅s−2; etc.
↑For example the newton (N), the unit of force, equivalent to kg⋅m⋅s−2; the joule (J), the unit of energy, equivalent to kg⋅m2⋅s−2, etc. The most recently named derived unit, the katal, was defined in 1999.
↑For example, the recommended unit for the electric field strength is the volt per metre, V/m, where the volt is the derived unit for electric potential difference. The volt per metre is equal to kg⋅m⋅s−3⋅A−1 when expressed in terms of base units.