Classical physics
Introduction and History
Definition and Scope
Classical physics encompasses the foundational theories of physics developed prior to the 20th century, including Newtonian mechanics for the motion of macroscopic bodies, Maxwell's electromagnetism for electric and magnetic phenomena, and thermodynamics for heat and energy processes. These theories collectively describe the behavior of physical systems under deterministic laws, assuming continuous space and time in which influences such as gravity propagate instantaneously.[7] The framework emphasizes causality, where the state of a system at any future time is uniquely and predictably determined by its initial conditions and the governing equations. The scope of classical physics is primarily applicable to macroscopic scales and non-relativistic velocities, where speeds are much less than the speed of light and quantum effects do not dominate.[8] It accurately models everyday phenomena but breaks down for atomic or subatomic particles, where probabilistic quantum mechanics is required, and for extreme conditions involving high speeds or intense gravitational fields, which necessitate relativistic corrections.[9] This delimitation ensures its utility in practical engineering and natural observations at human scales, while highlighting its limitations in fundamental realms.
Central assumptions underpinning classical physics include the existence of absolute space and time, providing a universal reference frame independent of observers, and strict conservation laws for quantities such as energy, linear momentum, and angular momentum, which remain invariant across interactions.[10] Furthermore, the underlying equations are time-reversible, allowing physical processes to be computed equivalently in forward or backward directions without inherent directionality.[11] Representative applications within this scope include planetary orbits calculated via Newtonian gravitational laws, fluid flows analyzed through classical hydrodynamic equations, and the efficiency of heat engines evaluated using thermodynamic principles.[12]
Historical Development
The roots of classical physics trace back to ancient Greek thought, where Aristotelian physics dominated with its teleological view of motion, positing that natural objects move toward their inherent purposes or ends, such as elements seeking their natural places (earth downward, fire upward).[13] This framework emphasized qualitative explanations over quantitative analysis, influencing Western science for centuries. Complementing this, Archimedes around 250 BCE laid early foundations in statics through his work on levers, demonstrating that equilibrium depends on the product of weights and distances from the fulcrum, providing the first mathematical treatment of mechanical balance.[14] During the Renaissance, Galileo Galilei advanced empirical methods in the early 1600s with experiments on falling bodies and inclined planes, showing that objects accelerate uniformly regardless of mass and establishing the principle of inertia—bodies maintain their state of motion unless acted upon by external forces.[15] These investigations, detailed in his Discorsi e Dimostrazioni Matematiche (1638), shifted physics toward mathematical description and experimentation, challenging Aristotelian teleology.[16] Isaac Newton's Philosophiæ Naturalis Principia Mathematica (1687) synthesized these developments into a unified framework, applying the same laws of motion and universal gravitation to both terrestrial and celestial phenomena, thereby unifying mechanics across scales.[17] This work formalized classical mechanics, enabling predictions of planetary orbits and projectile motion under a single inverse-square law.[18] In the 19th century, classical physics expanded with Michael Faraday's experimental discoveries in the 1830s, including electromagnetic induction, which revealed the interplay between electric currents and magnetic fields.[19] James Clerk Maxwell built on this in the 1860s, formulating a set of equations that described electromagnetism as a unified field propagating 
as waves, predicting the speed of light and linking it to electromagnetic phenomena.[20] Concurrently, thermodynamics emerged through Sadi Carnot's 1824 analysis of heat engines, establishing efficiency limits for converting heat to work.[21] James Prescott Joule's experiments in the 1840s demonstrated the mechanical equivalent of heat, quantifying energy conservation.[22] Rudolf Clausius in the 1850s formalized the second law of thermodynamics, introducing entropy to describe irreversible processes.[23] A pivotal articulation came in 1814 with Pierre-Simon Laplace's vision of a deterministic universe, where complete knowledge of initial conditions would allow perfect prediction of all future states, epitomizing classical physics' mechanistic worldview at its zenith before 1900.[24] This era's progress was deeply embedded in the Enlightenment's cultural context, which championed empiricism—knowledge derived from sensory observation—and mathematical rigor as tools for understanding nature, fostering institutions like academies that promoted collaborative scientific inquiry.[25]
Fundamental Concepts
Space, Time, and Motion
In classical physics, space and time are conceived as absolute entities independent of the observer or material bodies. Isaac Newton introduced these concepts in his Philosophiæ Naturalis Principia Mathematica (1687), defining absolute space as a fixed, homogeneous, and immovable backdrop that exists without relation to external objects, serving as the arena for all physical events.[26] Absolute time, by contrast, flows uniformly and continuously, akin to duration itself, unaffected by external influences or the occurrence of events, and distinct from relative time measured by observable cycles such as the rotation of the Earth.[27] This framework posits that true motion is the translation of a body from one absolute place to another, allowing for an objective determination of rest and motion against this unchanging stage.[26] Reference frames provide the coordinate systems from which motion is described relative to an observer. In classical mechanics, inertial frames are those in which objects not subject to interactions move in straight lines at constant speed, embodying the principle of inertia first articulated by Galileo Galilei in his Dialogue Concerning the Two Chief World Systems (1632), where he used the thought experiment of a ship in uniform motion to illustrate that mechanical experiments yield identical results whether the frame is at rest or moving steadily without acceleration.[28] Non-inertial frames, such as those undergoing rotation or linear acceleration, introduce fictitious effects like centrifugal forces that alter apparent motion, distinguishing them from inertial ones.[28] The transformation between inertial frames moving at constant relative velocity $ v $ along the x-axis follows the Galilean transformations $ x' = x - vt $, $ y' = y $, $ z' = z $, $ t' = t $, which preserve the uniformity of time and the additivity of velocities, ensuring the laws of motion remain invariant across such frames.[28] Kinematics in classical physics describes motion without regard to its causes, focusing on
position, velocity, and acceleration as functions of time. Uniform motion occurs when velocity remains constant, so position grows linearly as $ x(t) = x_0 + vt $, representing straight-line travel at steady speed in an inertial frame.[29] Accelerated motion involves changing velocity, quantified by acceleration $ a = \frac{dv}{dt} $, which can be constant as in free fall or variable in more complex paths.[17] A canonical example is projectile motion under uniform gravitational acceleration, where the horizontal component of velocity remains constant while the vertical component undergoes constant downward acceleration $ g $, resulting in a parabolic trajectory as analyzed by Galileo in Discourses and Mathematical Demonstrations Relating to Two New Sciences (1638).[29] The concept of inertia underpins these descriptions, stating that an object persists in its state of rest or uniform rectilinear motion unless compelled to change by external interactions, a principle Newton formalized as the first law in the Principia.[17] This inertial tendency implies that deviations from straight-line constant-speed motion require influences, while free fall exemplifies accelerated motion due to gravity, with all objects accelerating equally regardless of mass when air resistance is negligible, as demonstrated in Galileo's inclined-plane experiments.[29] In the absence of such influences, motion remains inertial, highlighting the foundational role of absolute space and uniform time in classical kinematics.[26]
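As an illustration of these kinematic relations, the following sketch computes the flight time, range, and peak height of a projectile launched over level ground; the launch speed and angle are arbitrary assumed values, not drawn from the text.

```python
import math

g = 9.81                    # m/s^2, standard gravitational acceleration
v0 = 20.0                   # m/s, launch speed (assumed for illustration)
angle = math.radians(45)    # launch angle (assumed)

vx = v0 * math.cos(angle)   # horizontal component, constant throughout
vy0 = v0 * math.sin(angle)  # initial vertical component, decreasing at g

t_flight = 2 * vy0 / g      # time to return to launch height
x_range = vx * t_flight     # horizontal range: constant vx times flight time
h_max = vy0**2 / (2 * g)    # peak height, from v^2 = v0^2 - 2 g h at v = 0

print(f"flight time = {t_flight:.2f} s")
print(f"range       = {x_range:.2f} m")
print(f"max height  = {h_max:.2f} m")
```

The parabolic shape follows from combining uniform horizontal motion with uniformly accelerated vertical motion, exactly as described above.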
Forces and Energy
In classical physics, forces are vector quantities that cause changes in the motion of objects by producing acceleration, as described by Newton's second law of motion. These forces can be categorized into several types, including gravitational forces, which act between masses and follow Newton's law of universal gravitation, given by $ F = G \frac{m_1 m_2}{r^2} $, where $ G $ is the gravitational constant, $ m_1 $ and $ m_2 $ are the masses, and $ r $ is the distance between their centers. Electrostatic forces between charged particles obey Coulomb's law, expressed as $ F = k \frac{q_1 q_2}{r^2} $, with $ k $ as Coulomb's constant, $ q_1 $ and $ q_2 $ as the charges, and $ r $ as the separation distance.[30] Contact forces, such as friction or normal forces, arise from direct physical interaction between surfaces and do not follow inverse-square laws but depend on material properties and geometry. The concepts of work and energy connect forces to the broader dynamics of systems. Work $ W $ done by a force on an object is the line integral $ W = \int \mathbf{F} \cdot d\mathbf{x} $ along the path of displacement, representing the transfer of energy from the force to the object. Kinetic energy $ KE $ of an object is $ \frac{1}{2} m v^2 $, where $ m $ is mass and $ v $ is speed, quantifying the energy associated with motion. Potential energy stores work done against conservative forces; for gravity near Earth's surface, it simplifies to $ PE = m g h $, with $ g $ as acceleration due to gravity and $ h $ as height. The work-energy theorem states that the net work done equals the change in kinetic energy, linking forces directly to energy transformations. Conservation of energy is a fundamental principle in classical mechanics for closed systems without non-conservative forces. In such systems, total mechanical energy $ E = KE + PE $ remains constant, as established in the mechanical context of the first law of thermodynamics. 
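A minimal numerical check of this energy bookkeeping, assuming an arbitrary mass and drop height, shows gravitational potential energy converting entirely into kinetic energy during free fall:

```python
g = 9.81   # m/s^2
m = 2.0    # kg, assumed mass
h = 10.0   # m, assumed drop height

pe_top = m * g * h                  # potential energy at release, PE = m g h
v_bottom = (2 * g * h) ** 0.5       # impact speed, from v^2 = 2 g h
ke_bottom = 0.5 * m * v_bottom**2   # kinetic energy at the ground

# Gravity is conservative, so PE at the top equals KE at the bottom:
assert abs(pe_top - ke_bottom) < 1e-9
print(f"PE at top = {pe_top:.1f} J, KE at bottom = {ke_bottom:.1f} J")
```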
Conservative forces, like gravity and electrostatic forces, allow this conservation because the work they do depends only on initial and final positions, not the path taken; for example, gravitational potential energy converts fully to kinetic energy in free fall without loss.[31] Non-conservative forces, such as friction, dissipate mechanical energy as heat; the work they do is path-dependent and negative along the motion, so total mechanical energy is not conserved.[31] Power quantifies the rate of energy transfer or work done, defined as $ P = \frac{dW}{dt} = \mathbf{F} \cdot \mathbf{v} $, where $ \mathbf{v} $ is velocity; this measures how quickly a force imparts energy to a system.[32] A representative example of energy dynamics occurs in simple harmonic motion, such as a mass on a spring, where total energy oscillates between kinetic and elastic potential forms without net loss in ideal conditions, illustrating conservative force behavior: at maximum displacement, all energy is potential ($ PE = \frac{1}{2} k A^2 $, with $ k $ as spring constant and $ A $ as amplitude), converting fully to kinetic at equilibrium.[33]
Major Branches
Classical Mechanics
Classical mechanics is the branch of physics that describes the motion of macroscopic objects under the action of forces, applicable at non-relativistic speeds and scales where quantum effects are negligible. It provides a deterministic framework for predicting trajectories and interactions based on initial conditions and applied forces. The foundational principles were established by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (1687), which revolutionized the understanding of physical phenomena through mathematical rigor.[17] Newton's first law, the law of inertia, states that every body perseveres in its state of rest or of uniform motion in a right line unless compelled to change that state by forces impressed upon it.[17] This law implies that if the net external force on a body is zero, it remains in equilibrium, either at rest or moving with constant velocity.[17] Newton's second law quantifies the relationship between force and motion: the change of motion is proportional to the motive force impressed and occurs in the direction of the straight line in which that force acts, expressed in vector form as $ \mathbf{F} = m \mathbf{a} $, where $ \mathbf{F} $ is the net force, $ m $ is the inertial mass, and $ \mathbf{a} $ is the acceleration of the body's center of mass.[17] The third law asserts that to every action there is always opposed an equal reaction, meaning the mutual actions of two bodies on each other are equal in magnitude and opposite in direction.[17] Linear momentum for a particle is defined as $ \mathbf{p} = m \mathbf{v} $, and the second law can be reformulated as $ \mathbf{F} = \frac{d\mathbf{p}}{dt} $.[34] For a system of particles in an isolated environment with no net external force ($ \mathbf{F}_{\text{ext}} = \mathbf{0} $), the total momentum $ \mathbf{P} = \sum_i \mathbf{p}_i $ is conserved: $ \frac{d\mathbf{P}}{dt} = \mathbf{0} $, so $ \mathbf{P} $ remains constant over time.[34] This conservation arises from the pairwise cancellation of internal forces via the third law.
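That pairwise cancellation can be sketched numerically: two masses coupled by a hypothetical spring exert equal and opposite forces on each other at every step, so the total momentum stays fixed even as the individual momenta change (all parameter values below are assumed for illustration).

```python
# Two masses coupled by an internal spring force; Newton's third law makes
# the forces equal and opposite, so total momentum should stay constant.
m1, m2 = 1.0, 3.0       # kg (assumed)
x1, x2 = 0.0, 2.0       # m, initial positions
v1, v2 = 1.0, -0.5      # m/s, initial velocities
k, rest = 50.0, 1.0     # spring constant (N/m) and natural length (assumed)
dt = 1e-4               # s, integration step

p_initial = m1 * v1 + m2 * v2
for _ in range(100_000):            # integrate 10 s of motion
    f = k * ((x2 - x1) - rest)      # force on m1 toward m2; -f acts on m2
    v1 += (f / m1) * dt
    v2 += (-f / m2) * dt            # equal and opposite (third law)
    x1 += v1 * dt
    x2 += v2 * dt

p_final = m1 * v1 + m2 * v2
assert abs(p_final - p_initial) < 1e-9   # internal forces cancel pairwise
print(f"total momentum: {p_initial:.6f} -> {p_final:.6f} kg*m/s")
```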
In applications to rigid body dynamics, a rigid body—where inter-particle distances remain fixed—is treated as having translational motion of its center of mass governed by $ \mathbf{F} = M \mathbf{a}_{\text{cm}} $ (with $ M $ the total mass) and rotational motion about the center of mass described by $ \boldsymbol{\tau} = I \boldsymbol{\alpha} $ (with $ \boldsymbol{\tau} $ the net torque, $ I $ the moment of inertia, and $ \boldsymbol{\alpha} $ the angular acceleration). Friction, a contact force opposing relative motion, includes static friction (up to $ f_s = \mu_s N $, where $ \mu_s $ is the static coefficient and $ N $ the normal force) that prevents sliding and kinetic friction ($ f_k = \mu_k N $) that acts during motion, often modeled as an external force in systems like inclines or brakes.[35] For uniform circular motion, such as satellites in orbit, the required centripetal force is $ F_c = \frac{m v^2}{r} $ (or $ F_c = m \omega^2 r $), directed inward, typically supplied by gravity.[36] Pendulums illustrate oscillatory motion; a simple pendulum of length $ L $ with small angular displacement approximates simple harmonic motion with period $ T = 2\pi \sqrt{L/g} $, where $ g $ is gravitational acceleration. Central force problems, where force depends only on distance from a fixed center (e.g., $ F \propto 1/r^2 $ for gravity), yield closed orbits that are conic sections.
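The small-angle period can be checked against direct integration of the full pendulum equation $ \ddot{\theta} = -(g/L)\sin\theta $; this sketch assumes a 1 m pendulum, a small release angle, and a semi-implicit Euler integrator, measuring a quarter period as the time of the first zero crossing from rest.

```python
import math

g, L = 9.81, 1.0            # m/s^2 and m (assumed pendulum length)
theta, omega = 0.05, 0.0    # small release angle in rad, starting at rest
dt = 1e-6                   # s, integration step
t = 0.0

# Integrate theta'' = -(g/L) sin(theta); released from rest, the first
# zero crossing of theta occurs at one quarter of a full period.
while theta > 0:
    omega += -(g / L) * math.sin(theta) * dt   # semi-implicit Euler step
    theta += omega * dt
    t += dt

T_numeric = 4 * t
T_small_angle = 2 * math.pi * math.sqrt(L / g)
print(f"numeric period      = {T_numeric:.4f} s")
print(f"small-angle formula = {T_small_angle:.4f} s")
```

For a 0.05 rad amplitude the two values agree to a fraction of a percent; at larger amplitudes the numeric period grows beyond the small-angle formula.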
Newton's inverse-square law derives Kepler's laws of planetary motion: (1) orbits are ellipses with the central body at one focus, given by $ r = \frac{a(1 - e^2)}{1 + e \cos\theta} $ (with $ a $ the semi-major axis, $ e $ the eccentricity); (2) the radius vector sweeps equal areas in equal times, following from constant angular momentum $ L = m r^2 \dot{\theta} $; (3) $ T^2 \propto a^3 $, linking period to orbit size.[37] Representative examples include Atwood's machine, where two masses connected by an inextensible string over a massless frictionless pulley accelerate with $ a = \frac{(m_1 - m_2) g}{m_1 + m_2} $, demonstrating tension and net force balance. In collisions, elastic collisions conserve both momentum and kinetic energy (e.g., two billiard balls glancing off), while inelastic collisions conserve only momentum, with kinetic energy converted to heat or deformation (e.g., a car crash where vehicles stick together).[38]
Electromagnetism
Electromagnetism in classical physics describes the interactions between electric charges and currents through the concepts of electric and magnetic fields, unifying previously separate phenomena into a coherent framework. The electric field $ \mathbf{E} $ at a point is defined as the force experienced by a test charge divided by that charge, $ \mathbf{E} = \mathbf{F}/q $, representing the influence of other charges on it. This field arises from stationary charges according to Coulomb's law, but in the broader context, it permeates space and exerts forces on other charges. Gauss's law quantifies the relationship between the electric field and charge distribution: the flux of $ \mathbf{E} $ through a closed surface is $ \oint \mathbf{E} \cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0} $, where $ Q_{\text{enc}} $ is the enclosed charge and $ \varepsilon_0 $ is the vacuum permittivity. This integral form, originally formulated by Carl Friedrich Gauss in 1835, implies that electric fields diverge from positive charges and converge on negative ones, with no net flux through charge-free regions.[39] The electric field can also be expressed in terms of a scalar potential $ V $, where $ \mathbf{E} = -\nabla V $, allowing solutions via Poisson's equation $ \nabla^2 V = -\rho/\varepsilon_0 $ for charge density $ \rho $. Magnetic fields $ \mathbf{B} $, in contrast, arise from moving charges or currents and do not originate from isolated monopoles. The Biot-Savart law gives the field due to a steady current element: $ d\mathbf{B} = \frac{\mu_0}{4\pi} \frac{I \, d\mathbf{l} \times \hat{\mathbf{r}}}{r^2} $, where $ \mu_0 $ is the vacuum permeability, derived from experiments by Jean-Baptiste Biot and Félix Savart in 1820. Ampère's circuital law extends this to closed loops: $ \oint \mathbf{B} \cdot d\mathbf{l} = \mu_0 I_{\text{enc}} $, stating that the line integral of $ \mathbf{B} $ around a path equals $ \mu_0 $ times the enclosed current $ I_{\text{enc}} $, as formulated by André-Marie Ampère in 1826. Unlike electric fields, magnetic fields form closed loops with zero net flux through any closed surface, reflecting the absence of magnetic monopoles.[40][41] James Clerk Maxwell unified these fields in 1865 through four equations that govern their dynamic interplay. Gauss's law for magnetism, $ \oint \mathbf{B} \cdot d\mathbf{A} = 0 $, confirms no monopoles.
Faraday's law of electromagnetic induction, $ \mathcal{E} = -\frac{d\Phi_B}{dt} $, discovered by Michael Faraday in 1831, shows a changing magnetic flux $ \Phi_B $ induces an electromotive force $ \mathcal{E} $. Maxwell's Ampère's law includes the displacement current: $ \oint \mathbf{B} \cdot d\mathbf{l} = \mu_0 I_{\text{enc}} + \mu_0 \varepsilon_0 \frac{d\Phi_E}{dt} $, enabling consistency in time-varying fields. These predict electromagnetic waves propagating at speed $ c = 1/\sqrt{\mu_0 \varepsilon_0} \approx 3 \times 10^8 $ m/s in vacuum, identifying light as such a wave.[42][43] The Lorentz force law describes how fields act on charges: $ \mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B}) $, where $ \mathbf{v} $ is the charge's velocity, combining electric and magnetic contributions; Hendrik Lorentz derived this complete form in 1895. In examples, capacitors store energy in electric fields between plates, with $ E = \sigma/\varepsilon_0 $ for surface charge density $ \sigma $. Inductors store energy in magnetic fields from currents, with self-inductance $ L $ relating flux to current via $ \Phi = L I $. Electromagnetic induction powers generators, where a coil's motion in a magnetic field induces voltage per Faraday's law, foundational to electrical machinery. These principles underpin classical electromagnetism's predictive power for circuits and waves.[44]
Thermodynamics
Thermodynamics, a cornerstone of classical physics, examines the relationships between heat, work, and energy in macroscopic systems, emphasizing macroscopic properties without reference to atomic structure. It emerged in the 19th century amid efforts to improve steam engines and understand heat as a form of energy, providing laws that govern energy transformations in thermal processes. These principles apply to systems like gases, engines, and fluids, where temperature differences drive changes, and they establish limits on efficiency and directionality of processes.[23] The zeroth law of thermodynamics establishes the concept of temperature through thermal equilibrium: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. This transitive property allows the definition of a temperature scale, enabling thermometers to measure consistent values across separate systems. Formally introduced by Ralph H. Fowler in the 1930s to precede the other laws logically, the principle underpins empirical temperature scales like the Celsius or Kelvin, where equilibrium implies no net heat flow.[45] The first law of thermodynamics expresses the conservation of energy in thermal systems, stating that the change in internal energy $ \Delta U $ of a system equals the heat $ Q $ added to it minus the work $ W $ done by it: $ \Delta U = Q - W $. This formulation, building on experiments showing heat equivalent to mechanical work, implies that energy cannot be created or destroyed in isolated processes involving heat and work.
Julius Robert von Mayer proposed the equivalence in 1842 based on biological and physical observations, while James Prescott Joule's precise measurements in the 1840s quantified the mechanical equivalent of heat, and Hermann von Helmholtz formalized it mathematically in 1847 as the conservation of force.[46][47] The second law of thermodynamics introduces irreversibility, asserting that the entropy of an isolated system never decreases: $ \Delta S \geq 0 $, with equality only for reversible processes. It explains why certain processes, like heat flow, occur spontaneously in one direction. Sadi Carnot's 1824 analysis of ideal heat engines laid the groundwork by showing efficiency limits without full energy conservation details. Rudolf Clausius formalized it in 1850, stating that heat cannot spontaneously flow from a colder to a hotter body (Clausius statement), and introduced entropy in 1865 as $ dS = \frac{\delta Q_{\text{rev}}}{T} $ to quantify unavailable energy. William Thomson (Lord Kelvin) independently stated in 1851 that no engine can convert heat entirely into work without rejecting some to a colder reservoir (Kelvin-Planck statement), emphasizing the impossibility of perpetual motion of the second kind.[48][23] For ideal gases, classical thermodynamics models behavior using the equation of state $ PV = nRT $, where $ P $ is pressure, $ V $ volume, $ n $ moles, $ R $ the gas constant, and $ T $ absolute temperature. This combines Boyle's 1662 inverse proportionality of pressure and volume at constant temperature, Charles's and Gay-Lussac's direct proportionality of volume and temperature at constant pressure, and Avogadro's relation of volume to particle number; Benoît Paul Émile Clapeyron synthesized it in 1834. In adiabatic processes, where no heat is exchanged ($ Q = 0 $), the first law yields $ PV^\gamma = \text{constant} $, with $ \gamma = C_P/C_V $ the heat capacity ratio, describing reversible compression or expansion without thermal interaction.[49] Heat engines convert thermal energy to mechanical work, limited by the second law.
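A short sketch, with illustrative values for one mole of a diatomic ideal gas ($ \gamma = 1.4 $), applies $ PV = nRT $ and the adiabatic relation to a tenfold compression:

```python
# Ideal-gas sketch: adiabatic compression of a diatomic gas (gamma = 7/5),
# using PV = nRT and P V^gamma = constant. All values are illustrative.
R = 8.314          # J/(mol*K), gas constant
n = 1.0            # mol
gamma = 1.4        # heat capacity ratio C_P/C_V for a diatomic ideal gas

P1, V1 = 101_325.0, 0.0248    # Pa and m^3 (~1 atm, ~302 K for 1 mol)
T1 = P1 * V1 / (n * R)        # temperature from the equation of state

V2 = V1 / 10                  # compress to one tenth of the volume
P2 = P1 * (V1 / V2) ** gamma  # adiabat: P V^gamma stays constant
T2 = P2 * V2 / (n * R)        # new temperature, again from PV = nRT

print(f"T1 = {T1:.0f} K, T2 = {T2:.0f} K, P2 = {P2 / 101_325:.1f} atm")
```

With no heat exchanged, the work of compression goes entirely into internal energy, so the temperature rises sharply (here from roughly 302 K to about 759 K).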
The Carnot cycle, an ideal reversible cycle between a hot reservoir at $ T_H $ and a cold one at $ T_C $, achieves maximum efficiency $ \eta = 1 - \frac{T_C}{T_H} $, derived from the cycle's isothermal and adiabatic steps where work output equals heat input differences scaled by temperatures. Real engines, like steam or internal combustion types, approach but never reach this limit due to irreversibilities, with the second law dictating entropy production in practical operations. Examples illustrate these principles in everyday phenomena. Phase transitions, such as water boiling at 100°C under standard pressure, involve latent heat absorption at constant temperature, where the first law accounts for energy to break intermolecular bonds without temperature change, and entropy increases as the system moves to a higher-disorder state. Specific heat capacities quantify energy needed to raise temperature, like water's high value of 4.184 J/g·K enabling climate moderation, reflecting molecular vibrational storage under the first law without phase change. These macroscopic behaviors highlight thermodynamics' predictive power for energy transfers in bulk matter.[23]
Mathematical Frameworks
Newtonian Formulation
The Newtonian formulation of classical physics centers on Isaac Newton's three laws of motion, particularly the second law, which posits that the net force on a particle of mass $ m $ equals the product of its mass and acceleration: $ \mathbf{F} = m \mathbf{a} $. This equation establishes the foundational mathematical framework for describing the dynamics of mechanical systems in terms of ordinary differential equations (ODEs). In vector form, it yields a system of second-order ODEs for the position components, $ m \ddot{\mathbf{r}} = \mathbf{F}(\mathbf{r}, \dot{\mathbf{r}}, t) $, where $ \mathbf{r} $ is the position vector and dots denote time derivatives. These equations encapsulate the deterministic evolution of systems under specified forces, assuming absolute space and time.[50] For a simple case like a mass-spring system, Newton's second law applied to a restoring force $ F = -kx $ (Hooke's law) produces the second-order ODE $ m \ddot{x} + kx = 0 $. The general solution is $ x(t) = A \cos(\omega t + \phi) $, where $ \omega = \sqrt{k/m} $ is the angular frequency, $ A $ is the amplitude, and $ \phi $ is the phase, determined by initial conditions. This oscillatory solution illustrates how the Newtonian approach yields exact analytical forms for linear systems with time-independent forces.[51][52] Solving these ODEs analytically is feasible for systems with constant or simple forces, such as uniform gravitational fields leading to parabolic trajectories under $ \mathbf{F} = -mg\hat{\mathbf{z}} $, where the velocity integrates to $ v_z = v_{z0} - gt $ and position to $ z = z_0 + v_{z0} t - \frac{1}{2} g t^2 $. For more complex, nonlinear, or time-varying forces, numerical integration methods are employed, including Euler's method or higher-order Runge-Kutta schemes, which approximate solutions by discretizing time steps and iteratively updating positions and velocities while preserving energy and stability where possible.
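The mass-spring ODE makes a convenient test case for such numerical integration; this sketch uses a semi-implicit Euler scheme (a simple, energy-stable variant of Euler's method) with assumed parameters and compares the result to the analytic cosine solution:

```python
import math

# Integrate the mass-spring ODE m x'' = -k x and compare with the analytic
# solution x(t) = A cos(omega t) for release from rest. Values illustrative.
m, k = 1.0, 4.0
omega = math.sqrt(k / m)     # angular frequency (2 rad/s here)
A = 1.0                      # amplitude: released from rest at x = A
x, v = A, 0.0
dt, t_end = 1e-4, 5.0

for _ in range(int(t_end / dt)):
    v += (-k / m) * x * dt   # semi-implicit (symplectic) Euler update
    x += v * dt

x_exact = A * math.cos(omega * t_end)
print(f"numeric x(5 s) = {x:.4f}, analytic = {x_exact:.4f}")
```

Updating velocity before position keeps the discrete energy bounded over long runs, which is why this variant is often preferred to the naive Euler step for oscillatory systems.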
These techniques enable simulations of multi-body interactions beyond analytical reach.[53][54] In celestial mechanics, the Newtonian formulation reduces the two-body problem—governed by inverse-square gravitational forces $ F = \frac{G m_1 m_2}{r^2} $—to an effective one-body problem via the center-of-mass frame, yielding conic section orbits (ellipses, parabolas, or hyperbolas) depending on the total energy. For bound systems with negative energy, orbits are ellipses, as demonstrated by integrating the radial equation to obtain the trajectory $ r(\theta) = \frac{L^2 / (\mu G m_1 m_2)}{1 + e \cos\theta} $, where $ L $ is angular momentum, $ \mu $ is the reduced mass, and $ e $ is eccentricity. Perturbation theory extends this by treating small additional forces (e.g., from other bodies) as deviations from the integrable two-body solution, expanding orbits in series to approximate long-term behavior like orbital precession.[55][56][57] To simplify the vector form of $ \mathbf{F} = m \mathbf{a} $, appropriate coordinate systems are chosen based on symmetry: Cartesian coordinates for linear motions, polar for planar central forces (expressing the acceleration components as $ a_r = \ddot{r} - r\dot{\theta}^2 $ and $ a_\theta = r\ddot{\theta} + 2\dot{r}\dot{\theta} $), and spherical for three-dimensional central potentials, decoupling radial and angular parts via conserved angular momentum. These transformations reduce the dimensionality of the ODEs, facilitating analytical or numerical solutions.[58][59] A key application is planetary motion, where Newton's universal gravitation predicts elliptical orbits with the Sun at one focus, aligning with Kepler's first law and deriving from the two-body solution under $ F = \frac{GMm}{r^2} $. Tidal forces exemplify differential gravity in this framework: for a body like Earth in the Moon's field, the gravitational acceleration varies across its diameter, yielding a stretching force $ \Delta F \approx \frac{2 G M_m m R}{d^3} $, where $ R $ is Earth's radius, $ d $ the Earth-Moon distance, and $ M_m $ the Moon's mass, causing ocean bulges. Energy-based methods, such as conservation of mechanical energy, can complement force integration but are secondary to direct ODE solving here.[60][61][62]
Variational Principles
Variational principles provide a foundational framework in classical physics for deriving the equations of motion through optimization of a quantity known as the action, offering an alternative to direct force-based approaches. This method posits that the actual path taken by a physical system extremizes the action functional, which is the time integral of the Lagrangian. The Lagrangian itself is defined as the difference between the kinetic energy $ T $ and potential energy $ V $ of the system, $ L = T - V $. The principle of least action, formulated as $ \delta S = \delta \int_{t_1}^{t_2} L \, dt = 0 $, states that variations in the path that keep the endpoints fixed yield zero change in the action, leading to the true trajectory. This principle was first proposed by Pierre-Louis Maupertuis in 1744 for optical and mechanical systems, and later rigorously developed by Leonhard Euler and Joseph-Louis Lagrange.[63][64] From the principle of least action, the Euler-Lagrange equations emerge as the governing differential equations for the system's dynamics. For a system described by generalized coordinates $ q_i $ and their time derivatives $ \dot{q}_i $, the Euler-Lagrange equation is given by $ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) - \frac{\partial L}{\partial q_i} = 0 $ for each coordinate $ q_i $. These equations, derived in Euler's 1744 work Methodus inveniendi lineas curvas and systematized by Lagrange in his 1788 treatise Mécanique Analytique, transform the variational problem into a set of second-order ordinary differential equations equivalent to Newton's laws but applicable in arbitrary coordinates.[63][64] A further reformulation leads to Hamiltonian mechanics, which shifts the description from generalized coordinates and velocities to coordinates and momenta, facilitating analysis in phase space. The Hamiltonian is defined as $ H = \sum_i p_i \dot{q}_i - L $, where the canonical momenta are $ p_i = \frac{\partial L}{\partial \dot{q}_i} $. The dynamics are then governed by Hamilton's canonical equations: $ \dot{q}_i = \frac{\partial H}{\partial p_i} $, $ \dot{p}_i = -\frac{\partial H}{\partial q_i} $.
This framework was introduced by William Rowan Hamilton in his 1834 paper "On a General Method in Dynamics," where it unifies the treatment of mechanical systems through a single characteristic function.[65] Variational principles offer several advantages over Newtonian formulations, particularly in handling complex systems with constraints or non-Cartesian coordinates, as the Lagrangian incorporates these naturally without introducing auxiliary forces. Moreover, they reveal deep connections between symmetries of the Lagrangian and conservation laws via Noether's theorem, which states that every continuous symmetry of the action corresponds to a conserved quantity. For instance, time translation invariance implies energy conservation, and spatial translation invariance implies momentum conservation. This theorem, proven by Emmy Noether in her 1918 paper "Invariante Variationsprobleme," underscores the symmetry-based structure of classical physics.[63][66] Illustrative examples highlight the practicality of these methods. For a double pendulum, consisting of two masses connected by massless rods, the Lagrangian is constructed in terms of angular coordinates and their derivatives, yielding coupled Euler-Lagrange equations that capture chaotic motion more straightforwardly than direct Newtonian analysis in Cartesian coordinates. Similarly, for central force problems like planetary orbits, polar coordinates simplify the Lagrangian to $ L = \frac{1}{2} m (\dot{r}^2 + r^2 \dot{\theta}^2) - V(r) $, leading to conserved angular momentum from rotational symmetry and radial motion equations that recover Kepler's laws.[63]
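Hamilton's canonical equations can be integrated directly; this sketch evolves a one-dimensional harmonic oscillator (with assumed unit mass and spring constant) and checks that the Hamiltonian stays nearly constant, as time-translation symmetry requires via Noether's theorem.

```python
# Hamiltonian sketch for a 1-D harmonic oscillator, H = p^2/(2m) + k q^2/2.
# Hamilton's equations: dq/dt = dH/dp = p/m, dp/dt = -dH/dq = -k q.
# Time-translation symmetry (Noether) implies H should stay constant.
m, k = 1.0, 1.0     # assumed unit mass and spring constant
q, p = 1.0, 0.0     # initial coordinate and momentum
dt = 1e-4

def H(q, p):
    """Total energy of the oscillator."""
    return p * p / (2 * m) + 0.5 * k * q * q

H0 = H(q, p)
for _ in range(100_000):    # 10 s of motion
    p += -k * q * dt        # dp/dt = -dH/dq (semi-implicit update)
    q += (p / m) * dt       # dq/dt =  dH/dp

drift = abs(H(q, p) - H0)
print(f"energy drift after 10 s: {drift:.2e}")
```

The drift stays tiny because the semi-implicit update is symplectic: it preserves phase-space structure, so the discrete energy oscillates around the true value instead of drifting.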
Limitations and Transitions
Relation to Relativity
Classical physics, particularly Newtonian mechanics within the Galilean framework, assumes that signals and forces such as gravity propagate instantaneously across any distance, allowing for arbitrarily high speeds. This assumption leads to inconsistencies with experimental observations, such as the null result of the Michelson-Morley experiment in 1887, which sought to detect the Earth's motion relative to a hypothetical luminiferous ether but found no evidence of ether drift, implying that the speed of light is constant regardless of the observer's motion.[67] To resolve these issues, Albert Einstein introduced special relativity in 1905, which replaces the Galilean transformations with Lorentz transformations to maintain the invariance of the speed of light $ c $. The Lorentz transformations for coordinates between two inertial frames moving at relative velocity $ v $ along the x-axis are given by $ x' = \gamma (x - vt) $, $ y' = y $, $ z' = z $, $ t' = \gamma \left( t - \frac{vx}{c^2} \right) $, where $ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} $ is the Lorentz factor.[68] These transformations ensure that Maxwell's equations of electromagnetism are form-invariant across frames, unlike the Galilean versions $ x' = x - vt $, $ t' = t $, which would alter the perceived speed of light. In special relativity, the total energy of a particle includes its rest energy, expressed as $ E = mc^2 $ for a particle at rest, where $ m $ is the rest mass; this relation arises from the relativistic extension of kinetic energy and highlights the equivalence of mass and energy.[69] Relativistic kinematics further modifies classical concepts, with the relativistic mass $ m = \gamma m_0 $ (where $ m_0 $ is the rest mass) and momentum $ \mathbf{p} = \gamma m_0 \mathbf{v} $, preventing velocities from exceeding $ c $ and ensuring conservation laws hold in all inertial frames.[68] These adjustments show that classical physics approximates special relativity well at speeds much less than $ c $ ($ v \ll c $, where $ \gamma \approx 1 $), but deviates significantly near $ c $, where time dilation and length contraction become pronounced.
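The Lorentz factor makes this classical limit explicit: computed for a few assumed speeds, $ \gamma $ is indistinguishable from 1 at everyday velocities and grows sharply near $ c $.

```python
import math

c = 299_792_458.0  # m/s, speed of light in vacuum

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2) for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# At everyday speeds gamma is indistinguishable from 1, recovering the
# Galilean transformations; near c it grows without bound.
for v in (300.0, 0.1 * c, 0.9 * c, 0.99 * c):
    print(f"v = {v:12.3e} m/s  ->  gamma = {gamma(v):.6f}")
```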
Extending to gravity, Einstein's general relativity (1915) describes it not as a force but as the curvature of spacetime caused by mass-energy, governed by the equivalence principle, which states that the effects of gravity are locally indistinguishable from acceleration. In the weak-field limit, where gravitational potentials are small ($ |\Phi| \ll c^2 $) and velocities are non-relativistic, general relativity reduces to Newtonian gravity via the metric approximation $ g_{00} \approx -(1 + 2\Phi/c^2) $, $ g_{ij} \approx (1 - 2\Phi/c^2)\delta_{ij} $, yielding Poisson's equation $ \nabla^2 \Phi = 4\pi G \rho $ for the gravitational potential $ \Phi $.[70] A key validation is the anomalous precession of Mercury's perihelion, observed at 43 arcseconds per century beyond Newtonian predictions, which general relativity explains through spacetime curvature around the Sun. Similarly, the Global Positioning System (GPS) requires corrections from both special and general relativity: satellite clocks run faster by about 38 microseconds per day because the gravitational speed-up (general) outweighs the velocity slow-down (special), necessitating adjustments to maintain meter-level accuracy.[71]
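The quoted 38 microseconds per day can be reproduced from the weak-field rate shifts; this back-of-envelope sketch uses approximate values for Earth's gravitational parameter and the GPS orbital radius.

```python
# Back-of-envelope GPS clock correction, combining the gravitational
# (general-relativistic) and velocity (special-relativistic) rate shifts.
# Orbital values are approximate.
GM = 3.986004e14       # m^3/s^2, Earth's gravitational parameter
c = 299_792_458.0      # m/s, speed of light
R_earth = 6.371e6      # m, mean Earth radius
r_sat = 2.6571e7       # m, GPS orbital radius (~20,200 km altitude)
day = 86_400.0         # s

# General relativity: weaker gravity at orbit makes the satellite clock fast.
grav_shift = GM * (1 / R_earth - 1 / r_sat) / c**2 * day

# Special relativity: orbital speed v = sqrt(GM/r) makes it run slow.
v = (GM / r_sat) ** 0.5
vel_shift = -(v**2 / (2 * c**2)) * day

net_us = (grav_shift + vel_shift) * 1e6   # net offset in microseconds/day
print(f"gravitational: +{grav_shift * 1e6:.1f} us/day")
print(f"velocity:      {vel_shift * 1e6:.1f} us/day")
print(f"net:           +{net_us:.1f} us/day")
```

The gravitational term (about +46 microseconds per day) dominates the velocity term (about -7), giving a net offset close to the +38 microseconds per day quoted above.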