• Driver core-skin temperature gradients and blackouts

    Whilst it is highly beneficial to reduce the surface-to-bulk temperature gradient of a racing-tyre, the same cannot be said for the cognitive organisms controlling the slip-angles and slip-ratios of those tyres.

    A 2014 paper in the Journal of Thermal Biology, ‘Physiological strain of stock car drivers during competitive racing’, revealed that not only does the core body temperature increase during a motor-race, (if we do indeed count a stock-car race as such), but the skin temperature can also rise to such a degree that the core-to-skin temperature delta decreases from ~2 °C to ~1.3 °C.

    The authors suggest that a reduced core-to-skin temperature gradient increases the cardiovascular stress “by reducing central blood volume.” Citing a 1972 study of military pilots, they also suggest that when such conditions are combined with G-forces, the grayout (sic) threshold is reduced.
    Intriguingly, in the wake of Fernando Alonso’s alien abduction incident at Barcelona last week, they also assert that “A consequence of this combination may possibly result in a lower blackout tolerance.”

    Source: mccabism

  • Proof that Formula 1 was better in the past

    If you’re a long-time Formula 1 fan, then the chances are that you believe the sport was better in the past. However, you will probably also have read arguments from younger journalists and fans, to the effect that Formula 1 is better in the modern era than it was in the past.

    Fortunately, there is an objective means to resolve this dispute: churn.

    In sport, churn provides a straightforward measure of the uncertainty of outcome. Churn is simply the average difference between the relative rankings of the competitors at two different measurement points. One can measure the churn at an individual race by comparing finishing positions to grid positions; one can measure the churn from one race to another within a season by comparing the finishing positions in each race; and one can measure the inter-seasonal churn by comparing the championship positions from one year to another.

    The latter measure provides an objective means of tracking the level of seasonal uncertainty in Formula 1, and F1 Data Junkie Tony Hirst has recently compiled precisely these statistics, for both the drivers’ championship and the constructors’ championship, (see figures below). In each case, Hirst compiled the churn and the ‘adjusted churn’. The latter is the better measure because it normalises the statistics using the maximum possible value of the churn in each year. The maximum can change as the number of competitors changes.
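
    By way of illustration, (and this is not Hirst’s own code), a minimal Python sketch of the calculation might look as follows, assuming that churn is the mean absolute change in ranking between the two measurement points, and that the maximum possible churn corresponds to a complete reversal of the ranking order:

        def churn(ranks_a, ranks_b):
            """Mean absolute change in ranking between two measurement points.
            ranks_a and ranks_b map competitor -> rank (1 = first), e.g. championship
            position in two consecutive seasons; only competitors present at both
            measurement points are counted."""
            common = set(ranks_a) & set(ranks_b)
            return sum(abs(ranks_a[c] - ranks_b[c]) for c in common) / len(common)

        def adjusted_churn(ranks_a, ranks_b):
            """Churn normalised by the maximum possible churn for this number of
            competitors, which occurs when the ranking order is completely reversed."""
            n = len(set(ranks_a) & set(ranks_b))
            max_churn = sum(abs((i + 1) - (n - i)) for i in range(n)) / n
            return churn(ranks_a, ranks_b) / max_churn

        # Hypothetical championship positions in two consecutive seasons
        season_1 = {'A': 1, 'B': 2, 'C': 3, 'D': 4}
        season_2 = {'A': 3, 'B': 1, 'C': 4, 'D': 2}
        print(adjusted_churn(season_1, season_2))   # 0.75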

    The results for the drivers’ championship indicate that churn peaked in 1980. Given that the interest of many, if not most, spectators is dominated by the outcome of the drivers’ championship, this suggests that Formula 1 peaked circa 1980.

    The results for the constructors’ championship are slightly different, suggesting that uncertainty peaked in the late 1960s, (although the best-fit line peaks in the mid-1970s).

    One could, of course, make the alternative proposal that the churn within individual races is more important to spectators’ interest, but at the very least we now have an objective statistical measure which provides good reason for believing that Formula 1 was better in the 1970s and early 1980s.

    Source: mccabism

  • Lovelock and emergentism

    In his 2006 work, The Revenge of Gaia, James Lovelock concludes the chapter entitled What is Gaia? with a description of the regulator in James Watt’s steam engine, and the following argument:

    “Simple working regulators, the physiological systems in our bodies that regulate our temperature, blood pressure and chemical composition…are all outside the sharply-defined boundary of Cartesian cause-and-effect thinking. Whenever an engineer like Watt ‘closes the loop’ linking the parts of his regulator and sets the engine running, there is no linear way to explain its working. The logic becomes circular; more importantly, the whole thing has become more than the sum of its parts. From the collection of elements now in operation, a new property, self-regulation, emerges – a property shared by all living things, mechanisms like thermostats, automatic pilots, and the Earth itself.

    “The philosopher Mary Midgley in her pellucid writing reminds us that the twentieth century was the time when Cartesian science triumphed…Life, the universe, consciousness, and even simpler things like riding a bicycle, are inexplicable in words. We are only just beginning to tackle these emergent phenomena, and in Gaia they are as difficult as the near magic of the quantum physics of entanglement.”

    Now Lovelock is an elegant and fascinating author, but here his thought is lazy, sloganistic and poorly-informed. There are multiple confusions here, and such confusions are endemic amongst a number of writers and journalists who take an interest in science, so let’s try and clear them up.

    Firstly, we encounter the slogan that a system can be ‘more than the sum of its parts’. Unfortunately, the authors who make this statement never seem to conjoin the assertion with a definition of what they mean by the phrase ‘sum of its parts’. Most scientists would say that the sum of the parts of a system comprises the parts of the system, their properties, and all the relationships and interactions between the parts. If you think that there is more to a whole system than its parts, their properties and the relationships between the parts, then that amounts to a modern form of vitalism and/or dualism, the notion that living things and/or conscious things depend upon non-physical elements. Calling it ‘emergentism’ is simply a way of trying to dress up a disreputable idea in different language, rather in the manner that creationism was re-marketed as ‘intelligent design’.

    Assertions that a system can be more than the sum of its parts are frequently combined with attacks on so-called ‘reductionistic’ science. Anti-reductionistic authors can often be found pointing out that whole systems possess properties which are not possessed by any of the parts of which that system is composed. However, if such authors think this is somehow anti-reductionistic, then they have profoundly misunderstood what reductionistic science does. Scientists understand that whole systems possess properties which are not possessed by any of the parts; that’s precisely because the parts engage in various relationships and interactions. A primary objective of reductionistic science is to try and understand the properties of a whole system in terms of its parts, and the relationships between the parts: diamond and graphite, for example, are both composed of the same parts, (carbon atoms), but what gives diamond and graphite their different properties are the different arrangements of the carbon atoms. Explaining the different properties of diamond and graphite in terms of the different relationships between the parts of which they are composed is a triumph of so-called ‘reductionistic’ science.

    The next confusion we find in Lovelock’s argument is the notion that twentieth-century science was somehow linear, or Cartesian, and that non-linear systems with feedback somehow lie outside the domain of this world-view. Given the huge body of twentieth-century science devoted to non-linear systems, this will come as something of a surprise to many scientists. For example, in General Relativity, (that exemplar of twentieth-century science), the field equations are non-linear. Lovelock might even have heard the phrase ‘matter tells space how to curve, and space tells matter how to move’; a feedback cycle, in other words! Yet General Relativity is also a prime exemplar of determinism: the state of the universe at one moment in time uniquely determines its state at all other moments in time. There is clearly no reason to accept the implication that cause-and-effect must be confined to linear chains; non-linear systems with feedback are causal systems just as much as linear systems.

    It is amusing to note that Lovelock concludes his attack on so-called ‘Cartesian’ science with an allusion to quantum entanglement. Clearly, quantum entanglement is a product of quantum physics, that other exemplar of twentieth-century science. So, in one and the same breath, twentieth-century science is accused of being incapable of dealing with emergentism, yet also somehow yields the primary example of emergentism!

    Authors such as Lovelock and Midgley, and their journalistic brethren, are culpable here of insufficient curiosity and insufficient understanding. The arguments they raise against twentieth-century science merely indicate that they have failed to fully understand it.

    Source: mccabism

  • Highlights From The London Classic Car Show 2015

    For classic car fans, 8-11 January 2015 saw an inaugural event which was certain to set their gears into motion. Thousands of classic car enthusiasts flocked to the ExCeL centre in London’s docklands to feast their eyes on prestigious delights such as the Citroen DS3 and the indomitable LaFerrari, as well as a whole host of timeless treats on the first evening. Here are some highlights from the London Classic Car Show 2015:

    McLaren M23

    1) Visitors could see prestigious F1 legends such as the McLaren M23 pictured above.

    2) One of the unique highlights of the show was the ‘Car Runway’, where attendees could see their favourite vehicles driving – just as they were meant to be seen.

    Car Runway

    3) Another shot of the inaugural runway – a feature no other similar car show has offered before.

    Car Runway

    4) Spectators enjoyed looking at the classic cars

    Cool classic cars on show

    5) James May showcased his top cars that changed the world, and this 1964 Ford Mustang was one of the thirty delights on show.

    1964 Ford Mustang

    6) The 1954 Austin Mini was another of James May’s top picks.

    7) Nicholas Mee was there with over half a dozen classic Aston Martins, such as this one above.

    Lamborghini Diablo

    8) This 1994 Lamborghini Diablo was one of the cars shown on the Classic runway

    McLaren F1-GTR Harrods

    9) This McLaren F1-GTR Harrods was part of the ‘Le Mans – The Icons’ exhibition.

    10) As well as admiring some of the beautiful exteriors, the Classic Car Show gave revellers a chance to peek inside the engine bays of some of our most loved models. Find out more about The London Classic Car Show at www.thelondonclassiccarshow.co.uk

    By Natasha Colyer from Alternative Route Finance (www.alternativeroutefinance.co.uk)


  • Formula 1 turbines and enthalpy

    A couple of interesting developments occurred around the exhaust systems on both the Ferrari and Mercedes-engined Formula 1 cars in 2014: the Ferrari-engined vehicles acquired insulation around the exhaust-pipes, and the Mercedes-equipped cars appeared with a so-called log-type exhaust.

    The purpose of the insulation was to increase the temperature of the exhaust gases entering the turbine. Similarly, increasing the exhaust gas temperature was a purported beneficial side-effect of the log-type exhaust on the Mercedes.

    A couple of general points about the physics of turbines might provide some useful context here. First, the work done by the exhaust gases on the turbine comes from the total enthalpy (aka stagnation enthalpy) of the exhaust gas flow.

    This is perhaps a subtle concept. The total energy E in the fluid-flow through any type of turbine consists of:

    E = kinetic energy + potential (gravitational) energy + internal energy

    However, to understand the change of fluid-energy between the inlet and outlet of a turbine, it is necessary to introduce the enthalpy h, the sum of the internal energy e and the so-called flow-work pv:

    h = e + pv ,

    where p is the pressure, and v is the specific volume, (the volume occupied by a unit mass of fluid).

    One way of looking at the flow-work is that it is part of the energy expended by the fluid maintaining the flow; the fluid performs work upon itself, (in addition to the external work it performs exerting a torque on the turbine), and this work can be divided into that performed by the pressure gradient and the work done in compression/expansion.

    Another way of looking at it is that the energy released into the fluid from a combustion process may have been released at constant pressure as the fluid performed work expanding against its environment. The internal energy e doesn’t take that into account, but the enthalpy h = e + pv does. As the diagram above from Daniel Schroeder’s Thermal Physics suggests, the enthalpy counts not only the current internal energy of a system, but also the work which would be required to create the volume which the system occupies.
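
    To make the flow-work term concrete, here is a minimal numerical sketch, treating the exhaust gas as ideal dry air at an assumed temperature, (both the gas properties and the temperature are illustrative stand-ins, not figures from the discussion above):

        # For an ideal gas, e = cv*T and the flow-work pv = R*T (all per unit mass),
        # so the specific enthalpy is h = e + pv = cp*T.
        R  = 287.0           # J/(kg K), gas constant for dry air (stand-in for exhaust gas)
        cv = 718.0           # J/(kg K), specific heat at constant volume
        cp = cv + R          # J/(kg K), specific heat at constant pressure

        T = 1173.0           # K, an assumed exhaust-gas temperature (~900 C)
        e = cv * T           # internal energy per unit mass
        flow_work = R * T    # the pv term per unit mass
        h = e + flow_work    # = cp * T
        print(e, flow_work, h)   # the flow-work is a substantial fraction of the enthalpy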

    A flowing system possesses energy of motion (kinetic energy) in addition to its enthalpy. The so-called total enthalpy hT is simply the sum of the enthalpy and the kinetic energy per unit mass:

    hT = e + pv + (1/2)V² ,

    where V is the fluid-flow velocity; the flow-work term pv can equally be written p/ρ, where ρ = 1/v is the mass density.

    This quantity is also called the stagnation enthalpy because if you brought a fluid parcel to a stagnation point, at zero velocity, without allowing any heat transfer to take place to adjacent fluid or solid walls, the kinetic energy component of the total energy in that parcel would be transformed into enthalpy.

    In the case of a Formula 1 turbine, there is no significant difference in the potential energy of the exhaust gas between the inlet and outlet, so this term can be omitted from the expression for the change in energy. What remains entails that the rate at which the turbine develops power is given by subtracting the total-enthalpy flow rate at the outlet from the total-enthalpy flow rate at the inlet. The greater the decrease in total enthalpy, the greater the power generated by the turbine.
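
    A back-of-the-envelope sketch of that statement might look as follows, (again treating the exhaust gas as an ideal gas; the specific heat, mass flow, temperatures and velocities are assumed, illustrative figures rather than measured values):

        def total_enthalpy(T, V, cp=1150.0):
            """Specific total (stagnation) enthalpy: cp*T plus the kinetic energy
            per unit mass; cp here is a rough value for hot combustion gas."""
            return cp * T + 0.5 * V**2

        mdot = 0.5                                   # kg/s, assumed exhaust mass-flow rate
        hT_in = total_enthalpy(T=1200.0, V=400.0)    # assumed inlet conditions (K, m/s)
        hT_out = total_enthalpy(T=1050.0, V=150.0)   # assumed outlet conditions
        power = mdot * (hT_in - hT_out)              # W, rate of work done on the turbine
        print(power / 1e3, "kW")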

    As the exhaust gases pass through the turbine, they lose both kinetic energy and static pressure, but gain some internal energy due to friction. As a consequence, the entropy of the exhaust gas increases, and the enthalpy reduction is not quite as large as it would otherwise be (see diagram above from Fluid Mechanics, J.F.Douglas, J.M.Gasiorek and J.A.Swaffield).

    However, (and here is the crux of the matter), for a given pressure difference between the turbine inlet and outlet, the reduction in total enthalpy increases with increasing temperature at the inlet. In other words, this is another expression of the fact that the thermal efficiency of a turbine is greater at higher temperatures (a fact which also dominates the design of nuclear reactors).
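
    One way to see this numerically, (not something spelled out above), is the standard ideal-gas isentropic relation for the enthalpy drop across a turbine; the sketch below uses assumed, illustrative values for cp, gamma and the pressure ratio:

        def isentropic_enthalpy_drop(T_in, pressure_ratio, cp=1150.0, gamma=1.33):
            """Ideal (isentropic) drop in specific enthalpy across the turbine:
            delta_h = cp * T_in * (1 - r**((1 - gamma) / gamma)), with r = p_in / p_out."""
            return cp * T_in * (1.0 - pressure_ratio ** ((1.0 - gamma) / gamma))

        # Same pressure ratio, two inlet temperatures: the hotter inlet gives the larger drop.
        for T_in in (950.0, 1150.0):    # K, assumed inlet temperatures
            print(T_in, isentropic_enthalpy_drop(T_in, pressure_ratio=2.0))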

    So, all other things being equal, increasing exhaust gas temperature with insulation or a log-type exhaust geometry will increase the loss of total enthalpy between the inlet and outlet of the turbine, increasing the power generated by the turbine.

    However, there is another side to this coin: the required pressure drop between the turbine inlet and outlet for a desired enthalpy reduction decreases as the inlet temperature increases. Hence, if there is a required turbine power-level, it can be achieved with a lower pressure drop if the exhaust gases are hotter. This could be important, because the lower the pressure at the inlet side of the turbine, the lower the back-pressure which otherwise potentially inhibits the power generated by the internal combustion engine upstream. So increasing exhaust gas temperatures might be about getting the same turbine power with less detrimental back-pressure on the engine.
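
    Inverting the same assumed relation illustrates the other side of the coin: for a target enthalpy drop, (i.e. a target turbine power at a given mass flow), the required pressure ratio falls as the inlet temperature rises:

        def required_pressure_ratio(delta_h, T_in, cp=1150.0, gamma=1.33):
            """Pressure ratio p_in / p_out needed for a target isentropic enthalpy drop,
            obtained by inverting delta_h = cp * T_in * (1 - r**((1 - gamma) / gamma))."""
            return (1.0 - delta_h / (cp * T_in)) ** (gamma / (1.0 - gamma))

        # Same target drop, two inlet temperatures: the hotter inlet needs a smaller
        # pressure ratio, and hence imposes less back-pressure on the engine upstream.
        for T_in in (950.0, 1150.0):    # K, assumed inlet temperatures
            print(T_in, required_pressure_ratio(delta_h=170e3, T_in=T_in))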

    Source: mccabism
