Edit model card

SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1

This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/multi-qa-mpnet-base-cos-v1 as the Sentence Transformer embedding model. A SetFitHead instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.

Model Details

Model Description

Model Sources

Model Labels

Label Examples
28
  • 'alpha mathbb q sqrt dtusqrt dtuin mathbb q the discriminant δ displaystyle delta of the field q d displaystyle mathbb q sqrt d is defined by δ 4 d if d ≡ 2 or 3 m o d 4 d if d ≡ 1 m o d 4 displaystyle delta begincases4dtext if dequiv 2text or 3mathrm mod 41mmdtext if dequiv 1mathrm mod 4endcases let d = 1 displaystyle dneq 1 be a rational integer without square factors except 1 the set of quadratic algebraic integers of radicand d displaystyle d is denoted by o d displaystyle osqrt d this set is given by o d a b d a b ∈ z if d ≡ 2 or 3 m o d 4 a b d 2 a b ∈ z a ≡ b m o d 2 if d ≡ 1 m o d 4 displaystyle osqrt dbegincasesabsqrt dabin mathbb z text if dequiv 2text or 3mathrm mod 41mmabsqrt d2abin mathbb z aequiv bmathrm mod 2text if dequiv 1mathrm mod 4endcases o d displaystyle osqrt d is a ring under ordinary addition and multiplication if we let δ − d if δ is even 1 − d 2 if δ is odd displaystyle delta begincasessqrt dtext if delta text is even1mm1sqrt d2text if delta text is oddendcases then o d a b δ a b ∈ z displaystyle osqrt dabdelta abin mathbb z let a displaystyle mathbf a be an ideal in the ring of integers o d displaystyle osqrt d that is let a displaystyle mathbf a be a nonempty subset of o d displaystyle osqrt d such that for any α β ∈ a displaystyle alpha beta in mathbf a and any λ μ ∈ o d displaystyle lambda mu in osqrt d λ α μ β ∈ a displaystyle lambda alpha mu beta in mathbf a an ideal a displaystyle mathbf a as defined here is sometimes referred to as an integral ideal to distinguish from fractional ideal to be defined below if a displaystyle mathbf a is an ideal in o d displaystyle osqrt d then one can find α 1 α 2 ∈ o d displaystyle alpha 1alpha 2'
  • 'n displaystyle mathbb z 2n to r displaystyle mathbb r since hom z g [UNK] g displaystyle texthommathbb z gcong g for any abelian group g displaystyle g we have hom z 2 n r [UNK] r 2 n displaystyle texthommathbb z 2nmathbb r cong mathbb r 2n after composing with the complex exponential we find that hom z 2 n u 1 [UNK] r 2 n z 2 n displaystyle texthommathbb z 2nu1cong mathbb r 2nmathbb z 2n which is the expected result since every finitely generated abelian group is isomorphic to g [UNK] z n ⊕ [UNK] i 1 m z a i displaystyle gcong mathbb z noplus bigoplus i1mmathbb z ai the character group can be easily computed in all finitely generated cases from universal properties and the isomorphism between finite products and coproducts we have the character groups of g displaystyle g is isomorphic to hom z c ∗ ⊕ n ⊕ [UNK] i 1 k hom z n i c ∗ displaystyle texthommathbb z mathbb c oplus noplus bigoplus i1ktexthommathbb z nimathbb c for the first case this is isomorphic to c ∗ ⊕ n displaystyle mathbb c oplus n the second is computed by looking at the maps which send the generator 1 ∈ z n i displaystyle 1in mathbb z ni to the various powers of the n i displaystyle ni th roots of unity ζ n i exp 2 π i n i displaystyle zeta niexp2pi ini consider the n × n displaystyle ntimes n matrix a ag whose matrix elements are a j k f j g k displaystyle ajkfjgk where g k displaystyle gk is the kth element of g the sum of the entries in the jth row of a is given by [UNK] k 1 n a j k [UNK] k 1 n f j g k 0 displaystyle sum k1najksum k1nfjgk0 if j = 1 displaystyle jneq 1 and [UNK] k 1 n a 1 k n displaystyle sum k1na1kn the sum of the entries in the kth column of a is given by [UNK] j 1 n a j k [UNK] j 1 n f j g k 0 displaystyle sum j1najksum j1'
  • 'the cunningham project is a collaborative effort started in 1925 to factor numbers of the form bn ± 1 for b 2 3 5 6 7 10 11 12 and large n the project is named after allan joseph champneys cunningham who published the first version of the table together with herbert j woodall there are three printed versions of the table the most recent published in 2002 as well as an online version by samuel wagstaffthe current limits of the exponents are two types of factors can be derived from a cunningham number without having to use a factorization algorithm algebraic factors of binomial numbers eg difference of two squares and sum of two cubes which depend on the exponent and aurifeuillean factors which depend on both the base and the exponent from elementary algebra b k n − 1 b n − 1 [UNK] r 0 k − 1 b r n displaystyle bkn1bn1sum r0k1brn for all k and b k n 1 b n 1 [UNK] r 0 k − 1 − 1 r ⋅ b r n displaystyle bkn1bn1sum r0k11rcdot brn for odd k in addition b2n − 1 bn − 1bn 1 thus when m divides n bm − 1 and bm 1 are factors of bn − 1 if the quotient of n over m is even only the first number is a factor if the quotient is odd bm 1 is a factor of bn − 1 if m divides n and the quotient is odd in fact b n − 1 [UNK] d [UNK] n φ d b displaystyle bn1prod dmid nphi db and b n 1 [UNK] d [UNK] 2 n d [UNK] n φ d b displaystyle bn1prod dmid 2ndnmid nphi db see this page for more information when the number is of a particular form the exact expression varies with the base aurifeuillean factorization may be used which gives a product of two or three numbers the following equations give aurifeuillean factors for the cunningham project bases as a product of f l and mlet b s2 × k with squarefree k if one of the conditions holds then φ n b displaystyle phi nb have aurifeuillean factorization i k ≡ 1 mod 4 displaystyle kequiv 1mod 4 and n ≡ k mod 2 k displaystyle nequiv kpmod 2k ii k ≡ 2 3 mod 4 displaystyle kequiv 23pmod 4 and n ≡ 2 k'
0
  • 'the decibel symbol db is a relative unit of measurement equal to one tenth of a bel b it expresses the ratio of two values of a power or rootpower quantity on a logarithmic scale two signals whose levels differ by one decibel have a power ratio of 10110 approximately 126 or rootpower ratio of 101⁄20 approximately 112the unit expresses a relative change or an absolute value in the latter case the numeric value expresses the ratio of a value to a fixed reference value when used in this way the unit symbol is often suffixed with letter codes that indicate the reference value for example for the reference value of 1 volt a common suffix is v eg 20 dbvtwo principal types of scaling of the decibel are in common use when expressing a power ratio it is defined as ten times the logarithm in base 10 that is a change in power by a factor of 10 corresponds to a 10 db change in level when expressing rootpower quantities a change in amplitude by a factor of 10 corresponds to a 20 db change in level the decibel scales differ by a factor of two so that the related power and rootpower levels change by the same value in linear systems where power is proportional to the square of amplitude the definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the bell system in the united states the bel was named in honor of alexander graham bell but the bel is seldom used instead the decibel is used for a wide variety of measurements in science and engineering most prominently for sound power in acoustics in electronics and control theory in electronics the gains of amplifiers attenuation of signals and signaltonoise ratios are often expressed in decibels the decibel originates from methods used to quantify signal loss in telegraph and telephone circuits until the mid1920s the unit for loss was miles of standard cable msc 1 msc corresponded to the loss of power over one mile approximately 16 km of standard telephone 
cable at a frequency of 5000 radians per second 7958 hz and matched closely the smallest attenuation detectable to a listener a standard telephone cable was a cable having uniformly distributed resistance of 88 ohms per loopmile and uniformly distributed shunt capacitance of 0054 microfarads per mile approximately corresponding to 19 gauge wirein 1924 bell telephone laboratories received a favorable response to a new unit definition among members of the international advisory committee on long distance telephony in europe and replaced the msc with the transmission unit tu 1'
  • '##590637 atcheson bradley heidrich wolfgang ihrke ivo 5 october 2008 an evaluation of optical flow algorithms for background oriented schlieren imaging experiments in fluids 46 3 467 – 476 doi101007s0034800805727 s2cid 17713504 skeen scott a manin julien pickett lyle m 2015 simultaneous formaldehyde plif and highspeed schlieren imaging for ignition visualization in highpressure spray flames proceedings of the combustion institute 35 3 3167 – 3174 doi101016jproci201406040 willert christian e mitchell daniel m soria julio 5 april 2012 an assessment of highpower lightemitting diodes for high frame rate schlieren imaging experiments in fluids 53 2 413 – 421 bibcode2012exfl53413w doi101007s0034801212971 s2cid 120726611'
  • 'stripline and microstrip transmission lines such planar transmissionline resonators can be very compact in size and are widely used elements in microwave circuitry in cryogenic solidstate research superconducting transmissionline resonators contribute to solidstate spectroscopy and quantum information science in a laser light is amplified in a cavity resonator that is usually composed of two or more mirrors thus an optical cavity also known as a resonator is a cavity with walls that reflect electromagnetic waves ie light this allows standing wave modes to exist with little loss mechanical resonators are used in electronic circuits to generate signals of a precise frequency for example piezoelectric resonators commonly made from quartz are used as frequency references common designs consist of electrodes attached to a piece of quartz in the shape of a rectangular plate for high frequency applications or in the shape of a tuning fork for low frequency applications the high dimensional stability and low temperature coefficient of quartz helps keeps resonant frequency constant in addition the quartzs piezoelectric property converts the mechanical vibrations into an oscillating voltage which is picked up by the attached electrodes these crystal oscillators are used in quartz clocks and watches to create the clock signal that runs computers and to stabilize the output signal from radio transmitters mechanical resonators can also be used to induce a standing wave in other media for example a multiple degree of freedom system can be created by imposing a base excitation on a cantilever beam in this case the standing wave is imposed on the beam this type of system can be used as a sensor to track changes in frequency or phase of the resonance of the fiber one application is as a measurement device for dimensional metrology the most familiar examples of acoustic resonators are in musical instruments every musical instrument has resonators some generate the sound 
directly such as the wooden bars in a xylophone the head of a drum the strings in stringed instruments and the pipes in an organ some modify the sound by enhancing particular frequencies such as the sound box of a guitar or violin organ pipes the bodies of woodwinds and the sound boxes of stringed instruments are examples of acoustic cavity resonators the exhaust pipes in automobile exhaust systems are designed as acoustic resonators that work with the muffler to reduce noise by making sound waves cancel each other out the exhaust note is an important feature for some vehicle owners so both the original manufacturers and the aftermarket suppliers use the resonator to enhance the sound in tuned exhaust systems designed for performance the resonance of the exhaust pipes can also be used'
5
  • 'chon is a mnemonic acronym for the four most common elements in living organisms carbon hydrogen oxygen and nitrogen the acronym chnops which stands for carbon hydrogen nitrogen oxygen phosphorus sulfur represents the six most important chemical elements whose covalent combinations make up most biological molecules on earth all of these elements are nonmetals in a human body these four elements compose about 96 of the weight and major minerals macrominerals and minor minerals also called trace elements compose the remaindersulfur is contained in the amino acids cysteine and methionine phosphorus is contained in phospholipids a class of lipids that are a major component of all cell membranes as they can form lipid bilayers which keep ions proteins and other molecules where they are needed for cell function and prevent them from diffusing into areas where they should not be phosphate groups are also an essential component of the backbone of nucleic acids general name for dna rna and are required to form atp – the main molecule used as energy powering the cell in all living creaturescarbonaceous asteroids are rich in chon elements these asteroids are the most common type and frequently collide with earth as meteorites such collisions were especially common early in earths history and these impactors may have been crucial in the formation of the planets oceansthe simplest compounds to contain all of the chon elements are isomers fulminic acid hcno isofulminic acid honc cyanic acid hocn and isocyanic acid hnco having one of each atom abundance of the chemical elements carbonbased life biochemistry bioinorganic chemistry'
  • 'to expand atas capabilities for detection of signals that may have embedded messagesberkeley astronomers used the ata to pursue several science topics some of which might have transient seti signals until 2011 when the collaboration between the university of california berkeley and the seti institute was terminated cnet published an article and pictures about the allen telescope array ata on december 12 2008in april 2011 the ata was entered an 8month hibernation due to funding shortfalls regular operation of the ata resumed on december 5 2011in 2012 the ata was revitalized with a 36 million donation by franklin antonio cofounder and chief scientist of qualcomm incorporated this gift supported upgrades of all the receivers on the ata dishes to have 2× to 10× over the range 1 – 8 ghz greater sensitivity than before and supporting observations over a wider frequency range from 1 – 18 ghz though initially the radio frequency electronics only go to 12 ghz as of july 2013 the first of these receivers was installed and proven with full installation on all 42 antennas being expected for june 2017 ata is well suited to the search for extraterrestrial intelligence seti and to discovery of astronomical radio sources such as heretofore unexplained nonrepeating possibly extragalactic pulses known as fast radio bursts or frbs serendip search for extraterrestrial radio emissions from nearby developed intelligent populations is a seti program launched in 1979 by the berkeley seti research center serendip takes advantage of ongoing mainstream radio telescope observations as a piggyback or commensal program using large radio telescopes including the nrao 90m telescope at green bank and formerly the arecibo 305m telescope rather than having its own observation program serendip analyzes deep space radio telescope data that it obtains while other astronomers are using the telescopes the most recently deployed serendip spectrometer serendip vi was installed at both the arecibo 
telescope and the green bank telescope in 2014 – 2015 breakthrough listen is a tenyear initiative with 100 million funding begun in july 2015 to actively search for intelligent extraterrestrial communications in the universe in a substantially expanded way using resources that had not previously been extensively used for the purpose it has been described as the most comprehensive search for alien communications to date the science program for breakthrough listen is based at berkeley seti research center located in the astronomy department at the university of california berkeley announced in july 2015 the project is observing for thousands of hours every year on two major radio telescopes the green bank observatory in west virginia and the parkes observatory in australia previously only about 24'
  • 'proposed studies on the implications of peaceful space activities for human affairs often referred to as the brookings report was a 1960 report commissioned by nasa and created by the brookings institution in collaboration with nasas committee on longrange studies it was submitted to the house committee on science and astronautics of the united states house of representatives in the 87th united states congress on april 18 1961 the report has become noted for one short section entitled the implications of a discovery of extraterrestrial life which examines the potential implications of such a discovery on public attitudes and values the section briefly considers possible public reactions to some possible scenarios for the discovery of extraterrestrial life stressing a need for further research in this area it recommended continuing studies to determine the likely social impact of such a discovery and its effects on public attitudes including study of the question of how leadership should handle information about such a discovery and under what circumstances leaders might or might not find it advisable to withhold such information from the public the significance of this section of the report is a matter of controversy persons who believe that extraterrestrial life has already been confirmed and that this information is being withheld by government from the public sometimes turn to this section of the report as support for their view frequently cited passages from this section of the report are drawn both from its main body and from its footnotesthe report has been mentioned in newspapers such as the new york times the baltimore sun the washington times and the huffington post the report was entered into the congressional record which is currently archived at over 1110 libraries as part of the federal depository library programthe main author donald n michael was a social psychologist with a background in the natural sciences he was a fellow of the american 
association for the advancement of science the american psychological association the society for the psychological study of social issues and the world academy of art and scienceover 50 years after the report was initially released the brookings institution again focused on space policy by hosting several panels of experts to discuss topics such as the economic benefits of private industry ’ s involvement the scientific discoveries resulting from nasa ’ s continued space efforts and the potential for future exploration and the government ’ s policies and decision making process although the report discusses the need for research on many policy issues related to space exploration it is most often cited for passages from its brief section on the implications of a discovery of extraterrestrial life see section use in discussions about possible coverups the report contains the following chapters 5 introduction goals and methods comments on the organization and functions of a nasa social science research capability implications of satellitebased'
25
  • 'and a lower bound or equivalently if it is contained in an interval note that this is not just a property of the set s but also one of the set s as subset of p a bounded poset p that is by itself not as subset is one that has a least element and a greatest element note that this concept of boundedness has nothing to do with finite size and that a subset s of a bounded poset p with as order the restriction of the order on p is not necessarily a bounded poset a subset s of rn is bounded with respect to the euclidean distance if and only if it bounded as subset of rn with the product order however s may be bounded as subset of rn with the lexicographical order but not with respect to the euclidean distance a class of ordinal numbers is said to be unbounded or cofinal when given any ordinal there is always some element of the class greater than it thus in this case unbounded does not mean unbounded by itself but unbounded as a subclass of the class of all ordinal numbers bounded domain bounded function local boundedness order theory totally bounded'
  • '##2 1021 2012 evaluated at every third term combining pairs of fractions produces 114 32 2 2 3 32 3 64 3 2050 − 1 − 1 2050 − 1 2050 − [UNK] 32 3 64 6150 − 3 − 9 6150 − 9 6150 − [UNK] displaystyle sqrt 114cfrac sqrt 32223cfrac 323cfrac 64320501cfrac 12050cfrac 12050ddots cfrac 323cfrac 6461503cfrac 96150cfrac 96150ddots which is now 10 1 2 10 2 1 20 1 2 [UNK] displaystyle 1012overline 10212012 evaluated at the third term and every six terms thereafter continued fraction – number represented as a01a11 generalized continued fraction – generalization of continued fractions in which the partial numerators and partial denominators can assume arbitrary complex valuespages displaying wikidata descriptions as a fallback hermites problem continued fraction method of computing square roots – algorithms for calculating square roots restricted partial quotients continued fraction factorization – an integer factorization algorithmpages displaying wikidata descriptions as a fallback'
  • 'in mathematical analysis a family of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood in a precise sense described herein in particular the concept applies to countable families and thus sequences of functions equicontinuity appears in the formulation of ascolis theorem which states that a subset of cx the space of continuous functions on a compact hausdorff space x is compact if and only if it is closed pointwise bounded and equicontinuous as a corollary a sequence in cx is uniformly convergent if and only if it is equicontinuous and converges pointwise to a function not necessarily continuous apriori in particular the limit of an equicontinuous pointwise convergent sequence of continuous functions fn on either metric space or locally compact space is continuous if in addition fn are holomorphic then the limit is also holomorphic the uniform boundedness principle states that a pointwise bounded family of continuous linear operators between banach spaces is equicontinuous let x and y be two metric spaces and f a family of functions from x to y we shall denote by d the respective metrics of these spaces the family f is equicontinuous at a point x0 ∈ x if for every ε 0 there exists a δ 0 such that dƒx0 ƒx ε for all ƒ ∈ f and all x such that dx0 x δ the family is pointwise equicontinuous if it is equicontinuous at each point of xthe family f is uniformly equicontinuous if for every ε 0 there exists a δ 0 such that dƒx1 ƒx2 ε for all ƒ ∈ f and all x1 x2 ∈ x such that dx1 x2 δfor comparison the statement all functions ƒ in f are continuous means that for every ε 0 every ƒ ∈ f and every x0 ∈ x there exists a δ 0 such that dƒx0 ƒx ε for all x ∈ x such that dx0 x δ for continuity δ may depend on ε ƒ and x0 for uniform continuity δ may depend on ε and ƒ for pointwise equicontinuity δ may depend on ε and x0 for uniform equicontinuity δ may depend only on εmore generally when x is a topological space 
a set f of functions from x to y is said to be equicontinuous at x if for every ε 0 x has a neighborhood u'
27
  • 'a quantum solvent is essentially a superfluid aka a quantum liquid used to dissolve another chemical species any superfluid can theoretically act as a quantum solvent but in practice the only viable superfluid medium that can currently be used is helium4 and it has been successfully accomplished in controlled conditions such solvents are currently under investigation for use in spectroscopic techniques in the field of analytical chemistry due to their superior kinetic properties any matter dissolved or otherwise suspended in the superfluid will tend to aggregate together in clumps encapsulated by a quantum solvation shell due to the totally frictionless nature of the superfluid medium the entire object then proceeds to act very much like a nanoscopic ball bearing allowing effectively complete rotational freedom of the solvated chemical species a quantum solvation shell consists of a region of nonsuperfluid helium4 atoms that surround the molecules and exhibit adiabatic following around the centre of gravity of the solute as such the kinetics of an effectively gaseous molecule can be studied without the need to use an actual gas which can be impractical or impossible it is necessary to make a small alteration to the rotational constant of the chemical species being examined in order to compensate for the higher mass entailed by the quantum solvation shell quantum solvation has so far been achieved with a number of organic inorganic and organometallic compounds and it has been speculated that as well as the obvious use in the field of spectroscopy quantum solvents could be used as tools in nanoscale chemical engineering perhaps to manufacture components for use in nanotechnology'
  • 'nanofiltration is a membrane filtration process used most often to soften and disinfect water nanofiltration is a membrane filtrationbased method that uses nanometer sized pores through which particles smaller than 10 nanometers pass through the membrane nanofiltration membranes have pore sizes from 110 nanometers smaller than that used in microfiltration and ultrafiltration but a little bit bigger than that in reverse osmosis membranes used are predominantly created from polymer thin films materials that are commonly used include polyethylene terephthalate or metals such as aluminum pore dimensions are controlled by ph temperature and time during development with pore densities ranging from 1 to 106 pores per cm2 membranes made from polyethylene terephthalate and other similar materials are referred to as tracketch membranes named after the way the pores on the membranes are made tracking involves bombarding the polymer thin film with high energy particles this results in making tracks that are chemically developed into the membrane or etched into the membrane which are the pores membranes created from metal such as alumina membranes are made by electrochemically growing a thin layer of aluminum oxide from aluminum metal in an acidic medium historically nanofiltration and other membrane technology used for molecular separation was applied entirely on aqueous systems the original uses for nanofiltration were water treatment and in particular water softening nanofilters soften water by retaining scaleforming divalent ions eg ca2 mg2nanofiltration has been extended into other industries such as milk and juice production as well as pharmaceuticals fine chemicals and flavour and fragrance industries one of the main advantages of nanofiltration as a method of softening water is that during the process of retaining calcium and magnesium ions while passing smaller hydrated monovalent ions filtration is performed without adding extra sodium ions as used in ion 
exchangers many separation processes do not operate at room temperature eg distillation which greatly increases the cost of the process when continuous heating or cooling is applied performing gentle molecular separation is linked with nanofiltration that is often not included with other forms of separation processes centrifugation these are two of the main benefits that are associated with nanofiltration nanofiltration has a very favorable benefit of being able to process large volumes and continuously produce streams of products still nanofiltration is the least used method of membrane filtration in industry as the membrane pores sizes are limited to only a few nanometers anything smaller reverse osmosis is used and anything larger is used for ultrafiltration ultrafiltration can also be'
  • 'electrical conductance magnetism and optical effects nanotechnology has an almost limitless string of applications in biology biotechnology and biomedicine nanotechnology has engendered a growing sense of excitement due to the ability to produce and utilize materials devices and systems through the control of matter on the nanometer scale 1 to 50 nm this bottomup approach requires less material and causes less pollution nanotechnology has had several commercial applications in advanced laser technology hard coatings photography pharmaceuticals printing chemicalmechanical polishing and cosmetics soon there will be lighter cars using nanoparticle reinforced polymers orally applicable insulin artificial joints made from nanoparticulate materials and lowcalorie foods with nanoparticulate taste enhancers viruses have long been studied as deadly pathogens to cause disease in all living forms by the 1950s researchers had begun thinking of viruses as tools in addition of pathogens bacteriophage genomes and components of the protein expression machinery have been widely utilized as tools for understanding the fundamental cellular process on the basis of these studies several viruses have been exploited as expression systems in biotechnology later in the 1970s viruses are used as a vector for the benefit of humans since that often viruses are used as vectors for gene therapy cancer control and control of harmful or damaging organisms in both agriculture and medicinerecently a new approach to exploiting viruses and their capsids for biotechnology began to change toward using them for nanotechnology application researchers douglas and young montana state university bozeman mt usa were the first to consider the utility of a virus capsid as a nanomaterial they have taken plant virus cowpea chlorotic mottle virus ccmv for their study ccmv showed a highly dynamic platform with ph and metal ion dependent structural transitions douglas and young made use of these capsid 
dynamics and exchanged the natural cargo nucleic acid with synthetic materials since then many materials have been encapsulated into ccmv and other vnps at about the same time the research team led by mann university of bristol uk pioneered a new area using the rodshaped particles of tmv tobacco mosaic virus the particles were used as templates for the fabrication of a range of metallized nanotube structures using mineralization techniques tmv particles have also been utilized to generate various structures nanotubes and nanowires for use in batteries and data storage devicesviral capsids have attracted great interest in the field of nanobiology because of their nanoscale size symmetrical structural organization load capacity controllable selfassembly and ease of modification viruses are essentially naturally'
36
  • 'society the ornate latin style was the primary form of oration through the mid20th century after world war ii and the increased use of film and television the latin oration style began to fall out of favor this cultural change likely had to do with the rise of the scientific method and the emphasis on a plain style of speaking and writing even todays formal oratory is much less ornate than in the classical era ancient china had a delayed start to implementing rhetoric persuasion because there were no rhetoricians training students it was understood that chinese rhetoric was part of chinese philosophy which schools taught focusing on two concepts wen ” rhetoric and “ zhi ” thoughtful content ancient chinese rhetoric shows strong connections with modern public speaking as chinese rhetoric placed a high value on ethicsancient chinese rhetoric had three objectives i using language to reflect peoples feelings ii using language to be more pointed effective and impactful and iii using rhetoric as an aesthetic tool chinese rhetoric traditionally focused more on the written than the spoken word but both share similar characteristics of constructiona unique and key difference between chinese and western rhetoric is the audience targeted for persuasion in chinese rhetoric state rulers were the audience whereas western rhetoric targets the public another difference between chinese and western rhetoric practices is how a speaker establishes credibility or ethos in chinese rhetoric the speaker does not focus on individual credibility like western rhetoric instead the speaker focuses on collectivism by sharing personal experiences and establishing a connection between the speakers concern and the audiences interestchinese rhetoric analyses public speakers on three standards i tracing which is how well the speaker is doing compared to traditional speaking practices ii examination or how the speaker considers the audiences daily lives and iii practice which is how relevant the 
topic or argument is to the state society and people aristotle and one of his most famous writings rhetoric written in 350 bce have been used as a foundation for learning how to master the art of public speaking in his works rhetoric is the act of publicly persuading an audience rhetoric is similar to dialect he defines both as being acts of persuasion however dialect is the act of persuading someone in private whereas rhetoric is about persuading people in a public setting aristotle defines someone who practices rhetoric or a rhetorician as an individual who can comprehend persuasion and how it is appliedaristotle divides rhetoric into three elements i the speaker ii the topic or point of the speech and iii the audience aristotle also classifies oration into three types i political used to convince people to take or not take action ii forensic usually used in law related to accusing or defending someone and iii ceremonial which recognizes someone positively or negativelyaristotle breaks down the political category into'
  • 'in the relationship and creates mutual good communication so that some type of agreement becomes much more possibleone idea that rogers emphasized several times in his 1951 paper that is not mentioned in textbook treatments of rogerian argument is thirdparty intervention rogers suggested that a neutral third party instead of the parties to the conflict themselves could in some cases present one partys sympathetic understanding of the other to the otherrogerian argument is an application of rogers ideas about communication taught by rhetoric teachers who were inspired by rapoport but rogers ideas about communication have also been applied somewhat differently by many others for example marshall rosenberg created nonviolent communication a process of conflict resolution and nonviolent living after studying and working with rogers and other writing teachers used some of rogers ideas in developing expressivist theories of writing there are different opinions about whether rogerian rhetoric is like or unlike classical rhetoric from ancient greece and romeyoung becker and pike said that classical rhetoric and rapoports pavlovian strategy and freudian strategy all share the common goal of controlling or persuading someone else but the rogerian strategy has different assumptions about humanity and a different goal in young becker and pikes view the goal of rogerian rhetoric is to remove the obstacles — especially the sense of threat — to cooperative communication mutual understanding and mutual intellectual growth they considered this goal to be a new alternative to classical rhetoric they also said that classical rhetoric is used both in dyadic situations — when two parties are trying to understand and change each other — and in triadic situations — when one party is responding to an opponent but is trying to influence a third party such as an arbitrator or jury or public opinion — but rogerian rhetoric is specially intended for certain dyadic situations not for 
triadic situationsenglish professor andrea lunsford responding to young becker and pike in a 1979 article argued that the three principles of rogerian strategy that they borrowed from rapoport could be found in various parts of aristotles writings and so were already in the classical tradition she pointed to book i of aristotles rhetoric where he said that one must be able to understand and argue both sides of an issue and to his discussions of friendship and of the enthymeme in book ii and to similar passages in his topics she also saw some similarity to platos phaedrus other scholars have also found resonances between rogerian and platonic rhetorics of dialogueenglish professor paul g bator argued in 1980 that rogerian argument is more different from aristotles rhetoric than lunsford had concluded among the differences he noted the ari'
  • 'translated and currently indicated with ge corps saxothuringia munchen in munich corps hannoverania hannover ge corps berlin in alphabetic order with links to the english wikipedia on top links to the german wikipedia are in process of being translated and currently indicated with ge hans gorges ge1859 – 1946 president rektor of the technischen hochschule dresden 1914 – 1915 the predecessor of todays tu dresden as well as an influential physicist and professor for electrical engineering walther heise highestlevel government construction officer and construction manager for the erich mullerbuilding the jante building and the zeuner building at the tu dresden heinrich koch 1873 – 1945 highestlevel government construction officer and construction manager for the beyerbuilding at the tu dresden as well as the main building of saxonys capital city archive in dresden martin johann krause president rektor of the technischen hochschule dresden 1894 – 1896 and 1919 – 1920 the predecessor of todays tu dresden siegfried meurer ge 1908 – 1997 engineer and inventor of the flustermotor quiet diesel engine as well as honorary professor at the technical university rwth aachen hermann immanuel rietschel ge 1847 – 1914 founder of the heating and climate technology specialization in mechanical engineering in germany oskar august reuther president rektor of the th dresden 1932 – 1934 the predecessor of todays tu dresden hermann rietschel president rektor of the technischen hochschule berlin 1893 – 1894 the predecessor of todays tu berlin malwin roßberg highlevel government construction officer and construction manager for the fritz forster building and the walterkonig building at the tu dresdenfreiherr carl von wagner ge 1843 – 1907 main civil engineer overseeing the planning design and construction of the isthmus of tehuantepec railroad pass in central america fritz zeuner honorary senator at the th dresden carl zimmerer ge 1926 – 2001 international business consultant honorary 
doctorate at the whu otto beisheim school of management and member of the board of directors at the agrippina ruckversicherungs ag the klinggraffmedal is awarded for the combination of extraordinary accomplishments in academia involvement for the fraternity and proven leadership on local and preferentially national level an average of five medals is awarded each year nationwide chosen by a joint alumni committee of kscv and wsc representatives the award indirectly reflects back at the fraternity showing leaders in their field among the fraternitys brothers for more information see klinggraffmedaille ge at the association of german student corps alumni ge'
6
  • 'virialization timescales and the clusters age however several dynamical mechanisms to accelerate virialization compared to twobody interactions have been examined in starforming regions it is often observed that otype stars are preferentially located in the center of a young cluster after relaxation the speed of some low mass members can be greater than the escape velocity of the cluster which results in these members being lost to the cluster this process is called evaporation a similar phenomenon explains the loss of lighter gases from a planet such as hydrogen and helium from the earth — after equipartition some molecules of sufficiently light gases at the top of the atmosphere will exceed the escape velocity of the planet and be lost through evaporation most open clusters eventually dissipate as indicated by the fact that most existing open clusters are quite young globular clusters being more tightly bound appear to be more durable the relaxation time of the milky way galaxy is approximately 10 trillion years on the order of thousand times the age of the galaxy itself thus any observed mass segregation in our galaxy must be almost entirely primordial nbody problem – problem in physics and celestial mechanics virial theorem – theorem of statistical mechanics messier 67 – old open cluster in the constellation cancer willman 1 – ultralow luminosity dwarf galaxy orion nebula cluster – open cluster in the orion nebula in the constellation orion westerhout 40 – starforming region in the constellation serpens ian a bonnell melvyn b davies 1998 mass segregation in young stellar clusters monthly notices of the royal astronomical society 295 3 691 – 698 bibcode1998mnras295691b doi101046j13658711199801372xmerritt david 2013 dynamics and evolution of galactic nuclei princeton university press isbn 9780691121017 oclc 820123438spitzer lyman s jr 1987 dynamical evolution of globular clusters princeton university press isbn 0691083096white s d m april 1977 mass 
segregation and missing mass in the coma cluster monthly notices of the royal astronomical society 179 2 33 – 41 bibcode1977mnras17933w doi101093mnras179233'
  • 'superionic water also called superionic ice or ice xviii is a phase of water that exists at extremely high temperatures and pressures in superionic water water molecules break apart and the oxygen ions crystallize into an evenly spaced lattice while the hydrogen ions float around freely within the oxygen lattice the freely mobile hydrogen ions make superionic water almost as conductive as typical metals making it a superionic conductor it is one of the 19 known crystalline phases of ice superionic water is distinct from ionic water which is a hypothetical liquid state characterized by a disordered soup of hydrogen and oxygen ions while theorized for decades it was not until the 1990s that the first experimental evidence emerged for superionic water initial evidence came from optical measurements of laserheated water in a diamond anvil cell and from optical measurements of water shocked by extremely powerful lasers the first definitive evidence for the crystal structure of the oxygen lattice in superionic water came from xray measurements on lasershocked water which were reported in 2019if it were present on the surface of the earth superionic ice would rapidly decompress in may 2019 scientists at the lawrence livermore national laboratory llnl were able to synthesize superionic ice confirming it to be almost four times as dense as normal ice and black in colorsuperionic water is theorized to be present in the mantles of giant planets such as uranus and neptune as of 2013 it is theorized that superionic ice can possess two crystalline structures at pressures in excess of 50 gpa 7300000 psi it is predicted that superionic ice would take on a bodycentered cubic structure however at pressures in excess of 100 gpa 15000000 psi and temperatures above 3140 degrees fahrenheit it is predicted that the structure would shift to a more stable facecentered cubic lattice the ice appears black in color demontis et al made the first prediction for superionic water using 
classical molecular dynamics simulations in 1988 in 1999 cavazzoni et al predicted that such a state would exist for ammonia and water in conditions such as those existing on uranus and neptune in 2005 laurence fried led a team at lawrence livermore national laboratory to recreate the formative conditions of superionic water using a technique involving smashing water molecules between diamonds and super heating it with lasers they observed frequency shifts which indicated that a phase transition had taken place the team also created computer models which indicated that they had indeed created superionic water in 2013 hugh f wilson michael l wong and burkhard militzer at the university'
  • 'the illustris project is an ongoing series of astrophysical simulations run by an international collaboration of scientists the aim was to study the processes of galaxy formation and evolution in the universe with a comprehensive physical model early results were described in a number of publications following widespread press coverage the project publicly released all data produced by the simulations in april 2015 key developers of the illustris simulation have been volker springel maxplanckinstitut fur astrophysik and mark vogelsberger massachusetts institute of technology the illustris simulation framework and galaxy formation model has been used for a wide range of spinoff projects starting with auriga and illustristng both 2017 followed by thesan 2021 millenniumtng 2022 and tngcluster to be published in 2023 the original illustris project was carried out by mark vogelsberger and collaborators as the first largescale galaxy formation application of volker springels novel arepo codethe illustris project included largescale cosmological simulations of the evolution of the universe spanning initial conditions of the big bang to the present day 138 billion years later modeling based on the most precise data and calculations currently available are compared to actual findings of the observable universe in order to better understand the nature of the universe including galaxy formation dark matter and dark energythe simulation included many physical processes which are thought to be critical for galaxy formation these include the formation of stars and the subsequent feedback due to supernova explosions as well as the formation of supermassive black holes their consumption of nearby gas and their multiple modes of energetic feedbackimages videos and other data visualizations for public distribution are available at official media page the main illustris simulation was run on the curie supercomputer at cea france and the supermuc supercomputer at the leibniz 
computing centre germany a total of 19 million cpu hours was required using 8192 cpu cores the peak memory usage was approximately 25 tb of ram a total of 136 snapshots were saved over the course of the simulation totaling over 230 tb cumulative data volumea code called arepo was used to run the illustris simulations it was written by volker springel the same author as the gadget code the name is derived from the sator square this code solves the coupled equations of gravity and hydrodynamics using a discretization of space based on a moving voronoi tessellation it is optimized for running on large distributed memory supercomputers using an mpi approach in april 2015 eleven months'
3
  • 'a digging stick sometimes called a yam stick is a wooden implement used primarily by subsistencebased cultures to dig out underground food such as roots and tubers tilling the soil or burrowing animals and anthills it is a term used in archaeology and anthropology to describe similar implements which usually consists of little more than a sturdy stick which has been shaped or sharpened and sometimes hardened by being placed temporarily in a firefashioned with handles for pulling or pushing it forms a prehistoric plough and is also described as a type of hoe digging sticks more than 170000 years old made of boxwood by neanderthals have been found in italy in mexico and the mesoamerican region the digging stick was the most important agricultural tool throughout the regionthe coa stick normally flares out into a triangle at the end and is used for cultivating maize it is still used for agriculture in some indigenous communities with some newer 20thcentury versions having the addition of a little metal tipother digging sticks according to native americans of the columbia plateau have been used since time immemorial to gather edible roots like balsamroot bitterroot camas and varieties of biscuitroot typical digging sticks were and are still about 2 to 3 feet in length usually slightly arched with the bottom tip shaved off at an angle a 5 to 8 inch crosspiece made of antler bone or wood was fitted perpendicularly over the top of the stick allowing the use of two hands to drive the tool into the ground since contact with the europeans in the 19th century native americans have also adapted the use of a metal in making digging sticks australia digging sticks are used by many of the aboriginal peoples of australia for digging up roots and tubers and for ceremonial usethe gunditjmara people of western victoria used digging sticks also known as yam sticks for digging yams goannas ants and other foods out of the ground as well as for defence for settling disputes and for 
punishment purposes as part of customary law new guinea the kuman people eastcentral new guinea were horticulturists who used basic tools such as the digging stick wooden hoe and wooden spade in their daily lives eventually they started to use more sophisticated tools such as iron spades and pickaxestwo main types of digging sticks both shared a similar shape but differed in size a larger and heavier digging stick with a diameter of about 4 cm 16 in and 2 m 6 ft 7 in in length used for the purpose of turning over the soil surface for new gardens and a smaller and lighter digging stick with'
  • 'peak oil for example and tends to reject the very idea of modernity and the myth of progress that is so central to modernization thinking similarly kirkpatrick sale wrote about progress as a myth benefiting the few and a pending environmental doomsday for everyone an example is the philosophy of deep ecology sociologist robert nisbet said that no single idea has been more important than the idea of progress in western civilization for three thousand years and defines five crucial premises of the idea of progress value of the past nobility of western civilization worth of economictechnological growth faith in reason and scientificscholarly knowledge obtained through reason intrinsic importance and worth of life on earthsociologist p a sorokin said the ancient chinese babylonian hindu greek roman and most of the medieval thinkers supporting theories of rhythmical cyclical or trendless movements of social processes were much nearer to reality than the present proponents of the linear view unlike confucianism and to a certain extent taoism that both search for an ideal past the judeochristianislamic tradition believes in the fulfillment of history which was translated into the idea of progress in the modern age therefore chinese proponents of modernization have looked to western models according to thompson the late qing dynasty reformer kang youwei believed he had found a model for reform and modernisation in the ancient chinese classicsphilosopher karl popper said that progress was not fully adequate as a scientific explanation of social phenomena more recently kirkpatrick sale a selfproclaimed neoluddite author wrote exclusively about progress as a myth in an essay entitled five facets of a mythiggers 1965 says that proponents of progress underestimated the extent of mans destructiveness and irrationality while critics misunderstand the role of rationality and morality in human behaviorin 1946 psychoanalyst charles baudouin claimed modernity has retained the 
corollary of the progress myth the idea that the present is superior to the past while at the same time insisting that it is free of the myth the last two centuries were familiar with the myth of progress our own century has adopted the myth of modernity the one myth has replaced the other men ceased to believe in progress but only to pin their faith to more tangible realities whose sole original significance had been that they were the instruments of progress this exaltation of the present is a corollary of that very faith in progress which people claim to have discarded the present is superior to the past by definition only in a mythology of progress thus one retains the corollary while rejecting the principle there is only one way of retaining a position of'
  • 'the skhul and qafzeh hominins or qafzeh – skhul early modern humans are hominin fossils discovered in esskhul and qafzeh caves in israel they are today classified as homo sapiens among the earliest of their species in eurasia skhul cave is on the slopes of mount carmel qafzeh cave is a rockshelter near nazareth in lower galilee the remains found at es skhul together with those found at the nahal mearot nature reserve and mugharet elzuttiyeh were classified in 1939 by arthur keith and theodore d mccown as palaeoanthropus palestinensis a descendant of homo heidelbergensis the remains exhibit a mix of traits found in archaic and anatomically modern humans they have been tentatively dated at about 80000 – 120000 years old using electron paramagnetic resonance and thermoluminescence dating techniques the brain case is similar to modern humans but they possess brow ridges and a projecting facial profile like neanderthals they were initially regarded as transitional from neanderthals to anatomically modern humans or as hybrids between neanderthals and modern humans neanderthal remains have been found nearby at kebara cave that date to 61000 – 48000 years ago but it has been hypothesised that the skhulqafzeh hominids had died out by 80000 years ago because of drying and cooling conditions favouring a return of a neanderthal population suggesting that the two types of hominids never made contact in the region a more recent hypothesis is that skhulqafzeh hominids represent the first exodus of modern humans from africa around 125000 years ago probably via the sinai peninsula and that the robust features exhibited by the skhulqafzeh hominids represent archaic sapiens features rather than neanderthal features the discovery of modern human made tools from about 125000 years ago at jebel faya united arab emirates in the arabian peninsula may be from an even earlier exit of modern humans from africa in january 2018 it was announced that modern human finds at the mount carmel 
cave of misliya discovered in 2002 had been dated to around 185000 years ago giving an even earlier date for an outofafrica migrationian wallace and john shea have devised a methodology for examining the various middle paleolithic core assemblages present at the levant site in order to test whether the different hominid populations had distinct mobility patterns they use a ratio of formal and expedient cores within assemblages'
9
  • 'a prophage is a bacteriophage often shortened to phage genome that is integrated into the circular bacterial chromosome or exists as an extrachromosomal plasmid within the bacterial cell integration of prophages into the bacterial host is the characteristic step of the lysogenic cycle of temperate phages prophages remain latent in the genome through multiple cell divisions until activation by an external factor such as uv light leading to production of new phage particles that will lyse the cell and spread as ubiquitous mobile genetic elements prophages play important roles in bacterial genetics and evolution such as in the acquisition of virulence factors upon detection of host cell damage by uv light or certain chemicals the prophage is excised from the bacterial chromosome in a process called prophage induction after induction viral replication begins via the lytic cycle in the lytic cycle the virus commandeers the cells reproductive machinery the cell may fill with new viruses until it lyses or bursts or it may release the new viruses one at a time in an exocytotic process the period from infection to lysis is termed the latent period a virus following a lytic cycle is called a virulent virus prophages are important agents of horizontal gene transfer and are considered part of the mobilome genes are transferred via transduction as the prophage genome is imperfectly excised from the host chromosome and integrated into a new host specialized transduction or as fragments of host dna are packaged into the phage particles and introduced into a new host generalized transduction all families of bacterial viruses that have circular singlestranded or doublestranded dna genomes or replicate their genomes through rolling circle replication eg caudovirales have temperate members zygotic induction occurs when a bacterial cell carrying the dna of a bacterial virus transfers its own dna along with the viral dna prophage into the new host cell this has the effect of 
causing the host cell to break apart the dna of the bacterial cell is silenced before entry into the cell by a repressor protein which is encoded for by the prophage upon the transfer of the bacterial cells dna into the host cell the repressor protein is no longer encoded for and the bacterial cells original dna is then turned on in the host cell this mechanism eventually will lead to the release of the virus as the host cell splits open and the viral dna is able to spread this new discovery provided key insights into bacterial conjugation and contributed to the early repression model of gene regulation which provided an explanation'
  • 'gut microbiota gut microbiome or gut flora are the microorganisms including bacteria archaea fungi and viruses that live in the digestive tracts of animals the gastrointestinal metagenome is the aggregate of all the genomes of the gut microbiota the gut is the main location of the human microbiome the gut microbiota has broad impacts including effects on colonization resistance to pathogens maintaining the intestinal epithelium metabolizing dietary and pharmaceutical compounds controlling immune function and even behavior through the gut – brain axis the microbial composition of the gut microbiota varies across regions of the digestive tract the colon contains the highest microbial density of any humanassociated microbial community studied so far representing between 300 and 1000 different species bacteria are the largest and to date best studied component and 99 of gut bacteria come from about 30 or 40 species up to 60 of the dry mass of feces is bacteria over 99 of the bacteria in the gut are anaerobes but in the cecum aerobic bacteria reach high densities it is estimated that the human gut microbiota have around a hundred times as many genes as there are in the human genome in humans the gut microbiota has the largest numbers and species of bacteria compared to other areas of the body the approximate number of bacteria composing the gut microbiota is about 10131014 in humans the gut flora is established at one to two years after birth by which time the intestinal epithelium and the intestinal mucosal barrier that it secretes have codeveloped in a way that is tolerant to and even supportive of the gut flora and that also provides a barrier to pathogenic organismsthe relationship between some gut microbiota and humans is not merely commensal a nonharmful coexistence but rather a mutualistic relationship 700 some human gut microorganisms benefit the host by fermenting dietary fiber into shortchain fatty acids scfas such as acetic acid and butyric acid which 
are then absorbed by the host intestinal bacteria also play a role in synthesizing vitamin b and vitamin k as well as metabolizing bile acids sterols and xenobiotics the systemic importance of the scfas and other compounds they produce are like hormones and the gut flora itself appears to function like an endocrine organ dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditionsthe composition of human gut microbiota changes over time when'
  • 'anaerobiosis has a significant impact on the production of virulence factors of this organism there is hope among some humans that the therapeutic enzymatic degradation of the signaling molecules will be possible when treating illness caused by biofilms and prevent the formation of such biofilms and possibly weaken established biofilms disrupting the signaling process in this way is called quorum sensing inhibition acinetobacter sp it has recently been found that acinetobacter sp also show quorum sensing activity this bacterium an emerging pathogen produces ahls acinetobacter sp shows both quorum sensing and quorum quenching activity it produces ahls and can also degrade the ahl molecules aeromonas sp this bacterium was previously considered a fish pathogen but it has recently emerged as a human pathogen aeromonas sp have been isolated from various infected sites from patients bile blood peritoneal fluid pus stool and urine all isolates produced the two principal ahls nbutanoylhomoserine lactone c4hsl and nhexanoyl homoserine lactone c6hsl it has been documented that aeromonas sobria has produced c6hsl and two additional ahls with nacyl side chain longer than c6 yersinia the yenr and yeni proteins produced by the gammaproteobacterium yersinia enterocolitica are similar to aliivibrio fischeri luxr and luxi yenr activates the expression of a small noncoding rna yens yens inhibits yeni expression and acylhomoserine lactone production yenryeniyens are involved in the control of swimming and swarming motility threedimensional structures of proteins involved in quorum sensing were first published in 2001 when the crystal structures of three luxs orthologs were determined by xray crystallography in 2002 the crystal structure of the receptor luxp of vibrio harveyi with its inducer ai2 which is one of the few biomolecules containing boron bound to it was also determined many bacterial species including e coli an enteric bacterium and model organism for gramnegative 
bacteria produce ai2 a comparative genomic and phylogenetic analysis of 138 genomes of bacteria archaea and eukaryotes found that the luxs enzyme required for ai2 synthesis is widespread in bacteria while the periplasmic binding protein luxp is present only in vibrio strains leading to the conclusion that either other organisms may use components different from the ai2 signal trans'
33
  • 'in parapsychology and spiritualism an apport is the alleged paranormal transference of an article from one place to another or an appearance of an article from an unknown source that is often associated with poltergeist activity or seances apports reported during seances have been found to be the result of deliberate fraud no medium or psychic has demonstrated the manifestation of an apport under scientifically controlled conditions a famous apport fraud is attributed to charles bailey 1870 – 1947 during a seance bailey produced two live birds out of thin air but was undone when the dealer who sold him the birds appeared in the crowd common objects that are produced are stones flowers perfumes and animals these objects are said to be gifts from the spiritsin march 1902 in berlin police officers interrupted a seance of the apport medium frau anna rothe her hands were grabbed and she was wrestled to the ground a female police assistant physically examined rothe and discovered 157 flowers as well as oranges and lemons hidden in her petticoat she was arrested and charged with fraud after a trial lasting six days she was sentenced to eighteen months imprisonmentin 1926 heinrich melzer was exposed as a fraud as he was caught in the seance room with small stones attached to the back of his ears by flesh coloured tape according to neurologist terence hines some female mediums went so far as to conceal in their vagina or anus objects to be apported during the seance and gauzy fabric that would become ectoplasm during the seance these were places that victorian gentlemen no matter how skeptical were highly unlikely to ask to searchthere are many cases where apports have been smuggled into the seance room other apport mediums that were exposed as frauds were lajos pap and maria silbert evocation indriði indriðason list of topics characterized as pseudoscience materialization psychokinesis sleight of hand teleportation'
  • 'mediumship is the practice of purportedly mediating communication between familiar spirits or spirits of the dead and living human beings practitioners are known as mediums or spirit mediums there are different types of mediumship or spirit channelling including seance tables trance and ouija belief in psychic ability is widespread despite the absence of empirical evidence for its existence scientific researchers have attempted to ascertain the validity of claims of mediumship for more than one hundred years and have consistently failed to confirm them as late as 2005 an experiment undertaken by the british psychological society reaffirmed that test subjects who selfidentified as mediums demonstrated no mediumistic abilitymediumship gained popularity during the nineteenth century when ouija boards were used as a source of entertainment investigations during this period revealed widespread fraud — with some practitioners employing techniques used by stage magicians — and the practice began to lose credibility fraud is still rife in the medium or psychic industry with cases of deception and trickery being discovered to this dayseveral different variants of mediumship have been described arguably the bestknown forms involve a spirit purportedly taking control of a mediums voice and using it to relay a message or where the medium simply hears the message and passes it on other forms involve materializations of the spirit or the presence of a voice and telekinetic activity the practice is associated with several religiousbelief systems such as shamanism vodun spiritualism spiritism candomble voodoo umbanda and some new age groups in spiritism and spiritualism the medium has the role of an intermediary between the world of the living and the world of spirit mediums claim that they can listen to and relay messages from spirits or that they can allow a spirit to control their body and speak through it directly or by using automatic writing or drawing spiritualists 
classify types of mediumship into two main categories mental and physical mental mediums purportedly tune in to the spirit world by listening sensing or seeing spirits or symbols physical mediums are believed to produce materialization of spirits apports of objects and other effects such as knocking rapping bellringing etc by using ectoplasm created from the cells of their bodies and those of seance attendeesduring seances mediums are said to go into trances varying from light to deep that permit spirits to control their mindschanneling can be seen as the modern form of the old mediumship where the channel or channeller purportedly receives messages from teachingspirit an ascended master from god or from an angelic entity but essentially through the filter of his own waking'
  • 'result of external spirit agencies the psychical researcher thomson jay hudson in the law of psychic phenomena 1892 and theodore flournoy in his book spiritism and psychology 1911 wrote that all kinds of mediumship could be explained by suggestion and telepathy from the medium and that there was no evidence for the spirit hypothesis the idea of mediumship being explained by telepathy was later merged into the superesp hypothesis of mediumship which is currently advocated by some parapsychologists in their book how to think about weird things critical thinking for a new age authors theodore schick and lewis vaughn have noted that the spiritualist and esp hypothesis of mediumship has yielded no novel predictions assumes unknown entities or forces and conflicts with available scientific evidencescientists who study anomalistic psychology consider mediumship to be the result of fraud and psychological factors research from psychology for over a hundred years suggests that where there is not fraud mediumship and spiritualist practices can be explained by hypnotism magical thinking and suggestion trance mediumship which according to spiritualists is caused by discarnate spirits speaking through the medium can be explained by dissociative identity disorderillusionists such as joseph rinn have staged fake seances in which the sitters have claimed to have observed genuine supernatural phenomena albert moll studied the psychology of seance sitters according to wolffram 2012 moll argued that the hypnotic atmosphere of the darkened seance room and the suggestive effect of the experimenters social and scientific prestige could be used to explain why seemingly rational people vouchsafed occult phenomena the psychologists leonard zusne and warren jones in their book anomalistic psychology a study of magical thinking 1989 wrote that spirits controls are the products of the mediums own psychological dynamicsa fraudulent medium may obtain information about their sitters by 
secretly eavesdropping on sitters conversations or searching telephone directories the internet and newspapers before the sittings a technique called cold reading can also be used to obtain information from the sitters behavior clothing posture and jewellerythe psychologist richard wiseman has written cold reading also explains why psychics have consistently failed scientific tests of their powers by isolating them from their clients psychics are unable to pick up information from the way those clients dress or behave by presenting all of the volunteers involved in the test with all of the readings they are prevented from attributing meaning to their own reading and therefore cant identify it from readings made for others as a result the type of highly successful hit rate that psychics enjoy on a daily basis comes crashing down and the truth emerges –'
Label 18
  • 'a poster is a large sheet that is placed either on a public space to promote something or on a wall as decoration typically posters include both textual and graphic elements although a poster may be either wholly graphical or wholly text posters are designed to be both eyecatching and informative posters may be used for many purposes they are a frequent tool of advertisers particularly of events musicians and films propagandists protestors and other groups trying to communicate a message posters are also used for reproductions of artwork particularly famous works and are generally lowcost compared to the original artwork the modern poster as we know it however dates back to the 1840s and 1850s when the printing industry perfected colour lithography and made mass production possible according to the french historian max gallo for over two hundred years posters have been displayed in public places all over the world visually striking they have been designed to attract the attention of passersby making us aware of a political viewpoint enticing us to attend specific events or encouraging us to purchase a particular product or service the modern poster as we know it however dates back to the midnineteenth century when several separate but related changes took place first the printing industry perfected colour lithography and made mass production of large and inexpensive images possible second government censorship of public spaces in countries such as france was lifted and finally advertisers began to market massproduced consumer goods to a growing populace in urban areasin little more than a hundred years writes poster expert john barnicoat it has come to be recognized as a vital art form attracting artists at every level from painters such as toulouselautrec and mucha to theatrical and commercial designers they have ranged in styles from art nouveau symbolism cubism and art deco to the more formal bauhaus and the often incoherent hippie posters of the 1960s 
posters in the form of placards and posted bills have been used since earliest times primarily for advertising and announcements purely textual posters have a long history they advertised the plays of shakespeare and made citizens aware of government proclamations for centuries the great revolution in posters however was the development of printing techniques that allowed for cheap mass production and printing notably including the technique of lithography which was invented in 1796 by the german alois senefelder the invention of lithography was soon followed by chromolithography which allowed for mass editions of posters illustrated in vibrant colors to be printed by the 1890s the technique had spread throughout europe a number of noted french artists created poster art in this period foremost amongst them henri de toulouselautrec jules'
  • 'rules if an individual reads an english word they have never seen they use the law of past experience to interpret the letters l and i as two letters beside each other rather than using the law of closure to combine the letters and interpret the object as an uppercase u music an example of the gestalt movement in effect as it is both a process and result is a music sequence people are able to recognise a sequence of perhaps six or seven notes despite them being transposed into a different tuning or key an early theory of gestalt grouping principles in music was composertheorist james tenneys metahodos 1961 auditory scene analysis as developed by albert bregman further extends a gestalt approach to the analysis of sound perception gestalt psychology contributed to the scientific study of problem solving in fact the early experimental work of the gestaltists in germany marks the beginning of the scientific study of problem solving later this experimental work continued through the 1960s and early 1970s with research conducted on relatively simple but novel for participants laboratory tasks of problem solvinggiven gestalt psychologys focus on the whole it was natural for gestalt psychologists to study problemsolving from the perspective of insight seeking to understand the process by which organisms sometimes suddenly transition from having no idea how to solve a problem to instantly understanding the whole problem and its solution 13 in a famous set of experiments kohler gave chimpanzees some boxes and placed food high off the ground after some time the chimpanzees appeared to suddenly realize that they could stack the boxes on top of each other to reach the food 362 max wertheimer distinguished two kinds of thinking productive thinking and reproductive thinking 456 361 productive thinking is solving a problem based on insight — a quick creative unplanned response to situations and environmental interaction reproductive thinking is solving a problem deliberately 
based on previous experience and knowledge reproductive thinking proceeds algorithmically — a problem solver reproduces a series of steps from memory knowing that they will lead to a solution — or by trial and error 361 karl duncker another gestalt psychologist who studied problem solving 370 coined the term functional fixedness for describing the difficulties in both visual perception and problem solving that arise from the fact that one element of a whole situation already has a fixed function that has to be changed in order to perceive something or find the solution to a problemabraham luchins also studied problem solving from the perspective of gestalt psychology he is well known for his research on the role of mental set einstellung effect which he demonstrated'
  • 'were veterans of sanrios theatrical films unit which gave him confidence in their abilities more than half of the background paintings for the film were made on site at gainax rather than assigning the task to staff working externally as ogura felt the worldview and details of the films aesthetic were easier for him to communicate to artists in person giving as an example the color subtleties as the color scheme in royal space force was subdued if a painting needed more of a bluish cast to it he couldnt simply instruct the artist to add more blue toshio okada described the appearance of the world in which royal space force takes place as having been shaped in stages by three main artists first its major color elements blue and brown were determined by sadamoto then its architectural styles and artistic outlook were designed by takashi watabe and finally ogura gave it a sense of life through depicting its light shadow and air it was noted also that the films world displays different layers of time in its designs the main motifs being art deco but with older art nouveau and newer postmodern elements also present yamaga expressed the view that ogura being a tokyo native allowed him to do a good job on the films city scenes yet ogura himself described the task as difficult while he attempted to sketch out as much of the city as possible its urban aesthetic was so cluttered that it was difficult for him to determine vanishing point and perspective ogura commented that although the film depicted a different world theres nothing that youd call scifi stuff its everyday normal life like our own surroundings i wanted to express that messy impression as art director he also laid particular emphasis on attempting to convey the visual texture of the worlds architecture and interior design remarking that he was amazed at how watabes original drawings of buildings contained detailed notes on the structural and decorative materials used in them inspiring ogura to then express 
in his paintings such aspects as the woodwork motifs prominent in the royal space force headquarters or by contrast the metallic elements in the room where the republic minister nereddon tastes wine watabe and ogura would collaborate again in 1995 on constructing the cityscapes of ghost in the shellogura theorized that the background paintings in royal space force were a result not only of the effort put into the film but the philosophy behind the effort i think this shows what you can make if you take animation seriously yamaga often said he wanted to dispense with the usual symbolic bits it isnt about saying that because its evening the colors should be signified in this way not every sunset is the same'
Label 22
  • 'afghanistan southwestern angola ethiopia eritrea eastern chad darfur sudan iraq gabon togo and the sultanate of omansince 2018 alain gachet and his company rti exploration has been working together with minae ministry of environment energy and telecommunications of costa rica and the us geological service to evaluate aquifers that will provide costa rica with clearer information related to its underground water sources the project “ mapping of the ground water hydric resource in costa rica ” uses watex high technology which provides images of costa ricas subterraneous water sources bypassing interference from surface obstacles like infrastructure and vegetation this will allow costa rica to develop a better strategy for using its underground water sources to address the issues of drought and climate change awards and honors in january 2015 upon the recommendation of segolene royal the french minister of ecology at that time alain gachet is awarded by yves coppens the rank of chevaliers of the french legion of honor in the us he was elected in the “ space technology hall of fame ” by nasa and the space foundation in 2016 for using modern space technologies for the progress of humanityin 2017 endorsed by the organizations that had previously accompanied him but also by george washington university the university of turin and the canadian space agency gachet calls for an increase in the drilling of aquifers and the establishment of regulation on rapid agricultural development and relates his message through television lectures and in writing he has written a memoir of his experiences published in 2015 by jc lattes jc lattes of his experiences le sourcier qui fait jaillir l ’ eau du desertonly aquifers that are renewable can reasonably be exploited because fossil water must be preserved for future generations according to alain gachet however they represent together by far the largest freshwater deposit on the planet there is more fresh water hidden underground 
than visible on the lakes rivers and glaciers up to thirty times more according to nasa estimations alain gachet stated in article published in the independent that over a billion people have no easy access to drinking water and that 18 million children die each year from diseases linked to drinking bad water he added that in another half century there will be 55 billion two thirds of the population living in a state of severe watershortage in an article published in the french magazine ouest france gachet points out that there is enough deep groundwater in africa to transfigure the entire face of the continent enough water to stop many wars rebuild agriculture and restore dignity and hope to millions of men translated from statement in french he directly links the european migration crisis of the 21st century to the desertification of the sahel and'
  • 'period from may 2010 to july 2011 the dissolved oxygen level for the streams in the fishing creek watershed was highest in february 2011 and lowest in june july or august 2010 depending on the stream a site on fishing creek near camp lavigne had slightly less fluctuation there it ranged from 8 to 17 milligrams per liter of dissolved oxygenthe concentration of hydrogen ions in the waters of fishing creek near bloomsburg between 2002 and 2012 ranged from 000001 to 000153 milligrams per liter the date of the lowest concentration of hydrogen ions at that location was on march 1 and july 16 2003 the date of the highest concentration of hydrogen ions at the location on the creek was december 17 2003 the average concentration of hydrogen ions was 00001 milligrams per literthe total concentration of nitrogen in the waters of fishing creek at the gauging station near bloomsburg between 2002 and 2012 ranged from 052 to 28 milligrams per liter the date during which the lowest concentration of nitrogen occurred was october 14 2009 the date of the highest concentration was on january 13 2003 the average concentration of nitrogen between 2002 and 2012 was 1212 milligrams per liter the concentration of dissolved oxygen in fishing creek near bloomsburg between 2002 and 2012 ranged between 41 and 171 milligrams per liter the lowest concentration of dissolved oxygen in the creek during that period and at that location occurred on july 25 2005 the highest concentration of dissolved oxygen occurred on january 6 2009 the average concentration of dissolved oxygen was 10942 milligrams per literfishing creek contains dissolved aluminum but in most places not enough to be toxic although some of its tributaries have aluminum concentrations approaching lethal levels for fish the only tributary of fishing creek which contains dissolved aluminum in concentrations of over 100 micrograms per liter is east branch fishing creek fishing creek itself and all of its other tributaries had 
dissolved aluminum concentrations of less than 70 micrograms per liter this concentration is linked to the thawing of soils as demonstrated by the fact that aluminum levels in fishing creek peak in march and april and drop to almost zero in the summerthe total concentration of calcium in the waters of fishing creek at the gauging station near bloomsburg between 2002 and 2012 ranged from 55 milligrams per liter to 26 milligrams per liter the lowest concentration of calcium occurred on february 6 2008 the highest concentration of calcium occurred on june 18 2012 the average concentration of calcium was 7532 milligrams per literthe total concentration of magnesium in the waters of fishing creek at the gauging station near bloomsburg between'
  • 'pore size can also be an influence of volume in woodbased materials however almost all water is adsorbed at humidities below 98 rh in biological applications there can also be a distinction between physisorbed water and free water — the physisorbed water being that closely associated with and relatively difficult to remove from a biological material the method used to determine water content may affect whether water present in this form is accounted for for a better indication of free and bound water the water activity of a material should be considered water molecules may also be present in materials closely associated with individual molecules as water of crystallization or as water molecules which are static components of protein structure in soil science hydrology and agricultural sciences water content has an important role for groundwater recharge agriculture and soil chemistry many recent scientific research efforts have aimed toward a predictiveunderstanding of water content over space and time observations have revealed generally that spatial variance in water content tends to increase as overall wetness increases in semiarid regions to decrease as overall wetness increases in humid regions and to peak under intermediate wetness conditions in temperate regions there are four standard water contents that are routinely measured and used which are described in the following table and lastly the available water content θa which is equivalent to θa ≡ θfc − θpwpwhich can range between 01 in gravel and 03 in peat agriculture when a soil becomes too dry plant transpiration drops because the water is increasingly bound to the soil particles by suction below the wilting point plants are no longer able to extract water at this point they wilt and cease transpiring altogether conditions where soil is too dry to maintain reliable plant growth is referred to as agricultural drought and is a particular focus of irrigation management such conditions are common in 
arid and semiarid environments some agriculture professionals are beginning to use environmental measurements such as soil moisture to schedule irrigation this method is referred to as smart irrigation or soil cultivation groundwater in saturated groundwater aquifers all available pore spaces are filled with water volumetric water content porosity above a capillary fringe pore spaces have air in them too most soils have a water content less than porosity which is the definition of unsaturated conditions and they make up the subject of vadose zone hydrogeology the capillary fringe of the water table is the dividing line between saturated and unsaturated conditions water content in the capillary fringe decreases with increasing distance above the phreatic surface the flow of water through and unsaturated zone in soils often involves a process of'
Label 10
  • 'three serine residues can be phosphoylated on the said protein and hence phosphomimetic mutants are useful to probe the function of the individual phosphorylation'
  • '##cules unknown to naturethe main challenges of designing high quality mutant libraries have shown significant progress in the recent past this progress has been in the form of better descriptions of the effects of mutational loads on protein traits also computational approaches have showed large advances in the innumerably large sequence space to more manageable screenable sizes thus creating smart libraries of mutants library size has also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination finally a significant step forward toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functionsgenerally directed evolution may be summarized as an iterative two step process which involves generation of protein mutant libraries and high throughput screening processes to select for variants with improved traits this technique does not require prior knowledge of the protein structure and function relationship directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins random mutations can be introduced using either error prone pcr or site saturation mutagenesis mutants may also be generated using recombination of multiple homologous genes nature has evolved a limited number of beneficial sequences directed evolution makes it possible to identify undiscovered protein sequences which have novel functions this ability is contingent on the proteins ability to tolerant amino acid residue substitutions without compromising folding or stabilitydirected evolution methods can be broadly categorized into two strategies asexual and sexual methods asexual methods do not generate any cross links between parental genes single genes are used to create mutant libraries using various mutagenic techniques these asexual methods can 
produce either random or focused mutagenesis random mutagenesis random mutagenic methods produce mutations at random throughout the gene of interest random mutagenesis can introduce the following types of mutations transitions transversions insertions deletions inversion missense and nonsense examples of methods for producing random mutagenesis are below error prone pcr error prone pcr utilizes the fact that taq dna polymerase lacks 3 to 5 exonuclease activity this results in an error rate of 0001 – 0002 per nucleotide per replication this method begins with choosing the gene or the area within a gene one wishes to mutate next the extent of error required is calculated based upon the type and extent of activity one wishes to generate this extent of error determines the error prone pcr strategy to be employed following pcr the genes are cloned into a plasmid and introduced'
  • 'each of these proteins produced during viral infection appears to be critical for normal phage t4 morphogenesis phage t4 encoded proteins that determine virion structure include major structural components minor structural components and nonstructural proteins that catalyze specific steps in the morphogenesis sequence the study of ma structure and function is challenging in particular because of their megadalton size but also because of their complex compositions and varying dynamic natures most have had standard chemical and biochemical methods applied methods of protein purification and centrifugation chemical and electrochemical characterization etc in addition their methods of study include modern proteomic approaches computational and atomicresolution structural methods eg xray crystallography smallangle xray scattering saxs and smallangle neutron scattering sans force spectroscopy and transmission electron microscopy and cryoelectron microscopy aaron klug was recognized with the 1982 nobel prize in chemistry for his work on structural elucidation using electron microscopy in particular for proteinnucleic acid mas including the tobacco mosaic virus a structure containing a 6400 base ssrna molecule and 2000 coat protein molecules the crystallization and structure solution for the ribosome mw 25 mda an example of part of the protein synthetic machinery of living cells was object of the 2009 nobel prize in chemistry awarded to venkatraman ramakrishnan thomas a steitz and ada e yonath finally biology is not the sole domain of mas the fields of supramolecular chemistry and nanotechnology each have areas that have developed to elaborate and extend the principles first demonstrated in biologic mas of particular interest in these areas has been elaborating the fundamental processes of molecular machines and extending known machine designs to new types and processes multistate modeling of biomolecules quaternary structure multiprotein complex organelle the 
broadest definition of organelle includes not only membrane bound cellular structures but also very large biomolecular complexes multistate modeling of biomolecules'
Label 34
  • 'that the child uses to imagine the information 1 visual evocation includes the transformation of textual information into visual information for example drawings diagrams and the use of colour in underlining2 auditory evocation includes oral or mental repetition of information for example listening to lessons on an audio device3 kinaesthetic evocation includes movements feelings smells and tastes for example the use of gestures and movements to learnconstituting mental management is the organisation improvement and use of these activities and processes cognitive psychology is the study of the mind as an information processor it rose to great importance in the mid1950s due to the dissatisfaction with the behaviourist psychological models the emphasis of psychology shifted away from studying the mind in favour of understanding human information processing relating to perception attention language memory thinking and consciousness the main concern of cognitive psychology is how information is received from the senses processed by the brain and how this processing directs how humans behave it is a multifaceted approach in which various cognitive functions work together to understand not only individuals and groups but also society as a whole mental management falls within the cognitive model of psychology and needs to be distinguished from the behaviourist model which considers mental processes to be unobservable and therefore akin to a ‘ black box ’ more specifically the behaviourist model assumes that the process linking behaviour to the stimulus cannot be studied it therefore describes the conceptualisation of psychological disorders in terms of overt behaviour patterns produced by learning and the influence of reinforcement contingencies treatment techniques associated with this approach include systematic desensitisation and modelling and focusing on modifying ineffective or maladaptive patterns in contrast the cognitive model represents a theoretical view of 
thought and mental operations provides explanations for observed phenomena and makes predictions about behavioural consequences specifically it describes the mental events that connect the input from the environment with the behavioural output the approach assumes that people are continually creating and accessing internal representations models of what they are experiencing in the world for the purposes of perception comprehension and behaviour selection action treatment techniques associated with this approach include cognitive behavioural therapy which involves defining observing analysing and interpreting patterns and reframing these as more optimal ways of thinking there are five different processes of mental management which la garanderie distinguishes as different types of mental gestures in psychology attention is defined as “ a state in which cognitive resources are focused on certain aspects of the environment rather than on others and the central nervous system is in a state of readiness to respond to stimuli in mental management it describes the essential first step required to enable the subsequent step of retrieving memorized information the gesture of attention is linked to the perception from our five'
  • 'is vital to have detailed regard to the legallyrequired curriculum of the country in which the scheme of work is to be delivered these are typically defined in detail by subject understanding the subtleties and nuances of their presentation is of vital importance when defining the most useful schemes of work for maintained schools and exam boards in england the national curriculum is set by department for education such that all children growing up in england have a broadly similar education the curriculum for primary education ages 45 to 11 and secondary education ages 11 to 18 in england is divided into five key stages key stages 1 and 2 are delivered at primary schools key stages 3 4 and 5 are delivered at secondary schools england mathematics english primary schools key stages 1 2 ages 5 to 11 the expectations for delivering the national curriculum for mathematics in england at key stages 1 and 2 are tightly defined with clear timelinked objectives the department for education has provided an initial annual scheme of work or set of expectations for each schoolacademic year from year 1 age 56 to and including year 6 age 1011 this does not specify the order of teaching each topic within each year but does provide guidance and does set out the expectations of what is to be taught and learned by the end of each year of students primary education english secondary schools key stages 3 and 4 ages 11 to 16 the national curriculum for mathematics in england is also tightly defined at key stages 3 and 4 however each individual english schools mathematics department is given greater freedom to decide when and how to deliver the content by contrast to the national curriculum for englands primary schools there are no annual expectations instead guidance is set by reference to what is to be taught and learned by the end of key stage 3 the end of year 9 ages 1314 and by the end of key stage 4 the end of year 11 ages 1516 it is notable that the curriculum for key stage 4 
is intended by the department for education to examine all learning from key stages 1 to 4 in particular topics listed in key stage 3 explicitly form part of the curriculum for key stage 4 such that the foundations of earlier learning are reinforced whilst building upon them accordingly students who have struggled with the hardertounderstand elements in the past are given the opportunity to master the key stage 3 content whilst others build higher in parallel english secondary schools gcses typically age 1516 it is mandatory in england for students to have taken a gcse in mathematics by the year of their sixteenth birthday it is notable that the subject content agreed between the department of education and the office of qualifications and examinations regulation ofqual the exam board regulator for gcses as'
  • 'it is helpful to leave the accustomed environment of the family in town and to go to quiet surroundings in the country close to nature with the development of mobile touchscreen devices some montessori activities have been made into mobile apps mobile applications have been criticized due to the lack of physical interaction with objectsalthough not supported by all most montessori schools use digital technology with the purpose of preparing students for their future technology is not used in the same way as it would be used in a regular classroom instead it is used in meaningful ways students are not to simply replace realworld activities with hightech ones such as the applications mentioned earlierdevices are not commonly used when students are being taught when students have a question about something they try to solve it themselves instead of turning to a device to try and figure out an answer when a device is used by a student the teacher expects them to use it in a meaningful way there has to be a specific purpose behind using technology before using a device the student should ask themselves if using this device is the best way or if it is the only way to do a certain task if the answer is yes to both of those questions then that would be considered using technology in a meaningful way montessori perceived specific elements of human psychology which her son and collaborator mario montessori identified as human tendencies in 1957 there is some debate about the exact list but the following are clearly identified planes of development montessori observed four distinct periods or planes in human development extending from birth to 6 years from 6 to 12 from 12 to 18 and from 18 to 24 she saw different characteristics learning modes and developmental imperatives active in each of these planes and called for educational approaches specific to each periodthe first plane extends from birth to around six years of age during this period montessori observed that the 
child undergoes striking physical and psychological development the firstplane child is seen as a concrete sensorial explorer and learner engaged in the developmental work of psychological selfconstruction and building functional independence montessori introduced several concepts to explain this work including the absorbent mind sensitive periods and normalization montessori described the young childs behavior of effortlessly assimilating the sensorial stimuli of his or her environment including information from the senses language culture and the development of concepts with the term absorbent mind she believed that this is a power unique to the first plane and that it fades as the child approached age six montessori also observed and discovered periods of special sensitivity to particular stimuli during this time which she called the sensitive periods in montessori education the classroom environment responds'
Label 7
  • 'ability to filter out unattended stimuli reaches its prime in young adulthood in reference to the cocktail party phenomenon older adults have a harder time than younger adults focusing in on one conversation if competing stimuli like subjectively important messages make up the background noisesome examples of messages that catch peoples attention include personal names and taboo words the ability to selectively attend to ones own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months along with multiple experts in the field anne treisman states that people are permanently primed to detect personally significant words like names and theorizes that they may require less perceptual information than other words to trigger identification another stimulus that reaches some level of semantic processing while in the unattended channel is taboo words these words often contain sexually explicit material that cause an alert system in people that leads to decreased performance in shadowing tasks taboo words do not affect children in selective attention until they develop a strong vocabulary with an understanding of language selective attention begins to waver as we get older older adults have longer latency periods in discriminating between conversation streams this is typically attributed to the fact that general cognitive ability begins to decay with old age as exemplified with memory visual perception higher order functioning etceven more recently modern neuroscience techniques are being applied to study the cocktail party problem some notable examples of researchers doing such work include edward chang nima mesgarani and charles schroeder using electrocorticography jonathan simon mounya elhilali adrian kc lee shihab shamma barbara shinncunningham daniel baldauf and jyrki ahveninen using magnetoencephalography jyrki ahveninen edmund lalor and barbara shinncunningham using electroencephalography and jyrki ahveninen and 
lee m miller using functional magnetic resonance imaging not all the information presented to us can be processed in theory the selection of what to pay attention to can be random or nonrandom for example when driving drivers are able to focus on the traffic lights rather than on other stimuli present in the scene in such cases it is mandatory to select which portion of presented stimuli is important a basic question in psychology is when this selection occurs this issue has developed into the early versus late selection controversy the basis for this controversy can be found in the cherry dichotic listening experiments participants were able to notice physical changes like pitch or change in gender of the speaker and stimuli like their own name in the unattended channel this brought about the question of whether the meaning semantics of the'
  • '##rate survival although embryos can adapt to normal changes in their environment evidence suggests they are not well adapted to endure the negative effects of noise pollution studies have been conducted on the sea hare to determine the effects of boat noise on the early stages of life and the development of embryos researchers have studied sea hares from the lagoon of moorea island french polynesia in the study recordings of boat noise were made by using a hydrophone in addition recordings of ambient noise were made that did not contain boat noise in contrast to ambient noise playbacks mollusks exposed to boat noise playbacks had a 21 reduction in embryonic development additionally newly hatched larvae experienced an increased mortality rate of 22 when exposed to boat noise playbacks anthropogenic noise can have negative effects on invertebrates that aid in controlling environmental processes that are crucial to the ecosystem there are a variety of natural underwater sounds produced by waves in coastal and shelf habitats and biotic communication signals that do not negatively impact the ecosystem the changes in behavior of invertebrates vary depending on the type of anthropogenic noise and is similar to natural noisescapesexperiments have examined the behavior and physiology of the clam ruditapes philippinarum the decapod nephrops norvegicus and the brittlestar amphiura filiformis that are affected by sounds resembling shipping and building noises the three invertebrates in the experiment were exposed to continuous broadband noise and impulsive broadband noise the anthropogenic noise impeded the bioirrigation and burying behavior of nephrops norvegicus in addition the decapod exhibited a reduction in movement ruditapes philippinarum experienced stress which caused a reduction in surface relocation the anthropogenic noise caused the clams to close their valves and relocate to an area above the interface of the sedimentwater this response inhibits the clam from 
mixing the top layer of the sediment profile and hinders suspension feeding sound causes amphiura filiformis to experience changes in physiological processes which results in irregularity of bioturbation behaviorthese invertebrates play an important role in transporting substances for benthic nutrient cycling as a result ecosystems are negatively impacted when species cannot perform natural behaviors in their environment locations with shipping lanes dredging or commercial harbors are known as continuous broadband sound piledriving and construction are sources that exhibit impulsive broadband noise the different types of broadband noise have different effects on the varying species of invertebrates and how they behave in their environmentanother study found that the valve closures in the pacific oyster magallana gigas was a'
  • 'amount of fidelity of the transient stimuli used to evoke a response bone conduction abr thresholds can be used if other limitations are present but thresholds are not as accurate as abr thresholds recorded through air conductionadvantages of hearing aid selection by brainstem audiometry include the following applications evaluation of loudness perception in the dynamic range of hearing recruitment determination of basic hearing aid properties gain compression factor compression onset level cases with middle ear impairment contrary to acoustic reflex methods noncooperative subjects even in sleep sedation or anesthesia without influence of age and vigilance contrary to cortical evoked responsesdisadvantages of hearing aid selection by brainstem audiometry include the following applications in cases of severe hearing impairment including no or only poor information as to loudness perception no control of compression setting no frequencyspecific compensation of hearing impairment there are about 188000 people around the world who have received cochlear implants in the united states alone there are about 30000 adults and over 30000 children who are recipients of cochlear implants this number continues to grow as cochlear implantation is becoming more and more accepted in 1961 dr william house began work on the predecessor for todays cochlear implant william house is an otologist and is the founder of house ear institute in los angeles california this groundbreaking device which was manufactured by 3m company was approved by the fda in 1984 although this was a single channel device it paved the way for future multi channel cochlear implants currently as of 2007 the three cochlear implant devices approved for use in the us are manufactured by cochlear medel and advanced bionics the way a cochlear implant works is sound is received by the cochlear implants microphone which picks up input that needs to be processed to determine how the electrodes will receive the 
signal this is done on the external component of the cochlear implant called the sound processor the transmitting coil also an external component transmits the information from the speech processor through the skin using frequency modulated radio waves the signal is never turned back into an acoustic stimulus unlike a hearing aid this information is then received by the cochlear implants internal components the receiver stimulator delivers the correct amount of electrical stimulation to the appropriate electrodes on the array to represent the sound signal that was detected the electrode array stimulates the remaining auditory nerve fibers in the cochlea which carry the signal on to the brain where it is processed one way to measure the'
Label 23
  • 'autophagy was induced artificially postexercise the accumulation of damaged organelles in collagen vi deficient muscle fibres was prevented and cellular homeostasis was maintained both studies demonstrate that autophagy induction may contribute to the beneficial metabolic effects of exercise and that it is essential in the maintaining of muscle homeostasis during exercise particularly in collagen vi fiberswork at the institute for cell biology university of bonn showed that a certain type of autophagy ie chaperoneassisted selective autophagy casa is induced in contracting muscles and is required for maintaining the muscle sarcomere under mechanical tension the casa chaperone complex recognizes mechanically damaged cytoskeleton components and directs these components through a ubiquitindependent autophagic sorting pathway to lysosomes for disposal this is necessary for maintaining muscle activity because autophagy decreases with age and age is a major risk factor for osteoarthritis the role of autophagy in the development of this disease is suggested proteins involved in autophagy are reduced with age in both human and mouse articular cartilage mechanical injury to cartilage explants in culture also reduced autophagy proteins autophagy is constantly activated in normal cartilage but it is compromised with age and precedes cartilage cell death and structural damage thus autophagy is involved in a normal protective process chondroprotection in the joint cancer often occurs when several different pathways that regulate cell differentiation are disturbed autophagy plays an important role in cancer – both in protecting against cancer as well as potentially contributing to the growth of cancer autophagy can contribute to cancer by promoting survival of tumor cells that have been starved or that degrade apoptotic mediators through autophagy in such cases use of inhibitors of the late stages of autophagy such as chloroquine on the cells that use autophagy to survive 
increases the number of cancer cells killed by antineoplastic drugsthe role of autophagy in cancer is one that has been highly researched and reviewed there is evidence that emphasizes the role of autophagy as both a tumor suppressor and a factor in tumor cell survival recent research has shown however that autophagy is more likely to be used as a tumor suppressor according to several models several experiments have been done with mice and varying beclin1 a protein that regulates autophagy when the beclin1 gene was altered to be heterozygous beclin 1 the mice were'
  • 'into human antibodies this results in a molecule of approximately 95 human origin humanised antibodies bind antigen much more weakly than the parent murine monoclonal antibody with reported decreases in affinity of up to several hundredfold increases in antibodyantigen binding strength have been achieved by introducing mutations into the complementarity determining regions cdr using techniques such as chainshuffling randomization of complementaritydetermining regions and antibodies with mutations within the variable regions induced by errorprone pcr e coli mutator strains and sitespecific mutagenesis human monoclonal antibodies suffix umab are produced using transgenic mice or phage display libraries by transferring human immunoglobulin genes into the murine genome and vaccinating the transgenic mouse against the desired antigen leading to the production of appropriate monoclonal antibodies murine antibodies in vitro are thereby transformed into fully human antibodiesthe heavy and light chains of human igg proteins are expressed in structural polymorphic allotypic forms human igg allotype is one of the many factors that can contribute to immunogenicity anticancer monoclonal antibodies can be targeted against malignant cells by several mechanisms ramucirumab is a recombinant human monoclonal antibody and is used in the treatment of advanced malignancies in childhood lymphoma phase i and ii studies have found a positive effect of using antibody therapymonoclonal antibodies used to boost an anticancer immune response is another strategy to fight cancer where cancer cells are not targeted directly strategies include antibodies engineered to block mechanisms which downregulate anticancer immune responses checkpoints such as pd1 and ctla4 checkpoint therapy and antibodies modified to stimulate activation of immune cells monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab which are effective in rheumatoid arthritis crohns disease and 
ulcerative colitis by their ability to bind to and inhibit tnfα basiliximab and daclizumab inhibit il2 on activated t cells and thereby help preventing acute rejection of kidney transplants omalizumab inhibits human immunoglobulin e ige and is useful in moderatetosevere allergic asthma alzheimers disease ad is a multifaceted agedependent progressive neurodegenerative disorder and is a major cause of dementia according to the amyloid hypothesis the accumulation of extracellular amyloid betapeptides aβ into plaques via oligomerization leads to'
  • 'a small proportion of humans show partial or apparently complete innate resistance to hiv the virus that causes aids the main mechanism is a mutation of the gene encoding ccr5 which acts as a coreceptor for hiv it is estimated that the proportion of people with some form of resistance to hiv is under 10 in 1994 stephen crohn became the first person discovered to be completely resistant to hiv in all tests performed despite having partners infected by the virus crohns resistance was a result of the absence of a receptor which prevent the hiv from infecting cd4 present on the exterior of the white blood cells the absence of such receptors or rather the shortening of them to the point of being inoperable is known as the delta 32 mutation this mutation is linked to groups of people that have been exposed to hiv but remain uninfected such as some offspring of hiv positive mothers health officials and sex workersin early 2000 researchers discovered a small group of sex workers in nairobi kenya who were estimated to have sexual contact with 60 to 70 hiv positive clients a year without signs of infection these sex workers were not found to have the delta mutation leading scientists to believe other factors could create a genetic resistance to hiv researchers from public health agency of canada have identified 15 proteins unique to those virusfree sex workers later however some sex workers were discovered to have contracted the virus leading oxford university researcher sarah rowlandjones to believe continual exposure is a requirement for maintaining immunity cc chemokine receptor type 5 also known as ccr5 or cd195 is a protein on the surface of white blood cells that is involved in the immune system as it acts as a receptor for chemokines this is the process by which t cells are attracted to specific tissue and organ targets many strains of hiv use ccr5 as a coreceptor to enter and infect host cells a few individuals carry a mutation known as ccr5δ32 in the ccr5 gene 
protecting them against these strains of hivin humans the ccr5 gene that encodes the ccr5 protein is located on the short p arm at position 21 on chromosome 3 a cohort study from june 1981 to october 2016 looked into the correlation between the delta 32 deletion and hiv resistance and found that homozygous carriers of the delta 32 mutation are resistant to mtropic strains of hiv1 infection certain populations have inherited the delta 32 mutation resulting in the genetic deletion of a portion of the ccr5 gene in 2019 it was discovered that a mutation of tnpo3 that causes type 1f'
Label 1
  • 'free path of molecules in a gas the molecular composition of the gas contributes both as the mass m of the molecules and their heat capacities and so both have an influence on speed of sound in general at the same molecular mass monatomic gases have slightly higher speed of sound over 9 higher because they have a higher γ 53 166 than diatomics do 75 14 thus at the same molecular mass the speed of sound of a monatomic gas goes up by a factor of this gives the 9 difference and would be a typical ratio for speeds of sound at room temperature in helium vs deuterium each with a molecular weight of 4 sound travels faster in helium than deuterium because adiabatic compression heats helium more since the helium molecules can store heat energy from compression only in translation but not rotation thus helium molecules monatomic molecules travel faster in a sound wave and transmit sound faster sound travels at about 70 of the mean molecular speed in gases the figure is 75 in monatomic gases and 68 in diatomic gases note that in this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration see heat capacity however vibrational modes simply cause gammas which decrease toward 1 since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature and thus do not affect molecular velocity and sound velocity thus the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs polyatomic molecules with the speed remaining greater in monatomics by far the most important factor influencing the speed of sound in air is temperature the speed is proportional to the square root of the absolute temperature giving an increase of about 06 ms per degree celsius for this reason the pitch of a musical wind instrument increases as its temperature increases the speed of sound is raised by humidity the 
difference between 0 and 100 humidity is about 15 ms at standard pressure and temperature but the size of the humidity effect increases dramatically with temperature the dependence on frequency and pressure are normally insignificant in practical applications in dry air the speed of sound increases by about 01 ms as the frequency rises from 10 hz to 100 hz for audible frequencies above 100 hz it is relatively constant standard values of the speed of sound are quoted in the limit of low frequencies where the wavelength is large compared to the mean free pathas shown above the approximate value 10003 33333 ms is exact a little below 5 °c and is a good approximation for all usual outside temperatures in temperate climates at least hence the usual rule of thumb to determine how far'
  • 'tailplane settingentering service in 1951 the boeing b47 stratojet was the worlds first purposely built jet bomber to include one piece stabilator design a stabilator was considered for the boeing b52 stratofortress but rejected due to the unreliability of hydraulics at the timethe north american f86 sabre the first us air force aircraft which could go supersonic although in a shallow dive was introduced with a conventional horizontal stabilizer with elevators which was eventually replaced with a stabilator when stabilators can move differentially to perform the roll control function of ailerons as they do on many modern fighter aircraft they are known as elevons or rolling tails a canard surface looking like a stabilator but not stabilizing like a tailplane can also be mounted in front of the main wing in a canard configuration curtisswright xp55 ascender stabilators on military aircraft have the same problem of too light control forces inducing overcontrol as general aviation aircraft unlike light aircraft supersonic aircraft are not fitted with antiservo tabs which would add unacceptable drag in older jet fighter aircraft a resisting force was generated within the control system either by springs or a resisting hydraulic force rather than by an external antiservo tab for example the north american f100 super sabre used gearing and a variable stiffness spring attached to the control stick to provide an acceptable resistance to pilot input in modern fighters control inputs are processed by computers fly by wire and there is no direct connection between the pilots stick and the stabilator most modern airliners do not have a stabilator instead they have an adjustable horizontal stabilizer and a separate elevator control the movable horizontal stabilizer is adjusted to keep the pitch axis in trim during flight as the speed changes or as fuel is burned and the center of gravity moves these adjustments are commanded by the autopilot when it is engaged or by the 
human pilot if the plane is being flown manually adjustable stabilizers are not the same as stabilators a stabilator is controlled by the pilots control yoke or stick whereas an adjustable stabilizer is controlled by the trim system in the boeing 737 the adjustable stabilizer trim system is powered by an electrically operated jackscrewone example of an airliner with a genuine stabilator used for flight control is the lockheed l1011'
  • 'industrial airport montana the program included noise measurement with over 1400 sensors for internal and external measurements noise reduction including safran undercarriage modifications saf testing with blends of 30 to 50 sanitisation methods for the covid19 pandemic digital textbased atc routing communications 2021 boeing 737 max 9 this 5month program was conducted with a new airframe originally destined for corendon dutch airlines but was painted in a special alaska airlines livery with ecodemonstrator stickers in october 2021 the aircraft flew from seattle to glasgow scotland for the united nations cop26 climate change conference bringing executives from boeing and alaska airlines and fuelled by a 50 saf fuel blend the testing program included low profile anticollision light for weight and drag reduction and increased visibility textbased atc communications halonfree fire extinguishing ground testing only noise reduction engine nacelles including testing at glasgow industrial airport montana cabin walls made from recycled material 50 saf blend atmospheric greenhouse gas measurement system integration for airliners passenger air vent designs to create an air curtain between seat rows 2022 boeing 777200er the aircraft was originally delivered to singapore airlines in 2002 and flew most recently for surinam airways it wears a livery celebrating the 10th anniversary of the ecodemonstrator program boeing implied that this aircraft will operate as the ecodemonstrator test aircraft until 2024 the company stated that the sixmonth 2022 program would demonstrate 30 new technologies among which were the use of a 30 saf blend disinfection of water from sinks for reuse in toilet flushing weight reduction through 3d printed parts noise reduction techniques vortex generators which retract during cruise headworn headup display enhanced vision system firefighting system that does not use halon environmentallyfriendly galley cooler refrigerant 2023 in april 2023 boeing 
announced that the 777200er would be testing 19 technologies during the year including cargo hold wall panels made from recycled and sustainable materials fibreoptic fuel quantity sensors compatible with saf smart airport maps by boeing subsidiary jeppesen for active airport taxiing monitoring for electronic flight bags all flights to use saf in the highest available blendbetween 25 june and 29 june 2023 the aircraft operated from london stansted airport performing flights over the netherlands belgium germany and the czech republic subsequently returning to its base at seattle as of october 2023 no announcement had been made about the purpose of these flights most information from planespottersnet all aircraft apart from the 2022 777 had ecodemonstrator stickers applied to the'
Label 4
  • 'radius of confinement for a confined motion the msd has been widely used in early applications of long but not necessarily redundant singleparticle trajectories in a biological context however the msd applied to long trajectories suffers from several issues first it is not precise in part because the measured points could be correlated second it cannot be used to compute any physical diffusion coefficient when trajectories consists of switching episodes for example alternating between free and confined diffusion at low spatiotemporal resolution of the observed trajectories the msd behaves sublinearly with time a process known as anomalous diffusion which is due in part to the averaging of the different phases of the particle motion in the context of cellular transport ameoboid high resolution motion analysis of long spts in microfluidic chambers containing obstacles revealed different types of cell motions depending on the obstacle density crawling was found at low density of obstacles and directed motion and random phases can even be differentiated statistical methods to extract information from spts are based on stochastic models such as the langevin equation or its smoluchowskis limit and associated models that account for additional localization point identification noise or memory kernel the langevin equation describes a stochastic particle driven by a brownian force ξ displaystyle xi and a field of force eg electrostatic mechanical etc with an expression f x t displaystyle fxt m x ¨ γ x [UNK] − f x t ξ displaystyle mddot xgamma dot xfxtxi where m is the mass of the particle and γ 6 π a ρ displaystyle gamma 6pi arho is the friction coefficient of a diffusing particle ρ displaystyle rho the viscosity here ξ displaystyle xi is the δ displaystyle delta correlated gaussian white noise the force can derived from a potential well u so that f x t − u ′ x displaystyle fxtux and in that case the equation takes the form m d 2 x d t 2 γ d x d t ∇ u x 2 ε γ d η d t 
displaystyle mfrac d2xdt2gamma frac dxdtnabla uxsqrt 2varepsilon gamma frac deta dt where ε k b t displaystyle varepsilon ktextbt is the energy and k b displaystyle ktextb the boltzmann constant and t the temperature langevins equation is used to describe trajectories where inertia or acceleration matters for example at very short timescales'
  • 'counting is the process of determining the number of elements of a finite set of objects that is determining the size of a set the traditional way of counting consists of continually increasing a mental or spoken counter by a unit for every element of the set in some order while marking or displacing those elements to avoid visiting the same element more than once until no unmarked elements are left if the counter was set to one after the first object the value after visiting the final object gives the desired number of elements the related term enumeration refers to uniquely identifying the elements of a finite combinatorial set or infinite set by assigning a number to each element counting sometimes involves numbers other than one for example when counting money counting out change counting by twos 2 4 6 8 10 12 or counting by fives 5 10 15 20 25 there is archaeological evidence suggesting that humans have been counting for at least 50000 years counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members prey animals property or debts that is accountancy notched bones were also found in the border caves in south africa which may suggest that the concept of counting was known to humans as far back as 44000 bce the development of counting led to the development of mathematical notation numeral systems and writing counting can occur in a variety of forms counting can be verbal that is speaking every number out loud or mentally to keep track of progress this is often used to count objects that are present already instead of counting a variety of things over time counting can also be in the form of tally marks making a mark for each number and then counting all of the marks when done tallying this is useful when counting objects over time such as the number of times something occurs during the course of a day tallying is base 1 counting normal counting is done in base 10 computers use base 2 
counting 0s and 1s also known as boolean algebra counting can also be in the form of finger counting especially when counting small numbers this is often used by children to facilitate counting and simple mathematical operations fingercounting uses unary notation one finger one unit and is thus limited to counting 10 unless you start in with your toes older finger counting used the four fingers and the three bones in each finger phalanges to count to the number twelve other handgesture systems are also in use for example the chinese system by which one can count to 10 using only gestures of one hand by using finger binary base 2 counting it is possible to keep a finger count up to 1023 210 − 1 various devices can'
  • '##xt1sum t0infty vt1prtgxleq t1mid gxsum t0infty vt1leftfrac prgxtprgxrightleftfrac prxtgleq xt1prgxtrightsum t0infty vt1tpxcdot qxtendaligned where t p x displaystyle tpx is the probability that x survives to age xt and q x t displaystyle qxt is the probability that xt dies within one year if the benefit is payable at the moment of death then tgx g x and the actuarial present value of one unit of whole life insurance is calculated as a [UNK] x e v t [UNK] 0 ∞ v t f t t d t [UNK] 0 ∞ v t t p x μ x t d t displaystyle overline axevtint 0infty vtfttdtint 0infty vttpxmu xtdt where f t displaystyle ft is the probability density function of t t p x displaystyle tpx is the probability of a life age x displaystyle x surviving to age x t displaystyle xt and μ x t displaystyle mu xt denotes force of mortality at time x t displaystyle xt for a life aged x displaystyle x the actuarial present value of one unit of an nyear term insurance policy payable at the moment of death can be found similarly by integrating from 0 to n the actuarial present value of an n year pure endowment insurance benefit of 1 payable after n years if alive can be found as n e x p r g x n v n n p x v n displaystyle nexprgxnvnnpxvn in practice the information available about the random variable g and in turn t may be drawn from life tables which give figures by year for example a three year term life insurance of 100000 payable at the end of year of death has actuarial present value 100 000 a x 1 3 [UNK] 100 000 [UNK] t 1 3 v t p r t g x t displaystyle 100000astackrel 1xoverline 3100000sum t13vtprtgxt for example suppose that there is a 90 chance of an individual surviving any given year ie t has a geometric distribution with parameter p 09 and the set 1 2 3 for its support then p r t g x 1 01 p r t g x 2 09 01 009 p r t g x 3 09'
Label 14
  • 'in amniote embryology the hypoblast is one of two distinct layers arising from the inner cell mass in the mammalian blastocyst or from the blastodisc in reptiles and birds the hypoblast gives rise to the yolk sac which in turn gives rise to the chorionthe hypoblast is a layer of cells in fish and amniote embryos the hypoblast helps determine the embryos body axes and its migration determines the cell movements that accompany the formation of the primitive streak and helps to orient the embryo and create bilateral symmetry the other layer of the inner cell mass the epiblast differentiates into the three primary germ layers ectoderm mesoderm and endoderm the hypoblast lies beneath the epiblast and consists of small cuboidal cells the hypoblast in fish but not in birds and mammals contains the precursors of both the endoderm and mesoderm in birds and mammals it contains precursors to the extraembryonic endoderm of the yolk sacin chick embryos early cleavage forms an area opaca and an area pellucida and the region between these is called the marginal zone area opaca is the blastoderms peripheral part where the cells remain unseparated from the yolk it is a white area that transmits light although the hypoblast does not contribute to the embryo it influences the orientation of the embryo the hypoblast also inhibits primitive streak formation the absence of hypoblast results in multiple primitive streaks in chicken embryos the primitive endoderm derived yolk sac ensures the proper organogenesis of the fetus and the exchange of nutrients gases and wastes hypoblast cells also provide chemical signals that specify the migration of epiblast cells in birds the primitive streak formation is generated by a thickening of the epiblast called the kollers sickle the kollers sickle is created at the posterior edge of the area pellucida while the rest of the cells of the area pellucida remain at the surface forming the epiblast in chicks the mesoderm cells dont invaginate like 
in amphibians but they migrate medially and caudally from both sides and create a midline thickening called primitive streak the primitive streak grows rapidly in length as more presumptive mesoderm cells continue to aggregate inward gas'
  • 'handedness lateralization of language function assessment on a dichotic listening test'
  • '##pecific in immunologically competent animalsculturing virus was once technically difficult in 1931 ernest goodpasture and alice miles woodruff developed a new technique that used chicken eggs to propagate a pox virus building on their success the chick was used to isolate the mumps virus for vaccine development and it is still used to culture some viruses and parasites today the ability of chicken embryonic nerves to infiltrate a mouse tumor suggested to rita levimontalcini that the tumor must produce a diffusible growth factor 1952 she identified nerve growth factor ngf leading to the discovery of a large family of growth factors which are key regulators during normal development and disease processes including cancer the adult chicken has also made significant contributions to the advancement of science by inoculating chickens with cholera bacteria pasteurella multocida from an overgrown and thereby attenuated culture louis pasteur produced the first labderived attenuated vaccine 1860s great advances in immunology and oncology continued to characterize the 20th century for which we indebted to the chicken model peyton rous 18791970 won the nobel prize for discovering that viral infection of chicken could induce sarcoma rous 1911 steve martin followed up on this work and identified a component of a chicken retrovirus src which became the first known oncogene j michael bishop and harold varmus with their colleagues 1976 extended these findings to humans showing that cancer causing oncogenes in mammals are induced by mutations to protooncogenesdiscoveries in the chicken ultimately divided the adaptive immune response into antibody bcell and cellmediated tcell responses chickens missing their bursa an organ with an unknown function at the time could not be induced to make antibodies through these experiments bruce glick correctly deduced that bursa was responsible for making the cells that produced antibodies bursa cells were termed bcells for bursa to 
differentiate them from thymus derived tcells the chicken embryo is a unique model that overcomes many limitations to studying the biology of cancer in vivo the chorioallantoic membrane cam a wellvascularized extraembryonic tissue located underneath the eggshell has a successful history as a biological platform for the molecular analysis of cancer including viral oncogenesis carcinogenesis tumor xenografting tumor angiogenesis and cancer metastasis since the chicken embryo is naturally immunodeficient the cam readily supports the engraftment of both normal and tumor tissues the avian cam successfully supports most cancer cell characteristics including growth invasion angiogenesis and re'
31
  • 'neti neti sanskrit नति नति is a sanskrit expression which means not this not that or neither this nor that neti is sandhi from na iti not so it is found in the upanishads and the avadhuta gita and constitutes an analytical meditation helping a person to understand the nature of the brahman by negating everything that is not brahman one of the key elements of jnana yoga practice is often a neti neti search the purpose of the exercise is to negate all objects of consciousness including thoughts and the mind and to realize the nondual awareness of reality neti neti meaning not this not this is the method of vedic analysis of negation it is a keynote of vedic inquiry with its aid the jnani negates identification with all things of this world which is not the atman in this way he negates the anatman notself through this gradual process he negates the mind and transcends all worldly experiences that are negated till nothing remains but the self he attains union with the absolute by denying the body name form intellect senses and all limiting adjuncts and discovers what remains the true i alone lcbeckett in his book neti neti explains that this expression is an expression of something inexpressible it expresses the ‘ suchness ’ the essence of that which it refers to when ‘ no other definition applies to it ’ neti neti negates all descriptions about the ultimate reality but not the reality itself intuitive interpretation of uncertainty principle can be expressed by neti neti that annihilates ego and the world as nonself anatman it annihilates our sense of self altogetheradi shankara was one of the foremost advaita philosophers who advocated the netineti approach in his commentary on gaudapada ’ s karika he explains that brahman is free from adjuncts and the function of neti neti is to remove the obstructions produced by ignorance his disciple sureshvara further explains that the negation neti neti does not have negation as its purpose it purports identity the sage of 
the brihadaranyaka upanishad ii iii 16 beginning with there are two forms of brahman the material and the immaterial the solid and the fluid the sat ‘ being ’ and tya ‘ that ’ of satya – which means true denies the existence of everything other than brahman and therefore there exists no separate entity like jiva which shankara states is'
  • '##dicate will contain paradoxical sentences such as this sentence is not true as a result tarski held that the semantic theory could not be applied to any natural language such as english because they contain their own truth predicates donald davidson used it as the foundation of his truthconditional semantics and linked it to radical interpretation in a form of coherentism bertrand russell is credited with noticing the existence of such paradoxes even in the best symbolic formations of mathematics in his day in particular the paradox that came to be named after him russells paradox russell and whitehead attempted to solve these problems in principia mathematica by putting statements into a hierarchy of types wherein a statement cannot refer to itself but only to statements lower in the hierarchy this in turn led to new orders of difficulty regarding the precise natures of types and the structures of conceptually possible type systems that have yet to be resolved to this day kripkes theory of truth named after saul kripke contends that a natural language can in fact contain its own truth predicate without giving rise to contradiction he showed how to construct one as follows beginning with a subset of sentences of a natural language that contains no occurrences of the expression is true or is false so the barn is big is included in the subset but not the barn is big is true nor problematic sentences such as this sentence is false defining truth just for the sentences in that subset extending the definition of truth to include sentences that predicate truth or falsity of one of the original subset of sentences so the barn is big is true is now included but not either this sentence is false nor the barn is big is true is true defining truth for all sentences that predicate truth or falsity of a member of the second set imagine this process repeated infinitely so that truth is defined for the barn is big then for the barn is big is true then for the barn is big 
is true is true and so ontruth never gets defined for sentences like this sentence is false since it was not in the original subset and does not predicate truth of any sentence in the original or any subsequent set in kripkes terms these are ungrounded since these sentences are never assigned either truth or falsehood even if the process is carried out infinitely kripkes theory implies that some sentences are neither true nor false this contradicts the principle of bivalence every sentence must be either true or false since this principle is a key premise in deriving the liar paradox the paradox is dissolvedhowever it has been shown by godel that selfreference cannot be avoided naive'
  • 'cause of error but the interpretations can be the term prolepsis was used by epicureans to describe the way the mind forms general concepts from sense perceptions to the stoics more like heraclitus than anaxagoras order in the cosmos comes from an entity called logos the cosmic reason but as in anaxagoras this cosmic reason like human reason but higher is connected to the reason of individual humans the stoics however did not invoke incorporeal causation but attempted to explain physics and human thinking in terms of matter and forces as in aristotelianism they explained the interpretation of sense data requiring the mind to be stamped or formed with ideas and that people have shared conceptions that help them make sense of things koine ennoia nous for them is soul somehow disposed pos echon the soul being somehow disposed pneuma which is fire or air or a mixture as in plato they treated nous as the ruling part of the soulplutarch criticized the stoic idea of nous being corporeal and agreed with plato that the soul is more divine than the body while nous mind is more divine than the soul the mix of soul and body produces pleasure and pain the conjunction of mind and soul produces reason which is the cause or the source of virtue and vice from “ on the face in the moon ” albinus was one of the earliest authors to equate aristotles nous as prime mover of the universe with platos form of the good alexander of aphrodisias alexander of aphrodisias was a peripatetic aristotelian and his on the soul referred to as de anima in its traditional latin title explained that by his interpretation of aristotle potential intellect in man that which has no nature but receives one from the active intellect is material and also called the material intellect nous hulikos and it is inseparable from the body being only a disposition of it he argued strongly against the doctrine of immortality on the other hand he identified the active intellect nous poietikos through whose agency 
the potential intellect in man becomes actual not with anything from within people but with the divine creator itself in the early renaissance his doctrine of the souls mortality was adopted by pietro pomponazzi against the thomists and the averroists for him the only possible human immortality is an immortality of a detached human thought more specifically when the nous has as the object of its thought the active intellect itself or another incorporeal intelligible formalex'
41
  • 'the urban hierarchy of brazil places brazils cities into categories global cities national metropolises regional metropolises regional capitols a b and c subregional centers a and b and zone centers a b and c brazils global cities possess areas of influence that surpass the countrys borders brazils global cities are the municipalities of rio de janeiro and sao paulo brasilia sao paulo and rio de janeiro are brazils national metropolises they are the first level of territorial management providing a focus for centers located in all parts of brazil regional metropolises constitute the second level of territorial management and influence the surrounding macroregion where they are located brazils regional metropolises include curitiba salvador porto alegre goiania fortaleza recife manaus belem and belo horizonte the regional capitol is the third level of territorial management and influences its state and its surrounding states these are subdivided into a regional capitols cities such as natal campinas florianopolis and vitoria b regional capitols cities such as caxias do sul chapeco porto velho and campina grande and c regional capitols cities such as campos caruaru governador valadares and mossoro subregional centers possess influence over cities that are within close proximity and nearby towns and rural areas as well these are subdivided into a subregional centers cities such as alfenas anapolis sao mateus and umuarama and b subregional centers afogados da ingazeira cacoal caratinga and tefe the zone centers are the cities or towns that have an important regional influence but limited to the immediate surrounding area and exercising elementary management functions these are subdivided into two levels a zone centers cities such as tabatinga lagoa vermelha lins and tres de maio and b zone centers towns such as afonso claudio eirunepe sao bento and taio demography of brazil cities of brazilgeneral urban hierarchy urban studies and planning urban geography'
  • 'world urban forum iii was an international unhabitat event on urban sustainability also known as wuf3 world urban forum and fum3 forum urbain mondial wuf3 was organized by the unhabitat and facilitated and funded by the government of canada it was held on 19 – 23 june 2006 in vancouver to help solve urgent problems of the worlds cities the theme of the third session of the world urban forum was sustainable cities – turning ideas into action from ideas to action was the intended outcome of the conference officially it was suggested that this conference would be considered a success if every participant took home and implemented at least one new idea within the next 50 years twothirds of the worlds population will live in urban areas as these cities expand the world community faces the challenge of minimizing the growing poverty crisis and improving the urban poors access to basic facilities such as shelter clean water and sanitation world urban forum 3 brought together thousands of the worlds best thinkers on urbanization – experts decision makers and members of public and private institutions – to zero in on solutions to these key 21st century challenges habitat jam a threeday international online event was conceived to set the stage for the wuf3 conference seventy actionable ideas were collected through the jam and were used to define themes and shape discussion topics for delegates attending the forum participation in habitat jam was open to public and privatesector organizations and individuals around the world with an interest in urban issues while the jam is over the discussions remain available online attendance at wuf3 was estimated at 11418 people registered from more than 100 countries the number of participants was 9689 while 1847 were support staff and volunteers the gender ratios were 467 female and 521 male participants identified as government parliamentarians or local authority comprised 3094 of the participants the remaining participants were 
classified as nongovernmental organizations private sector professional and research institutions foundations media intergovernmental organizations other participants canada secretariat and no affiliation indicated compared to previous forums there was a notable increase in private sector participation up from 203 to 1187 private sector participants between wuf2 and wuf3 in vancouver'
  • 'sustainable urbanism is both the study of cities and the practices to build them urbanism that focuses on promoting their long term viability by reducing consumption waste and harmful impacts on people and place while enhancing the overall wellbeing of both people and place wellbeing includes the physical ecological economic social health and equity factors among others that comprise cities and their populations in the context of contemporary urbanism the term cities refers to several scales of human settlements from towns to cities metropolises and megacity regions that includes their peripheries suburbs exurbs sustainability is a key component to professional practice in urban planning and urban design along with its related disciplines landscape architecture architecture and civil and environmental engineering green urbanism and ecological urbanism are other common terms that are similar to sustainable urbanism however they can be construed as focusing more on the natural environment and ecosystems and less on economic and social aspects also related to sustainable urbanism are the practices of land development called sustainable development which is the process of physically constructing sustainable buildings as well as the practices of urban planning called smart growth or growth management which denote the processes of planning designing and building urban settlements that are more sustainable than if they were not planned according to sustainability criteria and principles the origin of the term sustainable urbanism has been attributed to professor susan owens of cambridge university in the uk in the 1990s according to her doctoral student and now professor of architecture phillip tabb the first university graduate program named sustainable urbanism was founded by professors michael neuman and phillip tabb at texas am university in 2002 there are now dozens of university programs with that name worldwide as of 2018 there are hundreds of scholarly 
articles books and publications whose titles contain the exact words sustainable urbanism and thousands of articles books and publications that contain that exact term according to google scholar in 2007 two important events occurred in the usa that furthered the knowledge base and diffusion of sustainable urbanism first was the international conference on sustainable urbanism at texas am university in april which drew nearly 200 persons from five continents second later in the year was the publication of the book sustainable urbanism by doug farr according to farr this approach aims to eliminate environmental impacts of urban development by supplying and providing all resources locally the full life cycle of services and public goods such as electricity and food are evaluated from production to consumption with the intent of eliminating waste or environmental externalities since that time significant research and practice worldwide has broadened the term considerably to include social economic welfare and public health factors among others to the environmental and physical factors in the farr book thus taking it beyond an urban design field into all of'
30
  • 'psychological repercussions for caregivers palliative care and bereavement palliative care is an especially demanding task for cancer caregivers than less progressive cancer because the care required for those with terminal cancer tends to be more demanding caregivers in this situation often report high caregiver burden lasting the entire period caregiver distress also tends to increase as patients progress through this phase causing a lower caregiver quality of life having a difficult time with caregiving increases distress caregivers face as they enter the bereavement period after the death of their loved one whereas finding meaning in caring for their loved one may relate to better long term adjustment for caregivers before bereavement loved ones often face anticipatory grief where they mourn and prepare for the loss of their loved one families often react with fears of abandonment anxiety hopelessness and helplessness and an intensified attachment to their loved one upon the death of their loved one caregivers typically experience grief which in a minority of cases may be complicated by a diagnosis of major depressive disorder later in the grief process survivorship as patients and caregivers enter the survivorship period after the completion of primary treatment they often feel uncertain and overwhelmed by what lies before them patients often leave their last treatment session without much direction as to what they should expect next this is distressing for the caregiver as well because caregiving continues into this phase of the cancer trajectory because of the longterm and late effects of treatment the psychological distress patients face and any other related factors a common concern faced by survivors and their caregivers is how to return to their normal life without cancer as well survivors and caregivers are often uncertain and fearful about death and cancer recurrence even in the absence of cancerrelated symptoms recurrence cancer recurrence has 
been called one of the most stressful events in the course of illness for both patients and their families research on the effect that recurrence has on caregivers is mixed some caregivers may report even poorer quality of life than the recurrent survivors themselves as well as depressive moods fears about recurrence and hopelessness kim given 2008 in contrast some caregivers report lower levels of strain in recurrence than with the initial cancer presumably because they had adapted to the stress of chronic illness along with psychological repercussions of cancer some caregivers also experience physical effects due to caregiving this is particularly true of highly burdensome caregiving as is typically the case with older or palliative patients'
  • 'disparate sources and create electronically linked national health information exchange it might be somehow related a big health consortium was formed in 2008 to promote personalized medicine but disbanded in 2012 in july 2009 cabig announced a collaboration with the dr susan love research foundation to build an online cohort of women willing to participate in clinical trials called the army of women it had a goal of one million in its database by december 2009 the site was launched and about 30000 women and men signed up by 2010the cancer genome atlas aimed to characterize more than 10000 tumors across at least 20 cancers by 2015 cabig provided connectivity data standards and tools to collect organize share and analyze the diverse research data in its database since 2007 nci worked with uk national cancer research institute ncri the two organizations shared technologies for collaborative research and the secure exchange of research data using cagrid and the ncri oncology information exchange onix web portal announced in august 2009 onix shut down in march 2012 the duke cancer institute used cabig clinical trials tools in their collaboration with the beijing cancer hospital of peking university the project intended to connect 65 ncidesignated cancer centers to enable collaborative research participating institutions could either “ adopt ” cabig tools to share data directly through cagrid or “ adapt ” commercial or inhouse developed software to be cabigcompatible the cabig program developed software development kits sdks for interoperable software tools and instructions on the process of adapting existing tools or developing applications to be cabigcompatible the enterprise support network program included domainspecific expertise and support service providers third party organizations that provide assistance on a contractforservices basis a web portal using the liferay software was available from 2008 to 2013 since 2004 the cabig program used opensource 
communities adapted from other publicprivate partnerships the cabig program produced software under contract to software development teams largely within the commercial research communityin general software developed under us government contracts is the property of the us government and the us taxpayers depending on the terms in specific contracts they might be accessible only by request under the freedom of information act foia the timeliness of response to such requests might preclude a requester from ever gaining any secondary value from software released under a foia request the cabig program placed the all cabig software in a software repository freely accessible for download open source means anyone can modify the downloaded software however the licensing applied to the downloaded software allows greater flexibility than is typical an individual or enterprise is allowed to contribute the modified code back to the cabig program but is not required to'
  • 'taken from the patients own body autologous vaccine or from another patient allogeneic vaccine several autologous vaccines such as oncophage for kidney cancer and vitespen for a variety of cancers have either been released or are undergoing clinical trial fdaapproved vaccines such as sipuleucelt for metastasizing prostate cancer or nivolumab for melanoma and lung cancer can act either by targeting overexpressed or mutated proteins or by temporarily inhibiting immune checkpoints to boost immune activity screening procedures commonly sought for more prevalent cancers such as colon breast and cervical have greatly improved in the past few decades from advances in biomarker identification and detection early detection of pancreatic cancer biomarkers was accomplished using sersbased immunoassay approach a sersbase multiplex proteinbiomarker detection platform in a microfluidic chip to detect is used to detect several protein biomarkers to predict the type of disease and critical biomarkers and increase the chance of diagnosis between diseases with similar biomarkers pc ovc and pancreatitisfor detecting cancer early all eligible people need to go to screenings but disadvantaged groups face different barriers that lead to lower attendance rates cervical cancer cervical cancer is usually screened through in vitro examination of the cells of the cervix eg pap smear colposcopy or direct inspection of the cervix after application of dilute acetic acid or testing for hpv the oncogenic virus that is the necessary cause of cervical cancer screening is recommended for women over 21 years initially women between 21 – 29 years old are encouraged to receive pap smear screens every three years and those over 29 every five years for women older than the age of 65 and with no history of cervical cancer or abnormality and with an appropriate precedence of negative pap test results may cease regular screeningstill adherence to recommended screening plans depends on age and may be 
linked to educational level culture psychosocial issues and marital status further emphasizing the importance of addressing these challenges in regards to cancer screening colorectal cancer colorectal cancer is most often screened with the fecal occult blood test fobt variants of this test include guaiacbased fobt gfobt the fecal immunochemical test fit and stool dna sdna testing further testing includes flexible sigmoidoscopy fs total colonoscopy tc or computed tomography ct scans if a tc is nonideal a recommended age at which to begin screening is 50 years however this'
17
  • 'in glaciology starvation occurs when a glacier retreats not because of temperature increases but due to precipitation so low that the ice flow downward into the zone of ablation exceeds the replenishment from snowfall eventually the ice will move so far down that it either calves into the ocean or melts when starvation does occur however it can almost always be reversed by slight changes in precipitation such as are brought about by mountain ranges thus even if glaciers do not cover a lowland due to low precipitation glaciation is almost certain to occur at higher elevations starvation of continental ice sheets is known to have occurred during the period before the last glacial maximum in many areas of canada and the west siberian plain it is thought that after the end of the eemian stage continental ice sheets first formed near or beyond their northern margins that is in the extreme northwest of siberia and the yukon territory northwest territories and nunavut in north america however as an ice sheet advances precipitation at its centre known as a dome tends to become very low because highpressure systems form due to the very cold temperatures above the ice this meant that at the northern edge of the ice sheets there was almost no replenishment of the ice and as it fell to lower elevations even if it did not melt it was not being replaced thus as the continental ice sheets of quaternary glaciations advanced south according to each milankovitch cycle their northern edges were starved and it is believed that starvation caused them to retreat substantially southward by the time the southern limits of maximum glaciation were approached in areas such as the russian far east eastern siberia and beringia glaciers were in effect starved before they could form at all some have also argued that starvation as well as increasing temperatures played a significant role in the decay of continental ice sheets after the lgm the argument is that as fresh water from the melting 
edges of the ice sheet reached the sea the flow of warm water which fed the ice sheets was stopped and deglaciation during the summer accelerated this is considered a highly controversial position starvation of glaciers is believed to have occurred during the little ice age in parts of alaska the himalayas and the karakoram this is because these glaciers do not follow the general global patterns of glacial advance during warm periods and retreat during cold periods which would imply that their size is controlled by the amount of precipitation they receive because temperatures are so low that the increases deemed from global warming for example would fail to melt them to any degree whatsoever pielou ec 1991 after the ice age the return of life to glaciated north america university of chicago press isbn 02'
  • 'observed for tunnel valleysin southern africa a permocarboniferous tunnel valley system has been identified in northern cape province south africa the active formation of tunnel valleys is observed in the present period beneath the antarctic ice during the late ordovician eastern gondwana was covered with ice sheets as a consequence jordan and saudi arabia exhibit regionallyextensive filled tunnel valley structures openpit gold mines near kalgoorlie western australia expose an extensive network of glaciallyeroded valleys filled with tillite and shale cut below the late paleozoic pilbara ice sheet tunnel valleys and related glacial impacts have been identified in russia belarus ukraine poland germany northern france the netherlands belgium great britain finland sweden denmark and norway they have been studied in detail in denmark north germany and north poland where the thick ice sheet of the weichsel and earlier glaciations having flowed down from the mountains of scandinavia began to rise up the northeuropean slope driven by the altitude of the glacial ice accumulation over scandinavia their alignment indicates the direction of ice flow at the time of their formation they are found extensively in the united kingdom with several examples reported from cheshire for example they are also to be found under the north seaexamples of lakes formed in tunnel valleys include the ruppiner see a lake in ostprignitzruppin brandenburg the werbellinsee and the schwielochsee all in germany okanagan lake is a large deep ribbon lake in the okanagan valley of british columbia which formed in a tunnel valley from the okanogan lobe of the cordilleran ice sheet the lake is 135 km 84 mi long between 4 and 5 km 25 and 31 mi wide and has a surface area of 351 km2 136 sq mi northern idaho and montana show evidence of tunnel valley formation under the purcell lobe and the flathead lobe of the cordilleran ice sheet tunnel valleys in southeast alberta form an interconnected anabranching 
network comprising sage creek the lost river and the milk river and generally drain southeast tunnel valleys have been observed in minnesota wisconsin and michigan at the margins of the laurentide ice sheet examples of bedrock tunnel valleys in minnesota include river warren falls and several valleys which lie deep beneath till deposited by the glaciers which created them but can be traced in many places by the chain of lakes in minneapolis and lakes and dry valleys in st paul the kawartha lakes of ontario formed in the late wisconsinan glacial period ice melt from the niagara escarpment flowed through tunnel valleys beneath the ice expanded to form a westtoeast passage between the main'
  • 'branch of the glaciohydrological system of observation installed throughout the tropical andes mountains by ird and partners since 1991 has monitored mass balance on zongo 6000 m asl chacaltaya 5400 m asl and charquini glaciers 5380 m asl a system of stakes has been used with frequent field observations as often as monthly these measurements have been made in concert with energy balance to identify the cause of the rapid retreat and mass balance loss of these tropical glaciers nowadays glaciological stations exist in russia and kazakhstan in russia there are 2 stations glacier djankuat in caucasus is located near the mountain elbrus and glacier aktru in altai mountains in kazakhstan there is glaciological station in glacier tuyuksu in tian shan is located near the city of almaty a recently developed glacier balance model based on monte carlo principals is a promising supplement to both manual field measurements and geodetic methods of measuring mass balance using satellite images the ptaa precipitationtemperatureareaaltitude model requires only daily observations of precipitation and temperature collected at usually lowaltitude weather stations and the areaaltitude distribution of the glacier output are daily snow accumulation bc and ablation ba for each altitude interval which is converted to mass balance by bn bc – ba snow accumulation bc is calculated for each areaaltitude interval based on observed precipitation at one or more lower altitude weather stations located in the same region as the glacier and three coefficients that convert precipitation to snow accumulation it is necessary to use established weather stations that have a long unbroken records so that annual means and other statistics can be determined ablation ba is determined from temperature observed at weather stations near the glacier daily maximum and minimum temperatures are converted to glacier ablation using twelve coefficients the fifteen independent coefficients that are used to 
convert observed temperature and precipitation to ablation and snow accumulation apply a simplex optimizing procedure the simplex automatically and simultaneously calculates values for each coefficient using monte carlo principals that rely on random sampling to obtain numerical results similarly the ptaa model makes repeated calculations of mass balance minutely readjusting the balance for each iteration the ptaa model has been tested for eight glaciers in alaska washington austria and nepal calculated annual balances are compared with measured balances for approximately 60 years for each of five glaciers the wolverine and gulkana in alaska hintereisferner kesselwandferner and vernagtferner in austria it has also been applied to the langtang glacier in nepal results for these tests are shown on the'
12
  • 'symmetry group whose number of symmetries is a jordan – polya number and every jordan – polya number counts the symmetries of some tree primorial the primorial n displaystyle n is the product of prime numbers less than or equal to n displaystyle n this construction gives them some similar divisibility properties to factorials but unlike factorials they are squarefree as with the factorial primes n ± 1 displaystyle npm 1 researchers have studied primorial primes n ± 1 displaystyle npm 1 subfactorial the subfactorial yields the number of derangements of a set of n displaystyle n objects it is sometimes denoted n displaystyle n and equals the closest integer to n e displaystyle ne superfactorial the superfactorial of n displaystyle n is the product of the first n displaystyle n factorials the superfactorials are continuously interpolated by the barnes gfunction'
  • 'u t m n f p m n cos m 2 n 2 t f q m n sin m 2 n 2 t m 2 n 2 displaystyle futmnfpmncossqrt m2n2tfrac fqmnsinsqrt m2n2tsqrt m2n2 applying the inverse fourier transform gives u t x y q t x y p t t x y displaystyle utxyqtxypttxy where p t x y 1 2 π [UNK] x − x ′ 2 y − y ′ 2 t 2 p x ′ y ′ d x ′ d y ′ t 2 − x − x ′ 2 − y − y ′ 2 1 2 displaystyle ptxyfrac 12pi int xx2yy2t2frac pxydxdyleftt2xx2yy2right12 q t x y 1 2 π [UNK] x − x ′ 2 y − y ′ 2 t 2 q x ′ y ′ d x ′ d y ′ t 2 − x − x ′ 2 − y − y ′ 2 1 2 displaystyle qtxyfrac 12pi int xx2yy2t2frac qxydxdyleftt2xx2yy2right12 here pq are arbitrary sufficiently smooth functions of two variables so due their modest time dependence the integrals pq also count as freely chosen functions of two variables as promised one of them is differentiated once before adding to the other to express the general solution of the initial value problem for the two dimensional wave equation in the case of a nonlinear equation it will only rarely be possible to obtain the general solution in closed form however if the equation is quasilinear linear in the highest order derivatives then we can still obtain approximate information similar to the above specifying a member of the solution space will be modulo nonlinear quibbles equivalent to specifying a certain number of functions in a smaller number of variables the number of these functions is the einstein strength of the pde in the simple example above the strength is two although in this case we were able to obtain more precise information'
  • 'properties with infinite exponents may hold and some of them are obtained as consequences of the axiom of determinacy ad for example donald a martin proved that ad implies [UNK] 1 → [UNK] 1 2 [UNK] 1 displaystyle aleph 1rightarrow aleph 12aleph 1 several large cardinal properties can be defined using this notation in particular weakly compact cardinals κ are those that satisfy κ→κ2 αerdos cardinals κ are the smallest that satisfy κ→αω ramsey cardinals κ are those that satisfy κ→κω'
40
  • 'in mathematics particularly topology a topological space x is locally normal if intuitively it looks locally like a normal space more precisely a locally normal space satisfies the property that each point of the space belongs to a neighbourhood of the space that is normal under the subspace topology a topological space x is said to be locally normal if and only if each point x of x has a neighbourhood that is normal under the subspace topologynote that not every neighbourhood of x has to be normal but at least one neighbourhood of x has to be normal under the subspace topology note however that if a space were called locally normal if and only if each point of the space belonged to a subset of the space that was normal under the subspace topology then every topological space would be locally normal this is because the singleton x is vacuously normal and contains x therefore the definition is more restrictive every locally normal t1 space is locally regular and locally hausdorff a locally compact hausdorff space is always locally normal a normal space is always locally normal a t1 space need not be locally normal as the set of all real numbers endowed with the cofinite topology shows collectionwise normal space – property of topological spaces stronger than normality homeomorphism – mapping which preserves all topological properties of a given space locally compact space – type of topological space in mathematics locally hausdorff space locally metrizable space – topological space that is homeomorphic to a metric spacepages displaying short descriptions of redirect targets monotonically normal space – property of topological spaces stronger than normality normal space – topological space in which every pair of disjoint closed sets has disjoint open neighborhoodspages displaying wikidata descriptions as a fallback paranormal space cech eduard 1937 on bicompact spaces annals of mathematics 38 4 823 – 844 doi1023071968839 issn 0003486x jstor 1968839'
  • 'δn lies in the affine hyperplane obtained by removing the restriction ti ≥ 0 in the above definition the n 1 vertices of the standard nsimplex are the points ei ∈ rn1 where e0 1 0 0 0 e1 0 1 0 0 [UNK] en 0 0 0 1a standard simplex is an example of a 01polytope with all coordinates as 0 or 1 it can also be seen one facet of a regular n1orthoplex there is a canonical map from the standard nsimplex to an arbitrary nsimplex with vertices v0 vn given by t 0 … t n ↦ [UNK] i 0 n t i v i displaystyle t0ldots tnmapsto sum i0ntivi the coefficients ti are called the barycentric coordinates of a point in the nsimplex such a general simplex is often called an affine nsimplex to emphasize that the canonical map is an affine transformation it is also sometimes called an oriented affine nsimplex to emphasize that the canonical map may be orientation preserving or reversing more generally there is a canonical map from the standard n − 1 displaystyle n1 simplex with n vertices onto any polytope with n vertices given by the same equation modifying indexing t 1 … t n ↦ [UNK] i 1 n t i v i displaystyle t1ldots tnmapsto sum i1ntivi these are known as generalized barycentric coordinates and express every polytope as the image of a simplex δ n − 1 [UNK] p displaystyle delta n1twoheadrightarrow p a commonly used function from rn to the interior of the standard n − 1 displaystyle n1 simplex is the softmax function or normalized exponential function this generalizes the standard logistic function δ0 is the point 1 in r1 δ1 is the line segment joining 1 0 and 0 1 in r2 δ2 is the equilateral triangle with vertices 1 0 0 0 1 0 and 0 0 1 in r3 δ3 is the regular tetrahedron with vertices 1 0 0 0 0 1 0 0 0 0 1 0 and 0 0 0 1 in r4 δ4 is the regular 5cell with vertices 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 and 0 0 0 0 1 in r5 an alternative coordinate system is given by taking the indefinite sum s 0 0 s 1 s 0 t 0 t 0 s 2 s 1 t 1 t 0 t'
  • 'for details and the link to singular homology see topological invariance via triangulation one can assign a chain complex to topological spaces that arise from its simplicial complex and compute its simplicial homology compact spaces always admit finite triangulations and therefore their homology groups are finitely generated and only finitely many of them do not vanish other data as betti numbers or euler characteristic can be derived from homology betti numbers and eulercharacteristics let s displaystyle mathcal s be a finite simplicial complex the n displaystyle n th betti number b n s displaystyle bnmathcal s is defined to be the rank of the n displaystyle n th simplicial homology group of the spaces these numbers encode geometric properties of the spaces the betti number b 0 s displaystyle b0mathcal s for instance represents the number of connected components for a triangulated closed orientable surfaces f displaystyle f b 1 f 2 g displaystyle b1f2g holds where g displaystyle g denotes the genus of the surface therefore its first betti number represents the doubled number of handles of the surfacewith the comments above for compact spaces all betti numbers are finite and almost all are zero therefore one can form their alternating sum [UNK] k 0 ∞ − 1 k b k l displaystyle sum k0infty 1kbkmathcal l which is called the euler charakteristik of the complex a catchy topological invariant to use these invariants for the classification of topological spaces up to homeomorphism one needs invariance of the characteristics regarding homeomorphism a famous approach to the question was at the beginning of the 20th century the attempt to show that any two triangulations of the same topological space admit a common subdivision this assumption is known as hauptvermutung german main assumption let l ⊂ r n displaystyle mathcal lsubset mathbb r n be a simplicial complex a complex l ′ ⊂ r n displaystyle mathcal lsubset mathbb r n is said to be a subdivision of l displaystyle 
mathcal l iff every simplex of l ′ displaystyle mathcal l is contained in a simplex of l displaystyle mathcal l and every simplex of l displaystyle mathcal l is a finite union of simplices in l ′ displaystyle mathcal l those conditions ensure that subdivisions does not change the simplicial complex as a set or'
37
  • 'for platonic or aristotelian forms in this first sense of form almost all logic is informal notformal understanding informal logic this way would be much too broad to be useful by form2 barth and krabbe mean the form of sentences and statements as these are understood in modern systems of logic here validity is the focus if the premises are true the conclusion must then also be true now validity has to do with the logical form of the statement that makes up the argument in this sense of formal most modern and contemporary logic is formal that is such logics canonize the notion of logical form and the notion of validity plays the central normative role in this second sense of form informal logic is notformal because it abandons the notion of logical form as the key to understanding the structure of arguments and likewise retires validity as normative for the purposes of the evaluation of argument it seems to many that validity is too stringent a requirement that there are good arguments in which the conclusion is supported by the premises even though it does not follow necessarily from them as validity requires an argument in which the conclusion is thought to be beyond reasonable doubt given the premises is sufficient in law to cause a person to be sentenced to death even though it does not meet the standard of logical validity this type of argument based on accumulation of evidence rather than pure deduction is called a conductive argument by form3 barth and krabbe mean to refer to procedures which are somehow regulated or regimented which take place according to some set of rules barth and krabbe say that we do not defend formality3 of all kinds and under all circumstances rather we defend the thesis that verbal dialectics must have a certain form ie must proceed according to certain rules in order that one can speak of the discussion as being won or lost 19 in this third sense of form informal logic can be formal for there is nothing in the informal logic 
enterprise that stands opposed to the idea that argumentative discourse should be subject to norms ie subject to rules criteria standards or procedures informal logic does present standards for the evaluation of argument procedures for detecting missing premises etc johnson and blair 2000 noticed a limitation of their own definition particularly with respect to everyday discourse which could indicate that it does not seek to understand specialized domainspecific arguments made in natural languages consequently they have argued that the crucial divide is between arguments made in formal languages and those made in natural languages fisher and scriven 1997 proposed a more encompassing definition seeing informal logic as the discipline which studies the practice of critical thinking and provides its intellectual spine by critical thinking'
  • 'in logic the semantics of logic or formal semantics is the study of the semantics or interpretations of formal and idealizations of natural languages usually trying to capture the pretheoretic notion of entailment the truth conditions of various sentences we may encounter in arguments will depend upon their meaning and so logicians cannot completely avoid the need to provide some treatment of the meaning of these sentences the semantics of logic refers to the approaches that logicians have introduced to understand and determine that part of meaning in which they are interested the logician traditionally is not interested in the sentence as uttered but in the proposition an idealised sentence suitable for logical manipulationuntil the advent of modern logic aristotles organon especially de interpretatione provided the basis for understanding the significance of logic the introduction of quantification needed to solve the problem of multiple generality rendered impossible the kind of subject – predicate analysis that governed aristotles account although there is a renewed interest in term logic attempting to find calculi in the spirit of aristotles syllogisms but with the generality of modern logics based on the quantifier the main modern approaches to semantics for formal languages are the following the archetype of modeltheoretic semantics is alfred tarskis semantic theory of truth based on his tschema and is one of the founding concepts of model theory this is the most widespread approach and is based on the idea that the meaning of the various parts of the propositions are given by the possible ways we can give a recursively specified group of interpretation functions from them to some predefined mathematical domains an interpretation of firstorder predicate logic is given by a mapping from terms to a universe of individuals and a mapping from propositions to the truth values true and false modeltheoretic semantics provides the foundations for an approach to 
the theory of meaning known as truthconditional semantics which was pioneered by donald davidson kripke semantics introduces innovations but is broadly in the tarskian mold prooftheoretic semantics associates the meaning of propositions with the roles that they can play in inferences gerhard gentzen dag prawitz and michael dummett are generally seen as the founders of this approach it is heavily influenced by ludwig wittgensteins later philosophy especially his aphorism meaning is use truthvalue semantics also commonly referred to as substitutional quantification was advocated by ruth barcan marcus for modal logics in the early 1960s and later championed by j michael dunn nuel belnap and hugues leblanc for standard firstorder'
  • 'the imperial animal lionel tiger and robin fox and the uk association of chief police officers spokesman on knife crime alfred hitchcockas used in new scientist the term nominative determinism only applies to work in contributions to other newspapers new scientist writers have stuck to this definition with the exception of editor roger highfield in a column in the evening standard in which he included key attributes of lifeprior to 1994 other terms for the suspected psychological effect were used sporadically onomastic determinism was used as early as 1970 by roberta frank german psychologist wilhelm stekel spoke of die verpflichtung des namens the obligation of the name in 1911 outside of science cognomen syndrome was used by playwright tom stoppard in his 1972 play jumpers in ancient rome the predictive power of a persons name was captured by the latin proverb nomen est omen meaning the name is a sign this saying is still in use today in english and other languages such as french german italian dutch and sloveniannew scientist coined the term nominative contradeterminism for people who move away from their name creating a contradiction between name and occupation examples include andrew waterhouse a professor of wine wouldbe doctor thomas edward kill who subsequently changed his name to jirgensohn and the archbishop of manila cardinal sin the synonym inaptronym is also sometimes used the first scientists to discuss the concept that names had a determining effect were early 20thcentury german psychologists wilhelm stekel spoke of the obligation of the name in the context of compulsive behaviour and choice of occupation karl abraham wrote that the determining power of names might be partially caused by inheriting a trait from an ancestor who was given a fitting name he made the further inference that families with fitting names might then try to live up to their names in some way in 1952 carl jung referred to stekels work in his theory of synchronicity events 
without causal relationship that yet seem to be meaningfully related we find ourselves in something of a quandary when it comes to making up our minds about the phenomenon which stekel calls the compulsion of the name what he means by this is the sometimes quite gross coincidence between a mans name and his peculiarities or profession for instance herr feist mr stout is the food minister herr rosstauscher mr horsetrader is a lawyer herr kalberer mr calver is an obstetrician are these the whimsicalities of chance or the suggestive effects of the name as stekel seems to suggest or are they meaningful coincidences jung listed striking'
29
  • 'optical fibers near the threshold of soliton supercontinuum generation and characterized the initial conditions for generating rogue waves in any medium research in optics has pointed out the role played by a nonlinear structure called peregrine soliton that may explain those waves that appear and disappear without leaving a trace many of these encounters are reported only in the media and are not examples of openocean rogue waves often in popular culture an endangering huge wave is loosely denoted as a rogue wave while the case has not been and most often cannot be established that the reported event is a rogue wave in the scientific sense – ie of a very different nature in characteristics as the surrounding waves in that sea state and with a very low probability of occurrence according to a gaussian process description as valid for linear wave theory this section lists a limited selection of notable incidents eagle island lighthouse 1861 – water broke the glass of the structures east tower and flooded it implying a wave that surmounted the 40 m 130 ft cliff and overwhelmed the 26 m 85 ft tower flannan isles lighthouse 1900 – three lighthouse keepers vanished after a storm that resulted in wavedamaged equipment being found 34 m 112 ft above sea level ss kronprinz wilhelm september 18 1901 – the most modern german ocean liner of its time winner of the blue riband was damaged on its maiden voyage from cherbourg to new york by a huge wave the wave struck the ship headon rms lusitania 1910 – on the night of 10 january 1910 a 23 m 75 ft wave struck the ship over the bow damaging the forecastle deck and smashing the bridge windows voyage of the james caird 1916 – sir ernest shackleton encountered a wave he termed gigantic while piloting a lifeboat from elephant island to south georgia rms homeric 1924 – hit by a 24 m 80 ft wave while sailing through a hurricane off the east coast of the united states injuring seven people smashing numerous windows and portholes 
carrying away one of the lifeboats and snapping chairs and other fittings from their fastenings uss ramapo 1933 – triangulated at 34 m 112 ft rms queen mary 1942 – broadsided by a 28 m 92 ft wave and listed briefly about 52° before slowly righting ss michelangelo 1966 – hole torn in superstructure heavy glass smashed 24 m 80 ft above the waterline and three deaths ss edmund fitzgerald 1975 – lost on lake superior a coast guard report blamed water entry to the hatches which gradually filled the hold or errors in navigation or charting causing damage from running onto shoals however another nearby ship the ss arthur m anderson was hit'
  • 'proportional to the eddys depth in the surface ξ ′ k z displaystyle xi kz where z displaystyle z is the depth and k displaystyle k is known as the von karman constant thus the gradient can be integrated to solve for u [UNK] displaystyle overline u u [UNK] u ∗ k ln z z o displaystyle overline ufrac ukln frac zzo so we see that the mean flow in the surface layer has a logarithmic relationship with depth in nonneutral conditions the mixing length is also affected by buoyancy forces and moninobukhov similarity theory is required to describe the horizontalwind profile the surface layer is studied in oceanography as both the wind stress and action of surface waves can cause turbulent mixing necessary for the formation of a surface layer the worlds oceans are made up of many different water masses each have particular temperature and salinity characteristics as a result of the location in which they formed once formed at a particular source a water mass will travel some distance via largescale ocean circulation typically the flow of water in the ocean is described as turbulent ie it doesnt follow straight lines water masses can travel across the ocean as turbulent eddies or parcels of water usually along constant density isopycnic surfaces where the expenditure of energy is smallest when these turbulent eddies of different water masses interact they will mix together with enough mixing some stable equilibrium is reached and a mixed layer is formed turbulent eddies can also be produced from wind stress by the atmosphere on the ocean this kind of interaction and mixing through buoyancy at the surface of the ocean also plays a role in the formation of a surface mixed layer the logarithmic flow profile has long been observed in the ocean but recent highly sensitive measurements reveal a sublayer within the surface layer in which turbulent eddies are enhanced by the action of surface waves it is becoming clear that the surface layer of the ocean is only poorly modeled as 
being up against the wall of the airsea interaction observations of turbulence in lake ontario reveal under wavebreaking conditions the traditional theory significantly underestimates the production of turbulent kinetic energy within the surface layer the depth of the surface mixed layer is affected by solar insolation and thus is related to the diurnal cycle after nighttime convection over the ocean the turbulent surface layer is found to completely decay and restratify the decay is caused by the decrease in solar insolation divergence of turbulent flux and relaxation of lateral gradients during the nighttime the surface ocean cools because the atmospheric circulation is reduced due to the change in'
  • '135 billion cubic kilometers 320 million cu mi with an average depth of nearly 3700 meters 12100 ft a lake is an area filled with water localized in a basin that is surrounded by land apart from any river or other outlet that serves to feed or drain the lake lakes lie on land and are not part of the ocean and therefore are distinct from lagoons and are also larger and deeper than ponds though there are no official or scientific definitions lakes can be contrasted with rivers or streams which are usually flowing most lakes are fed and drained by rivers and streams natural lakes are generally found in mountainous areas rift zones and areas with ongoing glaciation other lakes are found in endorheic basins or along the courses of mature rivers in some parts of the world there are many lakes because of chaotic drainage patterns left over from the last ice age all lakes are temporary over geologic time scales as they will slowly fill in with sediments or spill out of the basin containing them many lakes are artificial and are constructed for industrial or agricultural use for hydroelectric power generation or domestic water supply or for aesthetic recreational purposes or other activities a pond is an area filled with water either natural or artificial that is smaller than a lake it may arise naturally in floodplains as part of a river system or be a somewhat isolated depression such as a kettle vernal pool or prairie pothole it may contain shallow water with marsh and aquatic plants and animals ponds are frequently manmade or expanded beyond their original depth and bounds among their many uses ponds provide water for agriculture and livestock aid in habitat restoration serve as fish hatcheries are components of landscape architecture may store thermal energy as solar ponds and treat wastewater as treatment ponds ponds may be fresh saltwater or brackish a river is a natural flowing watercourse usually freshwater flowing under the influence of gravity on ocean lake 
another river or into the ground small rivers can be referred to using names such as stream creek brook rivulet and rill there are no official definitions for the generic term river as applied to geographic features rivers are part of the hydrological cycle water generally collects in a river from precipitation in a drainage basin from surface runoff and other sources such as groundwater recharge springs and the release of stored water in natural ice and snow potamology is the scientific study of rivers while limnology is the study of inland waters in general an aquifer is an underground layer of waterbearing permeable rock rock fractures or unconsolidated materials gravel sand or silt the study of water flow in aquifers and the characterization of aquifers is called hydro'
26
  • 'their early formation have met with very limited success and the damage inflicted during the run in period is one factor preventing this technique being used for practical applications as oxide generated is effectively the result of the tribochemical decay of one or both of the metallic or ceramic surfaces in contact the study of compacted oxide layer glazes is sometimes referred to as part of the more general field of high temperature corrosion the generation of oxides during high temperature sliding wear does not automatically lead to the production of a compacted oxide layer glaze under certain conditions potentially due to nonideal conditions of sliding speed load temperature or oxide chemistry composition the oxide may not sinter together and instead the loose oxide debris may assist or enhance the removal of material by abrasive wear a change in conditions may also see a switch from the formation of a loose abrasive oxide to the formation of wear protective compacted oxide glaze layers and vice versa or even the reappearance of adhesive or severe wear due to the complexities of the conditions controlling the types of wear observed there have been a number of attempts to map types of wear with reference to sliding conditions in order to help better understand and predict them due to the potential for wear protection at high temperatures beyond which conventional lubricants can be used possible uses have been speculated in applications such as car engines power generation and even aerospace where there is an increasing demand for ever higher efficiency and thus operating temperature compacted oxide layers can form due to sliding at low temperatures and offer some wear protection however in the absence of heat as a driving force either due to frictional heating or higher ambient temperature they cannot sinter together to form more protective glaze layers tribology wear'
  • '##dt if n nuclei form in the time increment dt and the grains are assumed to be spherical then the volume fraction will be f 4 3 π n [UNK] g 3 [UNK] 0 t t − t 0 3 d t π 3 n [UNK] g 3 t 4 displaystyle ffrac 43pi dot ng3int 0ttt03dtfrac pi 3dot ng3t4 this equation is valid in the early stages of recrystallization when f1 and the growing grains are not impinging on each other once the grains come into contact the rate of growth slows and is related to the fraction of untransformed material 1f by the johnsonmehl equation f 1 − exp − π 3 n [UNK] g 3 t 4 displaystyle f1exp leftfrac pi 3dot ng3t4right while this equation provides a better description of the process it still assumes that the grains are spherical the nucleation and growth rates are constant the nuclei are randomly distributed and the nucleation time t0 is small in practice few of these are actually valid and alternate models need to be used it is generally acknowledged that any useful model must not only account for the initial condition of the material but also the constantly changing relationship between the growing grains the deformed matrix and any second phases or other microstructural factors the situation is further complicated in dynamic systems where deformation and recrystallization occur simultaneously as a result it has generally proven impossible to produce an accurate predictive model for industrial processes without resorting to extensive empirical testing since this may require the use of industrial equipment that has not actually been built there are clear difficulties with this approach the annealing temperature has a dramatic influence on the rate of recrystallization which is reflected in the above equations however for a given temperature there are several additional factors that will influence the rate the rate of recrystallization is heavily influenced by the amount of deformation and to a lesser extent the manner in which it is applied heavily deformed materials will 
recrystallize more rapidly than those deformed to a lesser extent indeed below a certain deformation recrystallization may never occur deformation at higher temperatures will allow concurrent recovery and so such materials will recrystallize more slowly than those deformed at room temperature eg contrast hot and cold rolling in certain cases deformation may be unusually homogeneous or occur only on specific crystallographic planes the absence of orientation gradients and other heterogeneities may prevent the formation of viable nuclei experiments in the 1970s found that molybden'
  • 'by his father isaac wilkinson he patented such cylinders in 1736 to replace the leather bellows which wore out quickly isaac was granted a second patent also for blowing cylinders in 1757 the steam engine and cast iron blowing cylinder led to a large increase in british iron production in the late 18th century hot blast hot blast was the single most important advance in fuel efficiency of the blast furnace and was one of the most important technologies developed during the industrial revolution hot blast was patented by james beaumont neilson at wilsontown ironworks in scotland in 1828 within a few years of the introduction hot blast was developed to the point where fuel consumption was cut by onethird using coke or twothirds using coal while furnace capacity was also significantly increased within a few decades the practice was to have a stove as large as the furnace next to it into which the waste gas containing co from the furnace was directed and burnt the resultant heat was used to preheat the air blown into the furnacehot blast enabled the use of raw anthracite coal which was difficult to light in the blast furnace anthracite was first tried successfully by george crane at ynyscedwyn ironworks in south wales in 1837 it was taken up in america by the lehigh crane iron company at catasauqua pennsylvania in 1839 anthracite use declined when very high capacity blast furnaces requiring coke were built in the 1870s the blast furnace remains an important part of modern iron production modern furnaces are highly efficient including cowper stoves to preheat the blast air and employ recovery systems to extract the heat from the hot gases exiting the furnace competition in industry drives higher production rates the largest blast furnace in the world is in south korea with a volume around 6000 m3 210000 cu ft it can produce around 5650000 tonnes 5560000 lt of iron per yearthis is a great increase from the typical 18thcentury furnaces which averaged about 360 tonnes 
350 long tons 400 short tons per year variations of the blast furnace such as the swedish electric blast furnace have been developed in countries which have no native coal resources according to global energy monitor the blast furnace is likely to become obsolete to meet climate change objectives of reducing carbon dioxide emission but bhp disagrees an alternative process involving direct reduced iron is likely to succeed it but this also needs to use a blast furnace to melt the iron and remove the gangue impurities unless the ore is very high quality oxygen blast furnace the oxygen blast furnace obf process has been extensively studied theoretically because of the potentials of promising energy conservation and co2 emission reduction this type may be the most'
15
  • 'this glossary of cell and molecular biology is a list of definitions of terms and concepts commonly used in the study of cell biology molecular biology and related disciplines including genetics microbiology and biochemistry it is split across two articles this page glossary of genetics 0 – l lists terms beginning with numbers and with the letters a through l glossary of genetics m – z lists terms beginning with the letters m through zthe glossary is intended as introductory material for novices for more specific and technical detail see the article corresponding to each term overlapping and related glossaries include glossary of evolutionary biology glossary of virology and glossary of chemistry 3 untranslated region 3utr also threeprime untranslated region 3 nontranslated region 3ntr and trailer sequence 3end also threeprime end one of two ends of a single linear strand of dna or rna specifically the end at which the chain of nucleotides terminates at the third carbon atom in the furanose ring of deoxyribose or ribose ie the terminus at which the 3 carbon is not attached to another nucleotide via a phosphodiester bond in vivo the 3 carbon is often still bonded to a hydroxyl group by convention sequences and structures positioned nearer to the 3end relative to others are referred to as downstream contrast 5end 5 cap also fiveprime cap a specially altered nucleotide attached to the 5end of some primary rna transcripts as part of the set of posttranscriptional modifications which convert raw transcripts into mature rna products the precise structure of the 5 cap varies widely by organism in eukaryotes the most basic cap consists of a methylated guanine nucleoside bonded to the triphosphate group that terminates the 5end of an rna sequence among other functions capping helps to regulate the export of mature rnas from the nucleus prevent their degradation by exonucleases and promote translation in the cytoplasm mature mrnas can also be decapped 5 untranslated 
region 5utr also fiveprime untranslated region 5 nontranslated region 5ntr and leader sequence 5bromodeoxyuridine see bromodeoxyuridine 5end also fiveprime end one of two ends of a single linear strand of dna or rna specifically the end at which the chain of nucleotides terminates at the fifth carbon atom in the furanose ring of deoxyribose or ribose ie the terminus at which the 5'
  • 'hypoxia inducible lipid dropletassociated hilpda also known as c7orf68 and hig2 is a protein that in humans is encoded by the hilpda gene hilpda was originally discovered in a screen to identify new genes that are activated by low oxygen pressure hypoxia in human cervical cancer cells the protein consists of 63 amino acids in humans and 64 amino acids in mice hilpda is produced by numerous cells and tissues including cancer cells immune cells fat cells and liver cells low oxygen pressure hypoxia fatty acids and betaadrenergic agonists stimulate hilpda expression nearly all cells have the ability to store excess energy as fat in special structures in the cell called lipid droplets the formation and breakdown of lipid droplets is controlled by various enzymes and lipid dropletassociated proteins one of the lipid dropletassociated proteins is hilpda hilpda acts as a regulatory signal that blocks the breakdown of the fat stores in cells when the external fat supply is high or the availability of oxygen is low in cells hilpda is located in the endoplasmic reticulum and around lipid droplets gain and lossoffunction studies have shown that hilpda promotes fat storage in cancer cells macrophages and liver cells this effect is at least partly achieved by suppressing triglyceride breakdown by inhibiting the enzyme adipose triglyceride lipase the binding of hilpda to adipose triglyceride lipase occurs via the conserved nterminal portion of hilpda which is similar to a region in the g0s2 protein the deficiency of hilpda in mice that are prone to develop atherosclerosis led to a reduction in atherosclerotic plaques suggesting that hilpda may be a potential therapeutic target for atherosclerosis in addition hilpda may be targeted for the treatment of nonalcoholic fatty liver disease'
  • 'in fact be quite small reducing the useful number of amino acids from 20 to a much lower number for example in an extremely simplified view all amino acids can be sorted into two classes hydrophobicpolar by hydrophobicity and still allow many common structures to show up early life on earth may have only four or five types of amino acids to work with and researches have shown that functional proteins can be created from wildtype ones by a similar alphabetreduction process reduced alphabets are also useful in bioinformatics as they provide an easy way of analyzing protein similarity a major focus in the field of protein engineering is on creating dna libraries that sample regions of sequence space often with the goal of finding mutants of proteins with enhanced functions compared to the wild type these libraries are created either by using a wild type sequence as a template and applying one or more mutagenesis techniques to make different variants of it or by creating proteins from scratch using artificial gene synthesis these libraries are then screened or selected and ones with improved phenotypes are used for the next round of mutagenesis protein sequence space directed evolution protein engineering highdimensional space'
2
  • 'equality means that the other member of the equality must also be defined examples of nontotal associative operations are multiplication of matrices of arbitrary size and function composition let ∗ displaystyle be a possibly partial associative operation on a set x an identity element or simply an identity is an element e such that x ∗ e x and e ∗ y y displaystyle xexquad textandquad eyy for every x and y for which the lefthand sides of the equalities are defined if e and f are two identity elements such that e ∗ f displaystyle ef is defined then e f displaystyle ef this results immediately from the definition by e e ∗ f f displaystyle eeff it follows that a total operation has at most one identity element and if e and f are different identities then e ∗ f displaystyle ef is not defined for example in the case of matrix multiplication there is one n×n identity matrix for every positive integer n and two identity matrices of different size cannot be multiplied together similarly identity functions are identity elements for function composition and the composition of the identity functions of two different sets are not defined if x ∗ y e displaystyle xye where e is an identity element one says that x is a left inverse of y and y is a right inverse of x left and right inverses do not always exist even when the operation is total and associative for example addition is a total associative operation on nonnegative integers which has 0 as additive identity and 0 is the only element that has an additive inverse this lack of inverses is the main motivation for extending the natural numbers into the integers an element can have several left inverses and several right inverses even when the operation is total and associative for example consider the functions from the integers to the integers the doubling function x ↦ 2 x displaystyle xmapsto 2x has infinitely many left inverses under function composition which are the functions that divide by two the even numbers and 
give any value to odd numbers similarly every function that maps n to either 2 n displaystyle 2n or 2 n 1 displaystyle 2n1 is a right inverse of the function n ↦ [UNK] n 2 [UNK] textstyle nmapsto leftlfloor frac n2rightrfloor the floor function that maps n to n 2 textstyle frac n2 or n − 1 2 textstyle frac n12 depending whether n is even or odd more generally a function has a left inverse for function composition if and only if it'
  • '##perty 3 is redundant it follows by applying 2 to 1 in practical terms it is often advantageous to be able to recognize the canonical forms there is also a practical algorithmic question to consider how to pass from a given object s in s to its canonical form s canonical forms are generally used to make operating with equivalence classes more effective for example in modular arithmetic the canonical form for a residue class is usually taken as the least nonnegative integer in it operations on classes are carried out by combining these representatives and then reducing the result to its least nonnegative residue the uniqueness requirement is sometimes relaxed allowing the forms to be unique up to some finer equivalence relation such as allowing for reordering of terms if there is no natural ordering on terms a canonical form may simply be a convention or a deep theorem for example polynomials are conventionally written with the terms in descending powers it is more usual to write x2 x 30 than x 30 x2 although the two forms define the same polynomial by contrast the existence of jordan canonical form for a matrix is a deep theorem according to oed and lsj the term canonical stems from the ancient greek word kanonikos κανονικος regular according to rule from kanon κανων rod rule the sense of norm standard or archetype has been used in many disciplines mathematical usage is attested in a 1738 letter from logan the german term kanonische form is attested in a 1846 paper by eisenstein later the same year richelot uses the term normalform in a paper and in 1851 sylvester writes i now proceed to the mode of reducing algebraical functions to their simplest and most symmetrical or as my admirable friend m hermite well proposes to call them their canonical forms in the same period usage is attested by hesse normalform hermite forme canonique borchardt forme canonique and cayley canonical formin 1865 the dictionary of science literature and art defines canonical form as 
in mathematics denotes a form usually the simplest or most symmetrical to which without loss of generality all functions of the same class can be reduced note in this section up to some equivalence relation e means that the canonical form is not unique in general but that if one object has two different canonical forms they are eequivalent standard form is used by many mathematicians and scientists to write extremely large numbers in a more concise and understandable way the most prominent of which being the scientific notation canonical representation of a positive integer canonical form of a continued fraction in analytic geometry the equation of a line ax by c with'
  • 'x a n − 1 x a n [UNK] displaystyle pxa0xbigg a1xbig a2xbig a3cdots xan1xancdots big big bigg thus by iteratively substituting the b i displaystyle bi into the expression p x 0 a 0 x 0 a 1 x 0 a 2 [UNK] x 0 a n − 1 b n x 0 [UNK] a 0 x 0 a 1 x 0 a 2 [UNK] x 0 b n − 1 [UNK] a 0 x 0 b 1 b 0 displaystyle beginalignedpx0a0x0big a1x0big a2cdots x0an1bnx0cdots big big a0x0big a1x0big a2cdots x0bn1big big vdots a0x0b1b0endaligned now it can be proven that 2 p x b 1 b 2 x b 3 x 2 b 4 x 3 [UNK] b n − 1 x n − 2 b n x n − 1 x − x 0 b 0 displaystyle 2quad quad quad pxb1b2xb3x2b4x3cdots bn1xn2bnxn1xx0b0 this expression constitutes horners practical application as it offers a very quick way of determining the outcome of p x x − x 0 displaystyle pxxx0 with b 0 displaystyle b0 which is equal to p x 0 displaystyle px0 being the divisions remainder as is demonstrated by the examples below if x 0 displaystyle x0 is a root of p x displaystyle px then b 0 0 displaystyle b00 meaning the remainder is 0 displaystyle 0 which means you can factor p x displaystyle px as x − x 0 displaystyle xx0 to finding the consecutive b displaystyle b values you start with determining b n displaystyle bn which is simply equal to a n displaystyle an then you then work recursively using the formula b n − 1 a n − 1 b n x 0 displaystyle bn1an1bnx0 till you arrive at b 0 displaystyle b0 evaluate f x 2 x 3 − 6 x 2 2 x − 1 displaystyle fx2x36x22x1 for x 3 displaystyle x3 we use synthetic division as follows x0│ x3 x2 x1 x0 3 │ 2 −6 2 −1 │ 6 0 6 [UNK] 2 0 2 5 the entries in the third row are the sum'
21
  • 'what is referred to as cation exchange cationexchange capacity is the amount of exchangeable cations per unit weight of dry soil and is expressed in terms of milliequivalents of positively charged ions per 100 grams of soil or centimoles of positive charge per kilogram of soil cmolckg similarly positively charged sites on colloids can attract and release anions in the soil giving the soil anion exchange capacity the cation exchange that takes place between colloids and soil water buffers moderates soil ph alters soil structure and purifies percolating water by adsorbing cations of all types both useful and harmful the negative or positive charges on colloid particles make them able to hold cations or anions respectively to their surfaces the charges result from four sources isomorphous substitution occurs in clay during its formation when lowervalence cations substitute for highervalence cations in the crystal structure substitutions in the outermost layers are more effective than for the innermost layers as the electric charge strength drops off as the square of the distance the net result is oxygen atoms with net negative charge and the ability to attract cations edgeofclay oxygen atoms are not in balance ionically as the tetrahedral and octahedral structures are incomplete hydroxyls may substitute for oxygens of the silica layers a process called hydroxylation when the hydrogens of the clay hydroxyls are ionised into solution they leave the oxygen with a negative charge anionic clays hydrogens of humus hydroxyl groups may also be ionised into solution leaving similarly to clay an oxygen with a negative chargecations held to the negatively charged colloids resist being washed downward by water and are out of reach of plant roots thereby preserving the soil fertility in areas of moderate rainfall and low temperaturesthere is a hierarchy in the process of cation exchange on colloids as cations differ in the strength of adsorption by the colloid and hence their 
ability to replace one another ion exchange if present in equal amounts in the soil water solution al3 replaces h replaces ca2 replaces mg2 replaces k same as nh4 replaces naif one cation is added in large amounts it may replace the others by the sheer force of its numbers this is called law of mass action this is largely what occurs with the addition of cationic fertilisers potash limeas the soil solution becomes more acidic low ph meaning an abundance of h the other cations more weakly bound to colloids'
  • 'one trade body known as the arboricultural association although the institute of chartered foresters offers a route for professional recognition and chartered arboriculturist status the qualifications associated with the industry range from vocational to doctorate arboriculture is a comparatively young industry'
  • 'no longer present if the iodine is applied and takes 2 – 3 seconds to turn dark blue or black then the process of ripening has begun but is not yet complete if the iodine becomes black immediately then most of the starch is still present at high concentrations in the sample and hence the fruit hasnt fully started to ripen climacteric fruits undergo a number of changes during fruit ripening the major changes include fruit softening sweetening decreased bitterness and colour change these changes begin in an inner part of the fruit the locule which is the gellike tissue surrounding the seeds ripeningrelated changes initiate in this region once seeds are viable enough for the process to continue at which point ripeningrelated changes occur in the next successive tissue of the fruit called the pericarp as this ripening process occurs working its way from the inside towards outer most tissue of the fruit the observable changes of softening tissue and changes in color and carotenoid content occur specifically this process activates ethylene production and the expression of ethyleneresponse genes affiliated with the phenotypic changes seen during ripening colour change is the result of pigments which were always present in the fruit becoming visible when chlorophyll is degraded however additional pigments are also produced by the fruit as it ripens in fruit the cell walls are mainly composed of polysaccharides including pectin during ripening a lot of the pectin is converted from a waterinsoluble form to a soluble one by certain degrading enzymes these enzymes include polygalacturonase this means that the fruit will become less firm as the structure of the fruit is degradedenzymatic breakdown and hydrolysis of storage polysaccharides occurs during ripening the main storage polysaccharides include starch these are broken down into shorter watersoluble molecules such as fructose glucose and sucrose during fruit ripening gluconeogenesis also increasesacids are broken 
down in ripening fruits and this contributes to the sweeter rather than sharp tastes associated with unripe fruits in some fruits such as guava there is a steady decrease in vitamin c as the fruit ripens this is mainly as a result of the general decrease in acid content that occurs when a fruit ripens different fruits have different ripening stages in tomatoes the ripening stages are green when the surface of the tomato is completely green breaker when less than 11 of the surface is red turning when less than 31 of the surface is red but'
39
  • 'forced convection is a mechanism or type of transport in which fluid motion is generated by an external source like a pump fan suction device etc alongside natural convection thermal radiation and thermal conduction it is one of the methods of heat transfer and allows significant amounts of heat energy to be transported very efficiently this mechanism is found very commonly in everyday life including central heating air conditioning steam turbines and in many other machines forced convection is often encountered by engineers designing or analyzing heat exchangers pipe flow and flow over a plate at a different temperature than the stream the case of a shuttle wing during reentry for example in any forced convection situation some amount of natural convection is always present whenever there are gravitational forces present ie unless the system is in an inertial frame or freefall when the natural convection is not negligible such flows are typically referred to as mixed convection when analyzing potentially mixed convection a parameter called the archimedes number ar parametrizes the relative strength of free and forced convection the archimedes number is the ratio of grashof number and the square of reynolds number which represents the ratio of buoyancy force and inertia force and which stands in for the contribution of natural convection when ar [UNK] 1 natural convection dominates and when ar [UNK] 1 forced convection dominates a r g r r e 2 displaystyle arfrac grre2 when natural convection isnt a significant factor mathematical analysis with forced convection theories typically yields accurate results the parameter of importance in forced convection is the peclet number which is the ratio of advection movement by currents and diffusion movement from high to low concentrations of heat p e u l α displaystyle pefrac ulalpha when the peclet number is much greater than unity 1 advection dominates diffusion similarly much smaller ratios indicate a higher rate of 
diffusion relative to advection convective heat transfer combined forced and natural convection'
  • 'displaystyle bar c for discussion of why the thermal energy storage abilities of pure substances vary see factors that affect specific heat capacity for a body of uniform composition c t h displaystyle cmathrm th can be approximated by c t h m c p displaystyle cmathrm th mcmathrm p where m displaystyle m is the mass of the body and c p displaystyle cmathrm p is the isobaric specific heat capacity of the material averaged over temperature range in question for bodies composed of numerous different materials the thermal masses for the different components can just be added together thermal mass is effective in improving building comfort in any place that experiences these types of daily temperature fluctuations — both in winter as well as in summer when used well and combined with passive solar design thermal mass can play an important role in major reductions to energy use in active heating and cooling systems the use of materials with thermal mass is most advantageous where there is a big difference in outdoor temperatures from day to night or where nighttime temperatures are at least 10 degrees cooler than the thermostat set point the terms heavyweight and lightweight are often used to describe buildings with different thermal mass strategies and affects the choice of numerical factors used in subsequent calculations to describe their thermal response to heating and cooling in building services engineering the use of dynamic simulation computational modelling software has allowed for the accurate calculation of the environmental performance within buildings with different constructions and for different annual climate data sets this allows the architect or engineer to explore in detail the relationship between heavyweight and lightweight constructions as well as insulation levels in reducing energy consumption for mechanical heating or cooling systems or even removing the need for such systems altogether ideal materials for thermal mass are those materials 
that have high specific heat capacity high densityany solid liquid or gas that has mass will have some thermal mass a common misconception is that only concrete or earth soil has thermal mass even air has thermal mass although very little a table of volumetric heat capacity for building materials is available but note that their definition of thermal mass is slightly different the correct use and application of thermal mass is dependent on the prevailing climate in a district temperate and cold temperate climates solarexposed thermal mass thermal mass is ideally placed within the building and situated where it still can be exposed to lowangle winter sunlight via windows but insulated from heat loss in summer the same thermal mass should be obscured from higherangle summer sunlight in order to prevent overheating of the structure the thermal mass is warmed passively by the sun or additionally by internal heating systems during the day thermal energy stored in the mass is then released back'
  • 'as a kuhn length these nonstraight regions evoke the concept of ‘ kinks ’ and are in fact a manifestation of the randomwalk nature of the chain since a kink is composed of several isoprene units each having three carboncarbon single bonds there are many possible conformations available to a kink each with a distinct energy and endtoend distance over time scales of seconds to minutes only these relatively short sections of the chain ie kinks have sufficient volume to move freely amongst their possible rotational conformations the thermal interactions tend to keep the kinks in a state of constant flux as they make transitions between all of their possible rotational conformations because the kinks are in thermal equilibrium the probability that a kink resides in any rotational conformation is given by a boltzmann distribution and we may associate an entropy with its endtoend distance the probability distribution for the endtoend distance of a kuhn length is approximately gaussian and is determined by the boltzmann probability factors for each state rotational conformation as a rubber network is stretched some kinks are forced into a restricted number of more extended conformations having a greater endtoend distance and it is the resulting decrease in entropy that produces an elastic force along the chain there are three distinct molecular mechanisms that produce these forces two of which arise from changes in entropy that we shall refer to as low chain extension regime ia and moderate chain extension regime ib the third mechanism occurs at high chain extension as it is extended beyond its initial equilibrium contour length by the distortion of the chemical bonds along its backbone in this case the restoring force is springlike and we shall refer to it as regime ii the three force mechanisms are found to roughly correspond to the three regions observed in tensile stress vs strain experiments shown in fig 1 the initial morphology of the network immediately after 
chemical crosslinking is governed by two random processes 1 the probability for a crosslink to occur at any isoprene unit and 2 the random walk nature of the chain conformation the endtoend distance probability distribution for a fixed chain length ie fixed number of isoprene units is described by a random walk it is the joint probability distribution of the network chain lengths and the endtoend distances between their crosslink nodes that characterizes the network morphology because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously simple analytic elasticity models are not possible an explicit 3dimensional numerical model is required to simulate the effects of strain on a representative volume element of a network low chain extension regime ia'
35
  • 'pedology from greek πεδον pedon soil and λογος logos study is a discipline within soil science which focuses on understanding and characterizing soil formation evolution and the theoretical frameworks for modeling soil bodies often in the context of the natural environment pedology is often seen as one of two main branches of soil inquiry the other being edaphology which is traditionally more agronomically oriented and focuses on how soil properties influence plant communities natural or cultivated in studying the fundamental phenomenology of soils eg soil formation aka pedogenesis pedologists pay particular attention to observing soil morphology and the geographic distributions of soils and the placement of soil bodies into larger temporal and spatial contexts in so doing pedologists develop systems of soil classification soil maps and theories for characterizing temporal and spatial interrelations among soils there are a few noteworthy subdisciplines of pedology namely pedometrics and soil geomorphology pedometrics focuses on the development of techniques for quantitative characterization of soils especially for the purposes of mapping soil properties whereas soil geomorphology studies the interrelationships between geomorphic processes and soil formation soil is not only a support for vegetation but it is also the pedosphere the locus of numerous interactions between climate water air temperature soil life microorganisms plants animals and its residues the mineral material of the original and added rock and its position in the landscape during its formation and genesis the soil profile slowly deepens and develops characteristic layers called horizons while a steady state balance is approached soil users such as agronomists showed initially little concern in the dynamics of soil they saw it as medium whose chemical physical and biological properties were useful for the services of agronomic productivity on the other hand pedologists and geologists did not 
initially focus on the agronomic applications of the soil characteristics edaphic properties but upon its relation to the nature and history of landscapes today there is an integration of the two disciplinary approaches as part of landscape and environmental sciences pedologists are now also interested in the practical applications of a good understanding of pedogenesis processes the evolution and functioning of soils like interpreting its environmental history and predicting consequences of changes in land use while agronomists understand that the cultivated soil is a complex medium often resulting from several thousands of years of evolution they understand that the current balance is fragile and that only a thorough knowledge of its history makes it possible to ensure its sustainable use important pedological concepts include complexity in soil genesis is more common than simplicity soils lie'
  • 'from alteration this occurs largely because almost all past soils have lost their former vegetative covering and the organic matter they once supported has been used up by plants since the soil was buried however if remains of plants can be found the nature of the soil fossil can be made a great deal clearer than if no flora can be found because roots can nowadays be identified with respect to the plant group from which they come patterns of root traces including their shape and size is good evidence for the vegetation type the former soil supported bluish colours in the soil tend to indicate the plants have mobilized nutrients within the soil the horizons of fossil soils typically are sharply defined only in the top layers unless some of the parent material has not been obliterated by soil formation the kinds of horizons in fossil soils are though generally the same as those found in presentday soils allowing easy classification in modern taxonomy of all but the oldest soils chemical analysis of soil fossils generally focuses on their lime content which determines both their ph and how reactive they will be to dilute acids chemical analysis is also useful usually through solvent extraction to determine key minerals this analysis can be of some use in determining the structure of a soil fossil but today xray diffraction is preferred because it permits the exact crystal structure of the former soil to be determined with the aid of xray diffraction paleosols can now be classified into one of the 12 orders of soil taxonomy oxisols ultisols alfisols mollisols spodosols aridisols entisols inceptisols gelisols histosols vertisols and andisols many precambrian soils however when examined do not fit the characteristics for any of these soil orders and have been placed in a new order called green clays the green colour is due to the presence of certain unoxidised minerals found in the primitive earth because o2 was not present there are also some forest soils of more 
recent times that cannot clearly be classified as alfisols or as spodosols because despite their sandy horizons they are not nearly acidic enough to have the typical features of a spodosol paleopedology is an important scientific discipline for the understanding of the ecology and evolution of ancient ecosystems both on earth and the emerging field of exoplanet research or astropedology section is currently under construction models the different definitions applied to soils is indicative of the different approaches taken to them where farmers and engineers experience different soil challenges soil scientists have a different view again johnson watsonstegner 1987 essentially these differing views of the definition of soil are'
  • 'a durisol is a reference soil group under the world reference base for soil resources wrb referring to freedraining soils in arid and semiarid environments that contain grains cemented together by secondary silica sio2 in the upper metre of soil occurring either as concretions durinodes – duric horizon or as a continuously cemented layer duripan – hardpan australia – dorbank south africa – petroduric horizon the name is derived from latin durus for hard in the faounesco soil map of the world the durisols with petroduric horizon were indicated as duripan phase of other soils eg of xerosols and yermosols durisols are developed mainly in alluvial and colluvial deposits of all texture classes they are found on level and slightly sloping alluvial plains terraces and gently sloping piedmont plains in arid semiarid and mediterranean regions the soils have ac or abc profile eroded durisols with exposed hard horizons a petroduric horizon are common in gently sloping terrain most durisols can only be used for extensive grazing arable cropping of durisols is limited to areas where irrigation water is available extensive areas of durisols occur in australia in south africa namibia and in the united states notably in nevada california and arizona minor occurrences have been reported from central and south america and from kuwait durisols are a new introduction in international soil classification and have not often been mapped as such a precise indication of their extent is not yet available pedogenesis pedology soil study soil classification w zech p schad g hintermaiererhard soils of the world springer berlin 2022 chapter 834 isbn 9783540304609'
13
  • 'atlantic university au is a private college in guaynabo puerto rico it was founded in 1981 by colonel ramon m barquin it is one of the only colleges or universities in the caribbean specializing in digital arts education in all of its forms atlantic university is a nonprofit institution and is located in the heart of guaynabo facing the citys central plaza it is the first university founded in guaynabo and is located in the citys historical district which preserves the distinctive spanish colonialstyle architecture that characterized puerto ricos towns and cities in earlier times atlantic university was founded in response to guaynabos need for a local institution of higher education as well as the growing demand for digital arts expertise in the commercial sector aus core programs were first offered in 1981 and now include a growing range of bachelors and masters degree programsthe institution officially changed its name to atlantic university on august of 2023 au is a private institution of higher education operated by atlantic university inc a nonprofit corporation established under the laws of the commonwealth of puerto rico and registered with the us state department atlantic university authorized by the puerto rico education council and accredited by the accrediting commission of career schools and colleges to award bachelors and masters degrees auc is also approved for students with educational benefits in the different gi bill programs atlantic university is a member of the following associations hispanic association of colleges and universities hacu council for higher education accreditation chea american association of hispanics in higher education aahhe association for supervision and curriculum development ascd collaborative institutional training initiative ct program international game developers association igda printing industries of america pia asociacion puertorriquena de investigacion institucional puerto rico chamber of commerce puerto rico 
manufacturers association asociacion de educacion privada general education business administration digital cinematography sciences digital graphic design digital animation sciences video game art and design sciences graduate studies'
  • 'object detection multimodal tasks knowledge discovery in art history and computational aesthetics whereas distant viewing includes the analysis of large collections close reading involves one piece of artwork whilst 2d and 3d digital art is beneficial as it allows the preservation of history that would otherwise have been destroyed by events like natural disasters and war there is the issue of who should own these 3d scans – ie who should own the digital copyrights artfutura artmedia austin museum of digital art computer arts society eva conferences los angeles center for digital art lumen prize onedotzero rhizome va digital futures algorithmic art computer art computer graphics electronic art generative art graphic arts new media art theatre of digital art virtual art'
  • 'in 2013 it received 10 million in series b funding on december 4 2014 the site unveiled a new logo and announced the release of an official mobile app on both ios and android released on december 10 2014on february 23 2017 deviantart was acquired by wixcom inc for 36 million the site plans to integrate deviantart and wix functionality including the ability to utilize deviantart resources on websites built with wix and integrating some of wixs design tools into the siteas of march 1 2017 syria was banned from accessing deviantarts services entirely citing us and israeli sanctions and aftermath on february 19 2018 after syrian user mythiril used a vpn to access the site and disclosed the geoblocking in a journal titled the hypocrisy of deviantart deviantart ended the geoblocking except for commercial featuresin autumn of 2018 spambots began hacking into an indeterminately large number of longinactive accounts and placing spam weblinks in their victims about sections formerly known as deviantids where users of the site display their public profile information an investigation into this matter began in january 2019 this situation ended some time in late 2021 there is no review for potential copyright and creative commons licensing violations when a work is submitted to deviantart so potential violations can remain unnoticed until reported to administrators using the mechanism available for such issues some members of the community have been the victims of copyright infringement from vendors using artwork illegally on products and prints as reported in 2007 the reporting system in which to counteract copyright infringement directly on the site has been subject to a plethora of criticism from members of the site given that it may take weeks or even a month before a filed complaint for copyright infringement is answered due to the nature of deviantart as an art community with a worldwide reach companies use deviantart to promote themselves and create more advertising 
through contests coolclimate is a research network connected with the university of california and they held a contest in 2012 to address the impact of climate change worldwide submissions were received and the winner was featured in the huffington postvarious car companies have held contests dodge ran a contest in 2012 for art of the dodge dart and over 4000 submissions were received winners received cash and item prizes and were featured in a gallery at dodgechrysler headquarters lexus partnered with deviantart in 2013 to run a contest for cash and other prizes based on their lexus is design the winners design became a modified lexus is and was showcased at the sema 2013 show in los angeles californiadeviantart also hosts'
38
  • 'and sentencefinal particles and some reduced vowelssome words associated with mens speech include the informal da in place of the copula desu first person pronouns such as ore and boku and sentencefinal particles such as yo ze zo and kana masculine speech also features less frequent use of honorific prefixes and fewer aizuchi response tokensresearch on japanese mens speech shows greater use of neutral forms forms not strongly associated with masculine or feminine speech than is seen in japanese womens speechsome studies of conversation between japanese men and women show neither gender taking a more dominant position in interaction men however tend to show a selforiented conversation style telling stories and expressing their expertise on topics being discussed more than is typical of women in these studies since the late twentieth century observers have noted that individual japanese men and women do not necessarily speak in the ways attributed to their gender scholars have described considerable variation within each gender some individuals use these characteristics of gendered speech while others do not upperclass women who did not conform to conventional expectations of gendered speech were sometimes criticized for failing to maintain socalled traditional japanese culture okama entertainers and one kotoba another recent phenomenon influencing gender norms in speech is the popularity of okama おかま entertainers typically men who enact very feminine speech dress and other gender markers the word okama originally referred to feminine male homosexuals but its usage has expanded to refer to masculine gay men male crossdressers and trans women among other uses entertainers who identify as okama sometimes use a form of speech called one kotoba [UNK] [UNK] 葉 literally older sister speech but with the word one older sister used to denote an effeminate man a speaking style that combines the formal aspects of womens speech described above with blunt or crude words and 
topics for example あたし [UNK] カレー 食 ったら 下 [UNK] た [UNK] 。 atashi ima kare kuttara geri da wa if i ate curry now id get diarrheathe pronoun atashi and the sentencefinal da wa is typical of womens speech while the verb kuttara is typical of mens speech and the topic itself is very blunthideko abe suggests that one kotoba originated during the showa era among sex workers known as dansho 男 [UNK] literally male prostitutes who adopted feminine speech wore womens clothing and often referred to themselves as women celebrities and tarento who use one kotoba include akihiro miwa shogo kariyazaki ikko kabachan and the twin brothers os'
  • '##view hospital situational codeswitching occurred with an interpreter rather than the physician in this situation the interpreter is in a position of cointerviewer where the interpreter speaks with the patient in order to find out their concerns and then relay them to the physician when they arrived to patient anda a ver que dice el doctor well lets see what the doctor saysto physician doctor i was looking for something to put over there because he wants to show you hisfoot here the interpreter codeswitches in order to be able to effectively communicate to the doctor the concerns of the patient foot pains a fouryearold child named benjamin is an englishfrench bilingual speaker he constantly code switches between his parents with the father only being able to speak english where as the mother was bilingual but only spoke french to himgrowing up in a prominent englishspeaking community benjamin preferred english over french hence him speaking english when talking about everyday activities however when conversing about school related topics at home he spoke mostly in french due to his mother and tutors continually speaking to him in french benjamin hi kevin en francais oui in french yes'
  • 'to say that speakers of utah english will pronounce certain phonemes that are distinct in western american english the same way some examples include failfell poolpull cardcord pinpen and heelhill such mergers are used more by older speakers o j simpson murder trial a wellknown example of the identification of race based on auditory sample in a legal setting occurred during the prosecution of oj simpson a witness testified against simpson based on his memory of hearing a male black voice the objection of simpson ’ s lawyer mr cochran was overruled by the presiding judge sanchez v people a major precedent was formed on the use of linguistic profiling in the case of sanchez v people a witness testified against a suspect based on his overhearing of an argument between two apparent spanish speakers where the killer was identified as having a dominican rather than a puerto rican accent the new york superior court ruled that distinguishing between accents was permissible based on the fact that human experience has taught us to discern the variation in the mode of speech of certain individuals the court found that a certain degree of familiarity with the accents and dialects of a region or ethnic group qualified an individual to identify ethnicity or race in a court based on auditory evidence clifford v kentucky a similar justification was used in the later case of clifford v kentucky a white police officer testified against charles clifford an african american appellant at the kentucky supreme court based on his evaluation of race from spoken language the presiding judge cited the findings of sanchez v people in justifying the officers claim of identifying the suspect based on overheard speech a similar case is that of clifford v commonwealth where a testimony of linguistic profiling was allowed based on the caveat that the witness is personally familiar with the general characteristics accents or speech patterns of the race or nationality in question ie so long as 
the opinion is rationally based on the perception of the witness guidelines for use linguist dennis preston has presented an expansion of the rulings set down on the use of linguistic profiling in legal contexts preston argues for the further definition of personal familiarity with a dialect to an individual as a member of the speech community within which the identification is taking place the person identified must be an authentic speaker with no perceived imitation of other dialects within the language further there should be no evidence of overt stereotypes connecting the speaker to a particular style of language united states v ferril linguistic profiling is very apparent in employment as evidenced by the supreme court case united states v ferril shirley ferril a former employee of the telemarketing firm tpg filed suit against the firm after being fired on the'
32
  • '##iance field the vector direction at each point in the field can be interpreted as the orientation of a flat surface placed at that point to most brightly illuminate it time wavelength and polarization angle can be treated as additional dimensions yielding higherdimensional functions accordingly in a plenoptic function if the region of interest contains a concave object eg a cupped hand then light leaving one point on the object may travel only a short distance before another point on the object blocks it no practical device could measure the function in such a region however for locations outside the objects convex hull eg shrinkwrap the plenoptic function can be measured by capturing multiple images in this case the function contains redundant information because the radiance along a ray remains constant throughout its length the redundant information is exactly one dimension leaving a fourdimensional function variously termed the photic field the 4d light field or lumigraph formally the field is defined as radiance along rays in empty space the set of rays in a light field can be parameterized in a variety of ways the most common is the twoplane parameterization while this parameterization cannot represent all rays for example rays parallel to the two planes if the planes are parallel to each other it relates closely to the analytic geometry of perspective imaging a simple way to think about a twoplane light field is as a collection of perspective images of the st plane and any objects that may lie astride or beyond it each taken from an observer position on the uv plane a light field parameterized this way is sometimes called a light slab the analog of the 4d light field for sound is the sound field or wave field as in wave field synthesis and the corresponding parametrization is the kirchhoff – helmholtz integral which states that in the absence of obstacles a sound field over time is given by the pressure on a plane thus this is two dimensions of 
information at any point in time and over time a 3d field this twodimensionality compared with the apparent fourdimensionality of light is because light travels in rays 0d at a point in time 1d over time while by the huygens – fresnel principle a sound wave front can be modeled as spherical waves 2d at a point in time 3d over time light moves in a single direction 2d of information while sound expands in every direction however light travelling in nonvacuous media may scatter in a similar fashion and the irreversibility or information lost in the scattering is discernible in the apparent loss of a system dimension because light field provides spatial and angular'
  • 'in optics groupvelocity dispersion gvd is a characteristic of a dispersive medium used most often to determine how the medium affects the duration of an optical pulse traveling through it formally gvd is defined as the derivative of the inverse of group velocity of light in a material with respect to angular frequency gvd ω 0 ≡ ∂ ∂ ω 1 v g ω ω ω 0 displaystyle textgvdomega 0equiv frac partial partial omega leftfrac 1vgomega rightomega omega 0 where ω displaystyle omega and ω 0 displaystyle omega 0 are angular frequencies and the group velocity v g ω displaystyle vgomega is defined as v g ω ≡ ∂ ω ∂ k displaystyle vgomega equiv partial omega partial k the units of groupvelocity dispersion are time2distance often expressed in fs2mm equivalently groupvelocity dispersion can be defined in terms of the mediumdependent wave vector k ω displaystyle komega according to gvd ω 0 ≡ ∂ 2 k ∂ ω 2 ω ω 0 displaystyle textgvdomega 0equiv leftfrac partial 2kpartial omega 2rightomega omega 0 or in terms of the refractive index n ω displaystyle nomega according to gvd ω 0 ≡ 2 c ∂ n ∂ ω ω ω 0 ω 0 c ∂ 2 n ∂ ω 2 ω ω 0 displaystyle textgvdomega 0equiv frac 2cleftfrac partial npartial omega rightomega omega 0frac omega 0cleftfrac partial 2npartial omega 2rightomega omega 0 groupvelocity dispersion is most commonly used to estimate the amount of chirp that will be imposed on a pulse of light after passing through a material of interest chirp material thickness × gvd ω 0 × bandwidth displaystyle textchirptextmaterial thicknesstimes textgvdomega 0times textbandwidth a simple illustration of how gvd can be used to determine pulse chirp can be seen by looking at the effect of a transformlimited pulse of duration σ displaystyle sigma passing through a planar medium of thickness d before passing through the medium the phase offsets of all frequencies are aligned in time and the pulse can be described as a function of time e t a e − t 2 4 σ 2 e − i ω 0 t displaystyle etaefrac t24sigma 2eiomega'
  • '##e pars optica is generally recognized as the foundation of modern optics though the law of refraction is conspicuously absentwillebrord snellius 1580 – 1626 found the mathematical law of refraction now known as snells law in 1621 subsequently rene descartes 1596 – 1650 showed by using geometric construction and the law of refraction also known as descartes law that the angular radius of a rainbow is 42° ie the angle subtended at the eye by the edge of the rainbow and the rainbows centre is 42° he also independently discovered the law of reflection and his essay on optics was the first published mention of this lawchristiaan huygens 1629 – 1695 wrote several works in the area of optics these included the opera reliqua also known as christiani hugenii zuilichemii dum viveret zelhemii toparchae opuscula posthuma and the traite de la lumiere isaac newton 1643 – 1727 investigated the refraction of light demonstrating that a prism could decompose white light into a spectrum of colours and that a lens and a second prism could recompose the multicoloured spectrum into white light he also showed that the coloured light does not change its properties by separating out a coloured beam and shining it on various objects newton noted that regardless of whether it was reflected or scattered or transmitted it stayed the same colour thus he observed that colour is the result of objects interacting with alreadycoloured light rather than objects generating the colour themselves this is known as newtons theory of colour from this work he concluded that any refracting telescope would suffer from the dispersion of light into colours and invented a reflecting telescope today known as a newtonian telescope to bypass that problem by grinding his own mirrors using newtons rings to judge the quality of the optics for his telescopes he was able to produce a superior instrument to the refracting telescope due primarily to the wider diameter of the mirror in 1671 the royal society asked 
for a demonstration of his reflecting telescope their interest encouraged him to publish his notes on colour which he later expanded into his opticks newton argued that light is composed of particles or corpuscles and were refracted by accelerating toward the denser medium but he had to associate them with waves to explain the diffraction of light opticks bk ii props xiil later physicists instead favoured a purely wavelike explanation of light to account for diffraction todays quantum mechanics photons and the idea of waveparticle duality bear only a minor resemblance'
42
  • '##dia are governed by the iczn treated as animals and see below for fossil fungi algae nostocaceae homocysteae 1 january 1892 gomont monographie des oscillariees nostocaceae heterocysteae 1 january 1886 bornet flahault revision des nostocacees heterocystees desmidiaceae 1 january 1848 ralfs british desmidieae oedogoniaceae 1 january 1900 hirn monographie und iconographie der oedogoniaceen fossil plants algae diatoms excepted and fungi 31 december 1820 sternberg flora der vorweltexceptions in zoology spiders 1757 clerck aranei svecici there are also differences in the way codes work for example the icn the code for algae fungi and plants forbids tautonyms while the iczn the animal code allows them these codes differ in terminology and there is a longterm project to harmonize this for instance the icn uses valid in valid publication of a name the act of publishing a formal name with establishing a name as the iczn equivalent the iczn uses valid in valid name correct name with correct name as the icn equivalent harmonization is making very limited progress there are differences in respect of what kinds of types are used the bacteriological code prefers living type cultures but allows other kinds there has been ongoing debate regarding which kind of type is more useful in a case like cyanobacteria a more radical approach was made in 1997 when the iubsiums international committee on bionomenclature icb presented the long debated draft biocode proposed to replace all existing codes with an harmonization of them the originally planned implementation date for the biocode draft was january 1 2000 but agreement to replace the existing codes was not reached in 2011 a revised biocode was proposed that instead of replacing the existing codes would provide a unified context for them referring to them when necessary changes in the existing codes are slowly being made in the proposed directions however participants of the last serious discussion of the draft biocode concluded 
that it would probably not be implemented in their lifetimes many authors encountered problems in using the linnean system in phylogenetic classification in fact early proponents of rankbased nomenclature such as alphonse de candolle and the authors of the 1886 version of the american ornithologists union code of nomenclature already envisioned that in the future rankbased nomenclature would have to be abandoned another code that was developed since 1998 is the phylocode which now regulates'
  • 'antigenic shift is the process by which two or more different strains of a virus or strains of two or more different viruses combine to form a new subtype having a mixture of the surface antigens of the two or more original strains the term is often applied specifically to influenza as that is the bestknown example but the process is also known to occur with other viruses such as visna virus in sheep antigenic shift is a specific case of reassortment or viral shift that confers a phenotypic change antigenic shift is contrasted with antigenic drift which is the natural mutation over time of known strains of influenza or other things in a more general sense which may lead to a loss of immunity or in vaccine mismatch antigenic drift occurs in all types of influenza including influenza a influenza b and influenza c antigenic shift however occurs only in influenza a because it infects more than just humans affected species include other mammals and birds giving influenza a the opportunity for a major reorganization of surface antigens influenza b and c principally infect humans minimizing the chance that a reassortment will change its phenotype drasticallyin 1940s maurice hilleman discovered antigenic shift which is important for the emergence of new viral pathogens as it is a pathway that viruses may follow to enter a new niche influenza a viruses are found in many different animals including ducks chickens pigs humans whales horses and seals influenza b viruses circulate widely principally among humans though it has recently been found in seals flu strains are named after their types of hemagglutinin and neuraminidase surface proteins of which there are 18 and 9 respectively so they will be called for example h3n2 for type3 hemagglutinin and type2 neuraminidase some strains of avian influenza from which all other strains of influenza a are believed to stem can infect pigs or other mammalian hosts when two different strains of influenza infect the same cell 
simultaneously their protein capsids and lipid envelopes are removed exposing their rna which is then transcribed to mrna the host cell then forms new viruses that combine their antigens for example h3n2 and h5n1 can form h5n2 this way because the human immune system has difficulty recognizing the new influenza strain it may be highly dangerous and result in a new pandemicinfluenza viruses which have undergone antigenic shift have caused the asian flu pandemic of 1957 the hong kong flu pandemic of 1968 and the swine flu scare of 1976 until recently such combinations were believed'
  • 'magnitude of selection can be measured by comparing the rate of nonsynonymous substitution to the rate of synonymous substitution dnds the population structure of the host population may be examined by calculation of fstatistics and hypotheses concerning panmixis and selective neutrality of the virus may be tested with statistics such as tajimas dhowever such analyses were not designed with epidemiological inference in mind and it may be difficult to extrapolate from standard statistics to desired epidemiological quantities in an effort to bridge the gap between traditional evolutionary approaches and epidemiological models several analytical methods have been developed to specifically address problems related to phylodynamics these methods are based on coalescent theory birthdeath models and simulation and are used to more directly relate epidemiological parameters to observed viral sequences effective population size the coalescent is a mathematical model that describes the ancestry of a sample of nonrecombining gene copies in modeling the coalescent process time is usually considered to flow backwards from the present in a selectively neutral population of constant size n displaystyle n and nonoverlapping generations the wright fisher model the expected time for a sample of two gene copies to coalesce ie find a common ancestor is n displaystyle n generations more generally the waiting time for two members of a sample of n displaystyle n gene copies to share a common ancestor is exponentially distributed with rate λ n n 2 1 n displaystyle lambda nn choose 2frac 1n this time interval is labeled t n displaystyle tn and at its end there are n − 1 displaystyle n1 extant lineages remaining these remaining lineages will coalesce at the rate λ n − 1 [UNK] λ 2 displaystyle lambda n1cdots lambda 2 after intervals t n − 1 [UNK] t 2 displaystyle tn1cdots t2 this process can be simulated by drawing exponential random variables with rates λ n − i i 0 [UNK] n − 2 
displaystyle lambda nii0cdots n2 until there is only a single lineage remaining the mrca of the sample in the absence of selection and population structure the tree topology may be simulated by picking two lineages uniformly at random after each coalescent interval t i displaystyle ti the expected waiting time to find the mrca of the sample is the sum of the expected values of the internode intervals e t m r c a e t n e t n − 1 [UNK] e t 2 1 λ n 1 λ n − 1 [UNK] 1 λ 2 2 n 1 −'
24
  • 'water garden or aquatic garden is a term sometimes used for gardens or parts of gardens where any type of water feature is a principal or dominant element the primary focus is on plants but they will sometimes also house waterfowl or ornamental fish in which case it may be called a fish pond they vary enormously in size and style water gardening is gardening that is concerned with growing plants adapted to lakes rivers and ponds often specifically to their shallow margins although water gardens can be almost any size or depth they are often small and relatively shallow perhaps less than twenty inches 50 cm in depth this is because most aquatic plants are depth sensitive and require a specific water depth in order to thrive this can be helped by planting them in baskets raised off the bottom a water garden may include a bog garden for plants that enjoy a waterlogged soil sometimes their primary purpose is to grow a particular species or group of aquatic plants for example water lilies water gardens and water features in general have been a part of public and private gardens since ancient persian gardens and chinese gardens for instance the c 304 nanfang caomu zhuang records cultivating chinese spinach on floating gardens water features have been present and well represented in every era and in every culture that has included gardens in their landscape and architectural environments up until the rise of the industrial age when the modern water pump was introduced water was not recirculated but was diverted from rivers and springs into the water garden from which it exited into agricultural fields or natural watercourses historically water features were used to enable plant and fish production both for food purposes and for ornamental aesthetics when the aquatic flora and fauna are balanced an aquatic ecosystem is created that will support sustainable water quality and clarity elements such as fountains statues artificial waterfalls boulders underwater lighting 
lining treatments edging details watercourses and inwater and bankside planting can add visual interest and help to integrate the water garden with the local landscape and environment in landscape architecture and garden design a water feature is one or more items from a range of fountains jeux deau pools ponds rills artificial waterfalls and streams modern water features are typically selfcontained meaning that they do not require water to be plumbed in rather water is recycled from either a pond or a hidden reservoir also known as a sump the sixteenth century in europe saw a renewed interest in greek thought and philosophy including the works of hero of alexandria about hydraulics and pneumatics his devices such as temple doors operated by invisible weights or flowing liquids and mechanical singing birds powered by steam motivated several european palaces to create similar clever devices to enhance their public image in'
  • 'in his 1992 book play and playscapes referring to attempts to replace or add on to the rubberized surface metal and plastic of traditional playgrounds playscapes are designed to eliminate fall heights playscapes have rolling hills and fallen logs rather than a central play structure with monkey bars playscapes have much lower injury rates than standard playgroundsplayscapes have a fraction of the number of child injuries compared to standard playgrounds with play structures the most frequent injury to children on playgrounds is a fracture of the upper limb resulting from falls from climbing apparatuses the second most common cause of injury to children on playgrounds is falls from slides fall heights are the largest safety issue for most safety inspectorsplayscapes combat the issue of fall heights by using topography changes for children to climb and experience changes in height companies in canada have made strides in reducing fall height by using topography as a main feature in their designs topography changes allow designers to be creative when placing components in the playscape playscapes offer a wide range of benefits such as increasing physical activity fine and gross motor skills cognitive development they are also used in horticultural therapy for rehabilitation of mental andor physical ailment they increase participation rates and decrease absenteeism decrease bullying decrease injury rates increase focus and attention span and help with social skills in schools playscapes have shown to increase childrens level of physical activity and motor ability playscapes are found to be very beneficial in the growth and development of children both mentally and physically cognitive development focus attention span and social skills are all improved playscapes are not intimidating regardless of ability or fitness level playscapes have no central location or prescribed area of play they are openended spaces that entice children to use their imagination and 
creativity playscapes do not prescribe in an area that encourages a physical hierarchy thus reducing bullying and competition based on physical strength and abilityplayscapes are not limited to public parks and schools select hospitals in sweden and north america have playscapes on their facility hospitals use playscapes for horticultural therapy which has proven to increase emotional cognitive and motor improvements and increased social participation quality of life and wellbeingsince 2006 landscape architects adam white and andree davies have pioneered the playscape approach to play space design in the uk they have won a number of design and community engagement awards each year for their work these have included rhs gold medal and bbc peoples choice award landscape institute communication presentation award horticultural weekly award and children and young peoples services award for genuine community design engagementin 2009 they won the award for the uks best play space at the'
  • 'to the companys final name change in 1985 to forrec a shortened version of for recreation the companys business model also became a private limited with share capital to keep ownership fluid a policy was created that required shareholders to begin selling their shares at age 60 19851992 a northamerican focus forrec were hired by usaa real estate company a subsidiary of the usaa insurance company and gaylord entertainment company a company which owned opryland usa to assist in the design of a theme park in san antonio at the time known as fiesta texas soon after universal studios hired forrec to design their theme park in florida which opened in 1990 19922013 global projects forrec began doing more international work they were hired to transform the beijing national aquatics center from the 2008 summer olympics into a family water park called the happy magic watercube bbc worldwide asked forrec to create a series of prototypes for four of their most famous brands – top gear cbeebies bbc earth and walking with dinosaurs 2013present continued growth and mergers in 2013 tim scott and nolan natale of natale and scott architects nasa joined forrec along with their entire team this addition makes forrec a fully licensed architectural practice in ontarioforrec merged with scott torrance landscape architect inc stla in 2016 to extend local expertise in landscape architecture canadas wonderland vaughn ontario canada dollywood pigeon forge tennessee united states everland theme park seoul korea f1x theme park dubai uae legoland deutschland gunzburg germany moi park istanbul turkey nickelodeon universe bloomington minnesota united states ontario place toronto ontario canada oriental imperial valley theme park xian shaanxi china playland six flags dubai dubai uae universal studios orlando florida usa wanda nanchang outdoor theme park nanchang china wanda xishuangbanna international resort xishuangbanna china caribbean bay everland resort seoul korea center parcs domaine 
des trois forets moselle france costa caribe portaventura world tarragona spain dawang deep pit water world changsha china happy magic watercube beijing china lotte world kimhae water park kimhae korea nickelodeon family suites resort water park lake buena vista florida united states nrh2o north richland hills texas united states senbo green park resort china wanda xishuangbanna water park xishuangbanna china west edmonton mall world water park edmonton alberta canada annapurna studios hyderabad india azerbaijan dream land plaza baku azerbaijan circus city zhengding china dubai wonderland dubai uae fortune bay tourism city hengqin island china garden city wa fang dian china grand galaxy mall jakarta'
8
  • 'foreflight is an electronic flight bag for ios and ipados devices designed to assist pilots and corporate flight departments with flight planning it includes information about facilities such as airports navaids and air traffic control facilities it also aids pilots in tasks including flight planning weather monitoring and document management as well as an electronic logbook to help pilots record flight time the united states canada and europe are supported regions the company was founded in 2007 and has since been purchased by boeing the app provides airport information such as chart supplement entries taxi diagrams instrument approach plates departure and arrival procedures and both temporary and permanent notams it also provides weather reports and forecasts for the airport or if no reports are available nearby airports it also makes it possible to search for airports procedure diagrams or regulatory aspects of these proceduresthe app also supports a wide range of general and business aviation aircraft to allow pilots to assess performance in both hypothetical and realtime conditions examples include calculating weight and balance figures and runway performance foreflight runway analysis a subfeature of the app allows pilots to judge runway length and weather conditions to determine necessary takeoff and landing distancesforeflight provides access to maps and navigation charts the app supports flight planning features including letting pilots select routes based on ifr waypoints or using waypoints checkpoints or geographic features for vfr flight pilots can factor instrument departure arrival and approach procedures into their route as well as traffic pattern entries to airports foreflight will calculate metrics such as distance time en route and to each waypoint true and magnetic courses and fuel burn considering current weather conditions and aircraft profiles entered by the user foreflight also makes it possible to receive predeparture clearances through 
the appenroute weather is also available in the app official current reports and forecast information from the national weather service is provided in both textual and graphical formats the app provides approved weather briefings the briefings include information relative to a pilots flight and are timestamped and stored the aircraft tail number is also recorded to ensure that the briefing is considered legally validforeflight also displays information about airspace and special use airspace it displays information about the locations operating hours dimensions and more of uncontrolled and controlled airspaces airspaces designated for airports and special airspaces such as temporary flight restrictions in 2016 the app helped to develop a selfservice flight planning system for drones to allow schedulers dispatchers and flight crews to plan each aspect of flightthe app began offering to jeppesen charts in 2017 during a partnership with boeing who purchased foreflight in 2019in the 2020s foreflight began rapidly expanding its business aviation offerings adding new supported'
  • '##r receivers as well as providing a backup to the primary receiver the second receiver allows the pilot to easily follow a radial to or from one vor station while watching the second receiver to see when a certain radial from another vor station is crossed allowing the aircrafts exact position at that moment to be determined and giving the pilot the option of changing to the new radial if they wish as of 2008 spacebased global navigation satellite systems gnss such as the global positioning system gps are increasingly replacing vor and other groundbased systems in 2016 gnss was mandated as the primary needs of navigation for ifr aircraft in australiagnss systems have a lower transmitter cost per customer and provide distance and altitude data future satellite navigation systems such as the european union galileo and gps augmentation systems are developing techniques to eventually equal or exceed vor accuracy however low vor receiver cost broad installed base and commonality of receiver equipment with ils are likely to extend vor dominance in aircraft until space receiver cost falls to a comparable level as of 2008 in the united states gpsbased approaches outnumbered vorbased approaches but vorequipped ifr aircraft outnumber gpsequipped ifr aircraftthere is some concern that gnss navigation is subject to interference or sabotage leading in many countries to the retention of vor stations for use as a backup the vor signal has the advantage of static mapping to local terrainthe us faa plans by 2020 to decommission roughly half of the 967 vor stations in the us retaining a minimum operational network to provide coverage to all aircraft more than 5000 feet above the ground most of the decommissioned stations will be east of the rocky mountains where there is more overlap in coverage between them on july 27 2016 a final policy statement was released specifying stations to be decommissioned by 2025 a total of 74 stations are to be decommissioned in phase 1 2016 – 
2020 and 234 more stations are scheduled to be taken out of service in phase 2 2021 – 2025 in the uk 19 vor transmitters are to be kept operational until at least 2020 those at cranfield and dean cross were decommissioned in 2014 with the remaining 25 to be assessed between 2015 and 2020 similar efforts are underway in australia and elsewhere in the uk and the united states dme transmitters are planned to be retained in the near future even after colocated vors are decommissioned however there are longterm plans to decommission dme tacan and ndbs the vor signal encodes a morse code identifier optional voice and a pair of navigation'
  • 'the cdi needle left or right most older units and some newer ones integrate a converter with the cdi cdi units with an internal converter are not compatible with gps units more modern units are driven by a converter that is standalone or integrated with the radio the resolver position is sent to the converter which outputs the control signal to drive the cdi for digital units the desired position of the needle is transmitted via a serial arinc 429 signal from the radio or gps unit allowing the cdi design to be independent of the receiver and by multiple receiver types acronyms and abbreviations in avionics horizontal situation indicator'
20
  • 'de canaria a xv de febrero ano milcccclxxxxiii and signed merely el almirante while the printed latin editions are signed cristoforus colom oceanee classis prefectus prefect of the ocean fleet however it is doubtful columbus actually signed the original letter that way according to the capitulations of santa fe negotiated prior to his departure april 1492 christopher columbus was not entitled to use the title of admiral of the ocean sea unless his voyage was successful it would be highly presumptuous for columbus to sign his name that way in february or march when the original letter was drafted before that success was confirmed by the royal court columbus only obtained confirmation of his title on march 30 1493 when the catholic monarchs acknowledging the receipt of his letter address columbus for the first time as our admiral of the ocean sea and viceroy and governor of the islands which have been discovered in the indies nuestro almirante del mar oceano e visorrey y gobernador de las islas que se han descubierto en las indias this suggests the signature in the printed editions was not in the original letter but was an editorial choice by the copyists or printersin the copiador version there are passages omitted from the printed editions petitioning the monarchs for the honors promised him at santa fe and additionally asking for a cardinalate for his son and the appointment of his friend pedro de villacorta as paymaster of the indies the copiador letter signs off as made in the sea of spain on march 4 1493 fecha en la mar de espana a quatro dias de marco a stark contrast to the february 15 given in the printed versions there is no name or signature at the end of the copiador letter it ends abruptly en la mar at sea in the printed spanish editions albeit not in the latin editions nor the copiador there is a small postscript dated march 14 written in lisbon noting that the return journey took only 28 days in contrast with the 33 days outward but that unusual 
winter storms had kept him delayed for an additional 23 days a codicil in the printed spanish edition indicates that columbus sent this letter to the escribano de racion and another to their highnesses the latin editions contain no postscript but end with a verse epigram added by leonardus de cobraria bishop of monte peloso no original manuscript copy of columbuss letter is known to exist historians have had to rely on clues in the printed editions many of them published without date or location'
  • 'authors this very successful work went through numerous editions up to the first decade of the twentyfirst century according to the golden anniversary edition of 1992 the ongoing objective of civilization past present was to present a survey of world cultural history treating the development and growth of civilization not as a unique european experience but as a global one through which all the great culture systems have interacted to produce the presentday world it attempted to include all the elements of history – social economic political religious aesthetic legal and technological just as world war i strongly encouraged american historians to expand the study of europe than to courses on western civilization world war ii enhanced the global perspectives especially regarding asia and africa louis gottschalk william h mcneill and leften s stavrianos became leaders in the integration of world history to the american college curriculum gottschalk began work on the unesco history of mankind cultural and scientific development in 1951 mcneill influenced by toynbee broadened his work on the 20th century to new topics since 1982 the world history association at several regional associations began a program to help history professors broaden their coverage in freshman courses world history became a popular replacement for courses on western civilization professors patrick manning at the university of pittsburghs world history center and ross e dunn at san diego state are leaders in promoting innovative teaching methodsin related disciplines such as art history and architectural history global perspectives have been promoted as well in schools of architecture in the us the national architectural accrediting board now requires that schools teach history that includes a nonwest or global perspective this reflects a decadelong effort to move past the standard eurocentric approach that had dominated the field universal history is at once something more and something 
less than the aggregate of the national histories to which we are accustomed that it must be approached in a different spirit and dealt with in a different manner the roots of historiography in the 19th century are bound up with the concept that history written with a strong connection to the primary sources could be integrated with the big picture ie to a general universal history for example leopold von ranke probably the preeminent historian of the 19th century founder of rankean historical positivism the classic mode of historiography that now stands against postmodernism attempted to write a universal history at the close of his career the works of world historians oswald spengler and arnold j toynbee are examples of attempts to integrate primary sourcebased history and universal history spenglers work is more general toynbee created a theory that would allow the study of civilizations to proceed with integration of sourcebased history writing and'
  • 'jutes the burgundians the alemanni the sciri and the franks they were later pushed westward by the huns the avars the slavs and the bulgars later invasions such as the vikings the normans the varangians the hungarians the moors the romani the turks and the mongols also had significant effects especially in north africa the iberian peninsula anatolia and central and eastern europe germanic peoples moved out of southern scandinavia and northern germany to the adjacent lands between the elbe and oder after 1000 bc the first wave moved westward and southward pushing the resident celts west to the rhine around 200 bc moving into southern germany up to the roman provinces of gaul and cisalpine gaul by 100 bc where they were stopped by gaius marius and later by julius caesar it is this western group which was described by the roman historian tacitus ad 56 – 117 and julius caesar 100 – 44 bc a later wave of germanic tribes migrated eastward and southward from scandinavia between 600 and 300 bc to the opposite coast of the baltic sea moving up the vistula near the carpathian mountains during tacitus era they included lesserknown tribes such as the tencteri cherusci hermunduri and chatti however a period of federation and intermarriage resulted in the familiar groups known as the alemanni franks saxons frisians and thuringians the first wave of invasions between ad 300 and 500 is partly documented by greek and latin historians but is difficult to verify archaeologically it puts germanic peoples in control of most areas of what was then the western roman empirethe tervingi crossed the danube into roman territory after a clash with the huns in 376 some time later in marcianopolis the escort to their leader fritigern was killed while meeting with lupicinus the tervingi rebelled and the visigoths a group derived either from the tervingi or from a fusion of mainly gothic groups eventually invaded italy and sacked rome in 410 before settling in gaul around 460 they founded 
the visigothic kingdom in iberia they were followed into roman territory first by a confederation of herulian rugian and scirian warriors under odoacer that deposed romulus augustulus in 476 and later by the ostrogoths led by theodoric the great who settled in italy in gaul the franks a fusion of western germanic tribes whose leaders had been aligned with rome since the 3rd century entered roman lands gradually during the 5th century and after consolidating power under childeric and his son cloviss decisive victory over syagrius in 486 established'
16
  • 'the ocean surface relative to the geoid seabed – the bottom of the ocean seabed 2030 project terrain – vertical and horizontal dimension and shape of land surface'
  • '##frfusacearmymilvehicles2stm in order to determine what the profile of a beach looks like one method for determination is the emory beach profiling method initiating a benchmark the researcher establishes a control point to start the surveys at typically this is far enough away from the swash zone that large changes in elevation will not occur during the sampling time once the initial benchmark is established the researcher will take the emory sampling device and measure the change in elevation over the distance the device is covering then they will pick up the device and move it to the end point of their last survey and so on until they reach the shoreline typically this is done during neap tide see tide for more information on neap tide since the sand grain diameters can vary throughout the entire beach the median grain size is used to determine sediment fall velocity determining sediment fall velocity allows the determination of what sediment is left where sea walls groynes breakwaters dredging of harbor entrances dumping of material on the coast and offshore reduction of coastal vegetation cutting burning grazing pollution models for the prediction of sediment transport in coastal regions have been used since the mid 1970s one of the first formulas to calculate coastal sediment transport was developed by eco bijker end of sixties some transport models are xbeach httpossdeltaresnlwebxbeach profile parameter p engineering tools and databases on sediment transport and morphology httpwwwleovanrijnsedimentcompage4html dhis mike software httpwwwmikepoweredbydhicomproductsmike21sediments delft3d httpossdeltaresnlwebdelft3dhome telemacmascaret sisyphe sediment transport and bed evolution httpwwwopentelemacorgindexphpmoduleslist164sysiphesedimenttransportandbedevolution'
  • 'of snow and ice this is merely due to marginal evaporation rates and low precipitation the mcmurdo dry valleys of antarctica which lack water whether rain ice or snow much like a nonpolar desert and even have such desert features as hypersaline lakes and intermittent streams that resemble except for being frozen at their surfaces hot or cold deserts for extreme aridity and lack of precipitation of any kind extreme winds and not seasonal heat desiccate these nearlylifeless terrains the concept of biological desert redefines the concept of desert without the characteristic of aridity not lacking water but instead lacking life such places can be socalled ocean deserts which are mostly at the centers of gyres but also hypoxic or anoxic waters such as dead zones deserts usually have a large diurnal and seasonal temperature range with high daytime temperatures falling sharply at night the diurnal range may be as much as 20 to 30 °c 36 to 54 °f and the rock surface experiences even greater temperature differentials during the day the sky is usually clear and most of the suns radiation reaches the ground but as soon as the sun sets the desert cools quickly by radiating heat into space in hot deserts the temperature during daytime can exceed 45 °c 113 °f in summer and plunge below freezing point at night during winter such large temperature variations have a destructive effect on the exposed rocky surfaces the repeated fluctuations put a strain on exposed rock and the flanks of mountains crack and shatter fragmented strata slide down into the valleys where they continue to break into pieces due to the relentless sun by day and chill by night successive strata are exposed to further weathering the relief of the internal pressure that has built up in rocks that have been underground for aeons can cause them to shatter exfoliation also occurs when the outer surfaces of rocks split off in flat flakes this is believed to be caused by the stresses put on the rock by repeated 
thermal expansions and contractions which induces fracturing parallel to the original surface chemical weathering processes probably play a more important role in deserts than was previously thought the necessary moisture may be present in the form of dew or mist ground water may be drawn to the surface by evaporation and the formation of salt crystals may dislodge rock particles as sand or disintegrate rocks by exfoliation shallow caves are sometimes formed at the base of cliffs by this meansas the desert mountains decay large areas of shattered rock and rubble occur the process continues and the end products are either dust or sand dust is formed from solidified clay or volcanic deposits whereas sand results from the'
19
  • '##ing of skin and higher than normal gamma glutamyl transferase and alkaline phosphatase laboratory values they are in most cases located in the right hepatic lobe and are frequently seen as a single lesion their size ranges from 1 to 30 cm they can be difficult to diagnosis with imaging studies alone because it can be hard to tell the difference between hepatocellular adenoma focal nodular hyperplasia and hepatocellular carcinoma molecular categorization via biopsy and pathological analysis aids in both diagnosis and understanding prognosis particularly because hepatocellular adenomas have the potential to become malignant it is important to note percutaneous biopsy should be avoided because this method can lead to bleeding or rupture of the adenoma the best way to biopsy suspected hepatic adenoma is via open or laparoscopic excisional biopsybecause hepatocellular adenomas are so rare there are no clear guidelines for the best course of treatment the complications which include malignant transformation spontaneous hemorrhage and rupture are considered when determining the treatment approach estimates indicate approximately 2040 of hepatocellular adenomas will undergo spontaneous hemorrhage the evidence is not well elucidated but the best available data suggests that the risk of hepatocellular adenoma becoming hepatocellular carcinoma which is malignant liver tumor is 42 of all cases transformation to hepatocellular carcinoma is more common in men currently if the hepatic adenoma is 5 cm increasing in size symptomatic lesions has molecular markers associated with hcc transformation rising level of liver tumor markers such as alpha fetoprotein the patient is a male or has a glycogen storage disorder the adenoma is recommended to be surgically removed like most liver tumors the anatomy and location of the adenoma determines whether the tumor can removed laparoscopically or if it requires an open surgical procedure hepatocellular adenomas are also known to 
decrease in size when there is decreased estrogen or steroids eg when estrogencontaining contraceptives steroids are stopped or postpartumwomen of childbearing age with hepatic adenomas were previously recommended to avoid becoming pregnant altogether however currently a more individualized approach is recommended that takes into account the size of the adenoma and whether surgical resection is possible prior to becoming pregnant currently there is a clinical trial called the pregnancy and liver adenoma management palm study that'
  • 'syndrome in adults are rare the recovery of adults with the syndrome is generally complete with liver and brain function returning to normal within two weeks of onsetin children mild to moderate to severe permanent brain damage is possible especially in infants over thirty percent of the cases reported in the united states from 1981 through 1997 resulted in fatality reye syndrome occurs almost exclusively in children while a few adult cases have been reported over the years these cases do not typically show permanent neural or liver damage unlike in the united kingdom the surveillance for reye syndrome in the united states is focused on people under 18 years of agein 1980 after the cdc began cautioning physicians and parents about the association between reye syndrome and the use of salicylates in children with chickenpox or viruslike illnesses the incidence of reye syndrome in the united states began to decline prior to the fdas issue of warning labels on aspirin in 1986 in the united states between 1980 and 1997 the number of reported cases of reye syndrome decreased from 555 cases in 1980 to about two cases per year since 1994 during this time period 93 of reported cases for which racial data were available occurred in whites and the median age was six years in 93 of cases a viral illness had occurred in the preceding threeweek period for the period 1991 – 1994 the annual rate of hospitalizations due to reye syndrome in the united states was estimated to be between 02 and 11 per million population less than 18 years of ageduring the 1980s a casecontrol study carried out in the united kingdom also demonstrated an association between reye syndrome and aspirin exposure in june 1986 the united kingdom committee on safety of medicines issued warnings against the use of aspirin in children under 12 years of age and warning labels on aspirincontaining medications were introduced united kingdom surveillance for reye syndrome documented a decline in the incidence of 
the illness after 1986 the reported incidence rate of reye syndrome decreased from a high of 063 per 100000 population less than 12 years of age in 1983 – 1984 to 011 in 1990 – 1991from november 1995 to november 1996 in france a national survey of pediatric departments for children under 15 years of age with unexplained encephalopathy and a threefold or greater increase in serum aminotransferase andor ammonia led to the identification of nine definite cases of reye syndrome 079 cases per million children eight of the nine children with reye syndrome were found to have been exposed to aspirin in part because of this survey result the french medicines agency reinforced the international attention'
  • 'cholecystitis is inflammation of the gallbladder symptoms include right upper abdominal pain pain in the right shoulder nausea vomiting and occasionally fever often gallbladder attacks biliary colic precede acute cholecystitis the pain lasts longer in cholecystitis than in a typical gallbladder attack without appropriate treatment recurrent episodes of cholecystitis are common complications of acute cholecystitis include gallstone pancreatitis common bile duct stones or inflammation of the common bile ductmore than 90 of the time acute cholecystitis is caused from blockage of the cystic duct by a gallstone risk factors for gallstones include birth control pills pregnancy a family history of gallstones obesity diabetes liver disease or rapid weight loss occasionally acute cholecystitis occurs as a result of vasculitis or chemotherapy or during recovery from major trauma or burns cholecystitis is suspected based on symptoms and laboratory testing abdominal ultrasound is then typically used to confirm the diagnosistreatment is usually with laparoscopic gallbladder removal within 24 hours if possible taking pictures of the bile ducts during the surgery is recommended the routine use of antibiotics is controversial they are recommended if surgery cannot occur in a timely manner or if the case is complicated stones in the common bile duct can be removed before surgery by endoscopic retrograde cholangiopancreatography ercp or during surgery complications from surgery are rare in people unable to have surgery gallbladder drainage may be triedabout 10 – 15 of adults in the developed world have gallstones women more commonly have stones than men and they occur more commonly after age 40 certain ethnic groups are more often affected for example 48 of american indians have gallstones of all people with stones 1 – 4 have biliary colic each year if untreated about 20 of people with biliary colic develop acute cholecystitis once the gallbladder is removed outcomes are 
generally good without treatment chronic cholecystitis may occur the word is from greek cholecyst meaning gallbladder and itis meaning inflammation most people with gallstones do not have symptoms however when a gallstone temporarily lodges in the cystic duct they experience biliary colic biliary colic is abdominal pain in the right upper quadrant or epigastric region it is episodic occurring after eating greasy or fatty foods and leads to nausea andor vomiting people with cholecystitis most commonly have symptoms of biliary colic before developing chole'
11
  • 'there is an established practice of using the electrical conductance of blood pv loops in heart ventricles to determine the instantaneous volume of the ventricle this technique involves inserting a tetrapolar catheter into the ventricle and measuring conductance this measured conductance is a combination of blood and muscle and various techniques are used to identify the blood conductance from the total measured conductance blood conductance can then be converted to volume using a linear baan or a nonlinear wei relationship that relates conductance to volume this approach is based on the idea that the total conductance g of a fluid between two electrodes is a function of the fluids conductivity reciprocal of resistivity and volumein cardiology a tetrapolar catheter is inserted into the ventricle and a constant current i is applied across the two outer electrodes this generates an electrical field within the ventricle and the two inner electrodes measure a voltage generated due to the electric field this measured voltage v is used to determine conductance through a modified version of ohms law conductance g is the reciprocal of resistance r which changes the standard ohms equation from vir to vigconductance is then related to blood volume though baans equation when used in cardiology the electric field generated is not limited to the blood the fluid of interest but also penetrates the heart wall giving rise to additional conductance often called parallel conductance or muscle conductance gm which must be removedvarious techniques have been attempted to remove the gm contribution with varying degrees of success the most common method is the hypertonic saline technique which involves injecting a bolus of hypertonic saline into the ventricle to alter blood conductivity without affecting the surrounding muscle another less commonly used technique involves evacuating the ventricle of blood and measuring muscle conductance alone with a conductance catheter clearly both techniques are unreliable somewhat invasive and fail to account for the continuous variation in gm over the cardiac cycle the admittance technique is an improvement over the conductance technique for the realtime removal of muscle conductance gm blood and muscle respond to alternating ac electrical currents very differently blood is purely resistive while muscle has both resistive and capacitive properties the fixed charges in muscle cells create a significant reactance that causes a phase shift time delay in the measured signal relative to the excitation signal admittance technology uses this phase shift to determine the instantaneous muscle conductance and remove it from the total measured conductance the total admittance y of the blood filled ventricle is'
  • 'patients waist for accurate ecg signal detection and the patient receives detailed training to ensure correct handling of the wcd the efficacy and effectiveness of the wcd has been tested in clinical trials and several international postmarketing studies if the wcd is worn correctly and ecg signal detection is optimal the success rate of the first shock is approximately 98 hence the wcd is as effective as an icd in treating vt and vf long term followup studies showed that approximately 90 of all patients treated with the wcd are still alive one year after the heart failure incident since the wcd is a noninvasive garment no injuries or scars remain after use and shock delivery for effective protection the wcd should be worn 24 hours a day and should only be removed for personal hygiene 1 automated external defibrillators aed are portable electronic devices designed to analyse the heart rhythm and inform the operator whether defibrillation is required they are intended for persons of the general population with an unknown risk for heart failure and are usually available in public places and first responder ambulances aeds are designed for use by laypersons and provide simple audio and visual instructions for the operator to follow electrode pads placed by an operator on the chest of the patient are for monitoring and defibrillation in contrast to the icd and wcd an aed needs the immediate activity of a bystander in order to prevent the scd 2 wcds are intended for patients with a known transient risk for scd and meant for temporary use as described abovethe wcd is the ideal therapeutic option to prevent scd in patients until it is clear that a patients heart issues are indeed permanent and longterm protection with an icd must be applied36 3 implantable cardioverterdefibrillators icd are electronic devices implanted in the chest with a lead to the right ventricle of the heart they are intended for patients with permanent risk for scd an icd is like a wcd designed to detect and terminate cardiac arrhythmias by emergency defibrillation an extensive invasive surgery is necessary for implantation of the icd which is associated with a number of risks and morbidity therefore the decision for an icd should be carefully taken the wcd is the ideal therapeutic option to prevent scd in patients until it is clear that a patients heart issues are indeed permanent and longterm protection with an icd must be applied the wcd allows patients at high risk for scd who are discharged from the hospital to return to most of the'
  • 'a bioartificial heart is an engineered heart that contains the extracellular structure of a decellularized heart and cellular components from a different source such hearts are of particular interest for therapy as well as research into heart disease the first bioartificial hearts were created in 2008 using cadaveric rat hearts in 2014 humansized bioartificial pig hearts were constructed bioartificial hearts have not been developed yet for clinical use although the recellularization of porcine hearts with human cells opens the door to xenotransplantation heart failure is one of the leading causes of death in 2013 an estimate of 173 million deaths per year out of the 54 million total deaths was caused by cardiovascular diseases meaning that 315 of the worlds total death was caused by this often the only viable treatment for endstage heart failure is organ transplantation currently organ supply is insufficient to meet the demand which presents a large limitation in an endstage treatment plan a theoretical alternative to traditional transplantation processes is the engineering of personalized bioartificial hearts researchers have had many successful advances in the engineering of cardiovascular tissue and have looked towards using decellularized and recellularized cadaveric hearts in order to create a functional organ decellularizationrecellularization involves using a cadaveric heart removing the cellular contents while maintaining the protein matrix decellularization and subsequently facilitating growth of appropriate cardiovascular tissue inside the remaining matrix recellularizationover the past years researchers have identified populations of cardiac stem cells that reside in the adult human heart this discovery sparked the idea of regenerating the heart cells by taking the stem cells inside the heart and reprogramming them into cardiac tissues the importance of these stem cells are selfrenewal the ability to differentiate into cardiomyocytes endothelial cells and smooth vascular muscle cells and clonogenicity these stem cells are capable of becoming myocytes which are for stabilizing the topography of the intercellular components as well as to help control the size and shape of the heart as well as vascular cells which serve as a cell reservoir for the turnover and the maintenance of the mesenchymal tissues however in vivo studies have demonstrated that the regenerative ability of implanted cardiac stem cells lies in the associated macrophagemediated immune response and concomitant fibroblastmediated wound healing and not in their functionality since these effects were observed for both live and dead stem cells the preferred method to remove all cellular components from a heart is perfusion decellularization this technique involves perfusing'

Evaluation

Metrics

| Label | F1     |
|:------|:-------|
| all   | 0.7653 |
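
The card does not state how the 0.7653 figure is averaged across the 43 labels. As an illustrative reference only (not the evaluation script used for this model), here is a from-scratch micro-averaged F1, which pools true/false positives and negatives over all classes and, for single-label multiclass predictions, reduces to accuracy:

```python
def micro_f1(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN across all classes; for single-label
    # multiclass data every misclassification is one FP and one FN, so
    # micro-F1 coincides with accuracy.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp
    fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy labels, purely for illustration.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]
print(round(micro_f1(y_true, y_pred), 6))  # 0.8
```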

Uses

Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-2e-1000samples")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
```

Training Details

Training Set Metrics

| Training set | Min | Median   | Max |
|:-------------|:----|:---------|:----|
| Word count   | 1   | 369.5217 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 830                   |
| 1     | 584                   |
| 2     | 420                   |
| 3     | 927                   |
| 4     | 356                   |
| 5     | 374                   |
| 6     | 520                   |
| 7     | 364                   |
| 8     | 422                   |
| 9     | 372                   |
| 10    | 494                   |
| 11    | 295                   |
| 12    | 558                   |
| 13    | 278                   |
| 14    | 314                   |
| 15    | 721                   |
| 16    | 417                   |
| 17    | 379                   |
| 18    | 357                   |
| 19    | 370                   |
| 20    | 337                   |
| 21    | 373                   |
| 22    | 661                   |
| 23    | 754                   |
| 24    | 312                   |
| 25    | 481                   |
| 26    | 386                   |
| 27    | 556                   |
| 28    | 551                   |
| 29    | 840                   |
| 30    | 574                   |
| 31    | 470                   |
| 32    | 284                   |
| 33    | 311                   |
| 34    | 633                   |
| 35    | 318                   |
| 36    | 687                   |
| 37    | 848                   |
| 38    | 668                   |
| 39    | 721                   |
| 40    | 603                   |
| 41    | 747                   |
| 42    | 336                   |
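
The per-label counts above are noticeably imbalanced (from 278 to 927 examples), which is presumably what the `oversampling` strategy listed under Training Hyperparameters compensates for during contrastive pair generation. A small sketch summarizing the table (counts transcribed from above):

```python
# Training sample counts per label, transcribed from the table above.
label_counts = {
    0: 830, 1: 584, 2: 420, 3: 927, 4: 356, 5: 374, 6: 520, 7: 364,
    8: 422, 9: 372, 10: 494, 11: 295, 12: 558, 13: 278, 14: 314,
    15: 721, 16: 417, 17: 379, 18: 357, 19: 370, 20: 337, 21: 373,
    22: 661, 23: 754, 24: 312, 25: 481, 26: 386, 27: 556, 28: 551,
    29: 840, 30: 574, 31: 470, 32: 284, 33: 311, 34: 633, 35: 318,
    36: 687, 37: 848, 38: 668, 39: 721, 40: 603, 41: 747, 42: 336,
}
total = sum(label_counts.values())                   # examples overall
rarest = min(label_counts, key=label_counts.get)     # label 13 (278)
commonest = max(label_counts, key=label_counts.get)  # label 3 (927)
print(total, rarest, commonest)
```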

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (2, 8)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 10
  • body_learning_rate: (2e-05, 0.01)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • max_length: 512
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
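
For reproduction, these settings can be written out programmatically. The sketch below mirrors the fields of `setfit.TrainingArguments` as a plain mapping (an assumption based on the SetFit 1.0.3 API listed under Framework Versions; it was not verified against this exact run). Tuple values configure the two training phases separately: (embedding fine-tuning, classifier head).

```python
# Hyperparameters from the list above, expressed as a mapping whose keys
# mirror setfit.TrainingArguments fields (assumption: SetFit 1.0.3 API).
training_args = {
    "batch_size": (16, 16),            # (embedding phase, classifier phase)
    "num_epochs": (2, 8),
    "max_steps": -1,
    "sampling_strategy": "oversampling",
    "num_iterations": 10,
    "body_learning_rate": (2e-05, 0.01),
    "head_learning_rate": 0.01,
    "loss": "CosineSimilarityLoss",
    "distance_metric": "cosine_distance",
    "margin": 0.25,
    "end_to_end": False,
    "use_amp": False,
    "warmup_proportion": 0.1,
    "max_length": 512,
    "seed": 42,
    "eval_max_steps": -1,
    "load_best_model_at_end": True,
}
```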

Training Results

| Epoch      | Step      | Training Loss | Validation Loss |
|:----------:|:---------:|:-------------:|:---------------:|
| 0.0000     | 1         | 0.3429        | -               |
| 0.0917     | 2500      | 0.0568        | -               |
| 0.1835     | 5000      | 0.0047        | -               |
| 0.2752     | 7500      | 0.0038        | -               |
| **0.3669** | **10000** | **0.0786**    | **0.0777**      |
| 0.4586     | 12500     | 0.0468        | -               |
| 0.5504     | 15000     | 0.0012        | -               |
| 0.6421     | 17500     | 0.0123        | -               |
| 0.7338     | 20000     | 0.0883        | 0.0908          |
| 0.8256     | 22500     | 0.002         | -               |
| 0.9173     | 25000     | 0.0015        | -               |
| 1.0090     | 27500     | 0.001         | -               |
| 1.1008     | 30000     | 0.0046        | 0.1097          |
| 1.1925     | 32500     | 0.0077        | -               |
| 1.2842     | 35000     | 0.0137        | -               |
| 1.3759     | 37500     | 0.0003        | -               |
| 1.4677     | 40000     | 0.0001        | 0.1095          |
  • The bold row denotes the saved checkpoint.
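
With `load_best_model_at_end` enabled, the saved checkpoint corresponds to the lowest recorded validation loss. A sketch selecting it from the evaluated rows (values transcribed from the table above; rows showing "-" had no validation pass):

```python
# Evaluation rows (epoch, step, validation loss) transcribed from the
# training results table above.
eval_rows = [
    (0.3669, 10000, 0.0777),
    (0.7338, 20000, 0.0908),
    (1.1008, 30000, 0.1097),
    (1.4677, 40000, 0.1095),
]
# Pick the checkpoint with the minimum validation loss.
best_epoch, best_step, best_loss = min(eval_rows, key=lambda r: r[2])
print(best_epoch, best_step, best_loss)  # 0.3669 10000 0.0777
```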

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.40.1
  • PyTorch: 2.2.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Safetensors

  • Model size: 109M params
  • Tensor type: F32