Essay // Developmental Psychology: The 3 Major Theories of Childhood Development

Updated on Wednesday, 14 April 2021


Source: An Introduction to Developmental Psychology by Slater & Bremner (Blackwell:Oxford, 2nd Edn, 2011)

It is fundamental to understand that, as human beings, at whatever stage of our lives we find ourselves, we first of all need a strong foundation in order to function fully in our daily lives and in any other activity. That foundation is our brain; hence, if our brain [i.e. the hardware] is not physiologically within the limits of what is deemed fit and healthy, every aspect of our mind, and of our lives, will be affected. There is no psyche [mind] without a brain, because this biological hardware, given to us by nature over the course of the shared evolutionary history of primates on planet Earth, allows us to experience every aspect of our lives, both physical and psychical [i.e. mental].

So, before diving deeper into children’s development, we are going to explore this link between brain and behaviour in order to ground the importance of a healthy brain for healthy development and a healthy, fulfilling life. We start with how brain damage can affect our personalities and mental abilities by looking at the Frontal lobe, the part of the brain behind our forehead responsible for problem solving, strategic planning, the use of environmental instructions to shift procedures, and the inhibition of impulsivity.


(Photo: Jez C Self / Frontal Lobe Gone)

 

Frontal Lobes (& Frontal Lobe Damage)

The Wisconsin Card Sorting Test (WCST; Grant & Berg, 1948; Heaton, Chelune, Talley, & Curtis, 1993) has long been used in neuropsychology and is among the most frequently administered neuropsychological instruments (Butler, Retzlaff, & Vanderploeg, 1991).

The test was specifically devised to assess executive functions mediated by the frontal lobes such as problem solving, strategic planning, use of environmental instructions to shift procedures, and the inhibition of impulsivity. Some neuropsychologists however, have questioned whether the test can measure complex cognitive processes believed to be mediated by the Frontal lobes (Bigler, 1988; Costa, 1988).

The WCST, to this day, remains widely used in clinical settings, as frontal lobe injuries are common worldwide. Performance on the WCST is believed to be particularly sensitive in detecting possible frontal lobe damage (Eling, Derckx, & Maes, 2008). On each Wisconsin card, patterns composed of one, two, three or four identical symbols are printed. The symbols are stars, triangles, crosses or circles, and are coloured red, blue, yellow or green.

At the start of the test, the patient is presented with four stimulus cards that differ from one another in the colour, form and number of the symbols they display. The participant’s aim is to correctly sort cards from a deck into piles in front of the stimulus cards. However, the participant is not told whether to sort by form, colour or number. The participant generally starts by guessing and is told, after each card has been sorted, whether it was correct or incorrect.

The first sorting rule is generally colour; however, as soon as several correct responses in a row are registered, the sorting rule is changed to shape or number without any notice, other than the fact that responses based on colour suddenly become incorrect. As the test continues, the sorting principle is changed each time the participant has learned the current one.

It has been noted that those with damage to the frontal lobe area often continue to sort according to one particular sorting principle for 100 or more trials, even after that principle has been signalled as incorrect (Demakis, 2003). For these patients, the ability to use new instructions to guide effective behaviour is severely impaired: a problem known as ‘perseveration’.
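To make the procedure concrete, here is a minimal, hypothetical Python sketch of the WCST logic [purely illustrative, not the clinical instrument]: the sorting rule rotates silently after a run of correct sorts, and a simulated ‘perseverating’ participant who keeps sorting by colour accumulates errors once the rule changes. The function names and the switch-after-six-correct criterion are assumptions for brevity [the standard test uses longer runs and a full 128-card deck].

```python
import random

COLOURS = ["red", "blue", "yellow", "green"]
SHAPES = ["star", "triangle", "cross", "circle"]
NUMBERS = [1, 2, 3, 4]

def run_wcst(respond, n_trials=60, streak_to_switch=6, seed=0):
    """Administer a toy WCST: the sorting rule silently rotates
    (colour -> shape -> number) after `streak_to_switch` consecutive
    correct sorts; the participant only ever hears right/wrong."""
    rng = random.Random(seed)
    rules = ["colour", "shape", "number"]
    rule_idx, streak, results = 0, 0, []
    for _ in range(n_trials):
        card = {"colour": rng.choice(COLOURS),
                "shape": rng.choice(SHAPES),
                "number": rng.choice(NUMBERS)}
        chosen_rule = respond(card, results)      # dimension the participant sorts by
        correct = (chosen_rule == rules[rule_idx])
        results.append(correct)
        streak = streak + 1 if correct else 0
        if streak == streak_to_switch:            # unannounced rule change
            rule_idx = (rule_idx + 1) % len(rules)
            streak = 0
    return results

# A perseverating "frontal" participant: sorts by colour regardless of feedback.
results = run_wcst(lambda card, feedback: "colour")
print(f"{sum(results)} of {len(results)} correct")  # -> 6 of 60 correct
```

Once the rule silently shifts away from colour, every subsequent sort is wrong: the error pattern Demakis (2003) describes.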

Another widely used test is the ‘Stroop Task’, which assesses a patient’s ability to respond to the ink colours of displayed words under alternating instructions. Frontal patients are known to perform poorly when the instructions change. As the central executive is part of the frontal lobe, other problems can arise, such as catatonia – a condition where patients remain motionless and speechless for hours, unable to initiate action. Distractibility has also been observed, where sufferers are easily distracted by external or internal stimuli. Lhermitte (1983) also observed ‘Utilisation Behaviour’ in some patients with Dysexecutive Syndrome (Norman & Shallice, 1986), who would pathologically grab and use whatever objects were available to them.
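The structure of a Stroop trial can be sketched in a few lines [a toy illustration, not a clinical protocol; the names and colour set are assumptions]: each trial pairs a colour word with an ink colour, and the instruction is to name the ink, which on incongruent trials requires inhibiting the automatic habit of reading the word.

```python
import random

COLOURS = ["red", "blue", "green", "yellow"]

def stroop_trials(n, seed=1):
    """Generate toy Stroop trials: a colour word printed in some ink
    colour. The task is to name the INK, which on incongruent trials
    means suppressing the prepotent response of reading the word."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        word, ink = rng.choice(COLOURS), rng.choice(COLOURS)
        trials.append({
            "word": word,
            "ink": ink,
            "congruent": word == ink,   # interference arises when False
            "correct_answer": ink,      # name the ink, not the word
        })
    return trials

for t in stroop_trials(4):
    print(t)
```

A scoring scheme for such trials would compare response times on congruent versus incongruent trials; the gap between the two is the classic interference effect.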

 

Incomplete Frontal Lobe Development & Impulsiveness in Children

Image: PsyBlog

The Frontal lobe, responsible for most executive functions and attention, has been shown to take years [at least 20] to develop fully. The Frontal lobe [located behind the forehead] is responsible for thought and voluntary behaviour such as motor skills, emotions, problem-solving and speech.

In childhood, as the frontal lobe develops, new functions are constantly added; the brain’s activity in childhood is so intense that it uses nearly half of the calories the child consumes during development.

As the Pre-Frontal Lobe/Cortex is believed to take at least 20 years to reach maturity (Diamond, 2002), children’s impulsiveness seems to be linked to neurological factors within the Pre-Frontal Lobe/Cortex; particularly, their [sometimes] inability to inhibit responses.

The idea was supported by the developmental psychologist and philosopher Jean Piaget [known for his epistemological studies], whose Theory of Cognitive Development showed that the A-not-B error [also known as the “stage 4 error” or “perseverative error”] is mostly made by infants during substage 4 of their sensorimotor stage.

Researchers used two boxes, marked A and B. The experimenter repeatedly hid a visually attractive toy under Box A, within the infant’s reach [for the latter to find]. After the infant had been conditioned to look under Box A, in the critical trial the experimenter moved the toy under Box B.

Children of 10 months or younger make the “perseverative error” [they look under Box A despite having plainly seen the experimenter move the toy under Box B], demonstrating an incomplete schema of object permanence [unlike adults with fully developed Frontal lobes].
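The competition between a practised motor habit and immature response inhibition can be caricatured in a few lines of Python [a purely illustrative toy model of our own, not Piaget’s or Diamond’s formalism; the function name and all the numbers are arbitrary assumptions]:

```python
def a_not_b_reach(hidings_at_a, inhibition):
    """Toy model of the A-not-B task. Each hiding at location A
    strengthens a motor habit of reaching toward A; on the critical
    trial (toy visibly moved to B) the infant reaches to B only if
    response inhibition outweighs the accumulated habit.
    `inhibition` is a stand-in for prefrontal maturity, in [0, 1]."""
    habit_strength = 1 - (1 - 0.3) ** hidings_at_a  # saturates toward 1.0
    return "B" if inhibition > habit_strength else "A"

# A young infant [weak inhibition] commits the perseverative error:
print(a_not_b_reach(hidings_at_a=5, inhibition=0.2))   # -> A
# With mature frontal inhibition, the habit is overridden:
print(a_not_b_reach(hidings_at_a=5, inhibition=0.95))  # -> B
```

The point of the sketch is simply that the same visible evidence [toy under B] yields different reaches depending on how strong the inhibitory parameter is, mirroring the developmental account above.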


Frontal lobe development in adults has been compared with that in adolescents [e.g. Sowell et al. (1999); Giedd et al. (1999)], with researchers noting differences in grey matter volume and in white matter connections. Adolescents’ response inhibition and executive attention are likely to perform less strongly than adults’. There has also been growing interest in research on the adolescent brain, where great differences in some areas are being discovered.

The Pre-Frontal Lobe/Cortex [located behind the forehead] is essential for ‘mentalising’ in complex social and cognitive tasks. Wang et al. (2006) and Blakemore et al. (2007) provided further evidence of differences in Pre-Frontal Lobe activity during ‘mentalising’ between adolescents and adults. Anderson, Damasio et al. (1999) also noted that patients with very early damage to their frontal lobes suffered throughout their adult lives.


Two subjects with Frontal Lobe damage were studied:

1) Subject A: a 20-year-old female patient who had sustained damage to her Frontal lobe at 15 months of age. She was observed to be disruptive throughout her adult life; she also lied, stole, and was verbally and physically abusive to others; she had no career plans and was unable to remain in employment.

2) Subject B: a 23-year-old male who had sustained damage to his Frontal lobe at 3 months of age. He turned out to be unmotivated and emotionally flat with bursts of anger; he slacked in front of the television while comfort eating, became obese with poor hygiene, and could not maintain employment. [However…]

Reflections

While research and tests have demonstrated the link between frontal brain damage and personality traits & mental abilities, the physiological defects of the frontal lobe would likely be linked to certain traits deemed negative for a subject wishing to be a functional member of society [generally Western societies].

However, personality traits similar to those of Subjects A & B may in fact not always be linked to deficiency and/or damage to the frontal lobes. Many other factors must be considered when assessing the behaviour & personality traits of subjects: violence and a short temper, for example, may at times be linked to a range of environmental events during development, or to other mental strains such as sustained stress, emotional deficiencies due to abnormal brain neurochemistry, genetics, or other factors that may lead to intense emotional reactivity [such as provocation, or certain themes/topics that have high emotional salience for particular subjects, ‘passion’].

 

THE 3 MAJOR THEORIES OF DEVELOPMENT

In 1984, Nicholas Humphrey described us as “nature’s psychologists” or Homo psychologicus. What he meant was that, as intelligent social beings, we tend to use our knowledge of our own thoughts and feelings – “introspection” – as a guide for understanding how others are likely to think, feel and, hence, behave. He also argued that we are conscious [i.e. we have self-awareness] precisely because such an attribute is useful in the process of understanding others and having a successful social existence – consciousness is a biological adaptation that enables us to perform introspective psychology. Today, we are confident in the knowledge that the process of understanding others’ thoughts, feelings and behaviour is an ability that develops through childhood and most likely throughout our lives; and according to the greatest child psychologist of all time, Jean Piaget, a crucial phase of this process occurs in middle childhood.

Developmental psychology can be characterised as the field that attempts to understand and explain the changes that happen over time in the thought, behaviour, reasoning and functioning of a person due to biological, individual and environmental influences. Developmental psychologists study children’s development, and the development of human behaviour across the organism’s lifetime, from a variety of different perspectives. Hence, when studying different areas of development, different theoretical perspectives are fundamental and may influence the ways psychologists and scholars think about and study development.

Through the systematic collection of knowledge and experiments, we can develop a greater understanding and awareness of ourselves than would otherwise be possible.

 

Focussing on changes with time

The newborn infant is a helpless creature, with limited communication skills and few abilities. By 18–24 months – the end of the period of infancy – this scenario changes. The child has now formed relationships with others, has gained knowledge about aspects of the physical world, and is about to undergo a vocabulary explosion as language development leaps ahead. By adolescence, the child has turned into a mature, thinking individual actively striving to come to terms with a fast-changing and complex society.

Two important contributors to development are maturation and the changes resulting from experience that intervene between the different ages and stages of childhood. The term maturation refers to those aspects of development that are primarily under genetic control, and which are relatively uninfluenced by the environment. An example would be puberty: although its onset can be affected by environmental factors such as diet, the changes that occur are genetically determined.

 

Development Observed

The biologist Charles Darwin, notable for his theory of evolution, made one of the earliest contributions to our understanding of child psychology in his article “A biographical sketch of an infant” (1877), which was based on observations of his own son’s development. By the early 20th century, much of our understanding of psychological development was still not based on scientific methodology, resting instead on anecdotes and opinions from qualitative analysis, a method that strict empiricists have never managed to grasp or like. Nevertheless, knowledge was still being organised through both observation and experiment, and during the 1920s and 1930s the study of child development started to grow as a movement, particularly in the USA, with the founding of Institutes of Child Study or Child Welfare in university centres such as Iowa and Minnesota. Minute observations were made of young children in their developmental phases, covering normal and abnormal behaviour and adjustment. In the 1920s Jean Piaget started his long and passionate career in child psychology, blending observation and experiment in his studies of children’s thinking.

Observation in naturalistic settings was soon criticised by the empiricists of the behavioural movement in the 1940s and 1950s [although it continued to be the method of choice in the study of animal behaviour by zoologists]. This led many psychologists to carry out their experiments under laboratory conditions with statistical methods; such experiments, although they come with some advantages from the perspective of empirical statistics, do have limitations and drawbacks [e.g. in measuring qualitative aspects of personality such as emotions, values, etc.]. It should be noted that much of the laboratory work on child development from the 1950s and 1960s has been described by Urie Bronfenbrenner (1979) as “the science of the behaviour of children in strange situations with strange adults”.

Schaffer (1996, pp. xiv–xvii) notes other changes in the ways in which psychologists now approach child development, such as the importance of understanding the processes by which children grow and develop rather than simply the outcomes, and of integrating findings from a range of sources at different levels of analysis – for example meaningful others, community [geography, socio-linguistics, arts, etc.] and culture [religion, nationality(ies), education, class, etc.].

In the course of this essay, we will be integrating perspectives to make the most of the findings in distinguishing differences in personality, by reflecting on the links to be made by psychologists between the concept of the child’s “internal working model of relationships” and discoveries about the “theory of mind”.

It is fundamental to acknowledge that psychology itself is mostly based on accurate approximation rather than precision, owing to the statistical methods used and the problematic nature of the qualitative variables measured. With this in mind, we should accept the complementary virtues of different methods of investigation and gain a sense that the child’s process of development and the socio-behavioural context in which they exist are closely intertwined, each having an influence on the other.

 

Defining development according to world views

Intellectuals and researchers who study development also hold different views on the topic; that is, the way in which development is defined, and the areas of development that interest individual researchers, generally orient them towards specific methodologies and philosophies when studying development.

We are now going to look at the two main world views in the study of development. Psychologists hold one view or the other, or sometimes combine elements of both, as we do ourselves, while remaining firmly on the organic perspective of development and construction.

A world view [also known as a paradigm, model, or world hypothesis] can be characterised as “a philosophical system of thinking, perceiving and feeling [ideas and more] that serves to organise a set or family of scientific theories and associated scientific methods” (1986, p. 42).

They are beliefs we adopt because they align with our values; these are qualitative and often not open to common reductive empirical tests – that is precisely why we believe them!

Lerner and others note that many developmental theories appear to fall under one of two world views: organismic and mechanistic.

 

Organismic World View

The organismic world view, which is the main view that we adopted as the foundation of the Organic Theory, is one that sees a human being as a biological organism that is inherently active and continually interacting with the environment [in all its aspects and dimensions], thereby helping to shape its own development. The organismic world view emphasises the interaction between maturation and experience that leads to the development of new internal, psychological structures for processing environmental input (e.g. Gestsdottir & Lerner, 2008).

As Lerner states: “The Organismic model stresses the integrated structural features of the organism. If the parts making up the whole become reorganised as a consequence of the organism’s active construction of its own functioning, the structure of the organism may take on a new meaning; thus qualitatively distinct principles may be involved in human functioning at different points in life. These distinct, or new, levels of organisation are termed stages…” (p. 57). A good analogy is the qualitative change that takes place when the molecules of two gases, hydrogen and oxygen, combine to form a liquid, water. Many other qualitative changes happen to water as it passes from frozen (ice) to liquid (water) to steam (vapour). Depending on the temperature, these qualitative changes in the state of water are easily reversed, but in human development the qualitative changes that take place are very rarely, if ever, reversible – that is, each new stage represents an advance on the previous stage, and the organism [human being] does not regress to former stages.

Irreversible

The main argument is that the new stage is not simply reducible to components of the previous stage; it represents new characteristics that were not present in the previous stage.

For example, the organism appears to pass through structural changes during foetal development [See Picture A].


PICTURE A. Development of the human foetal brain / Source: Adapted from J. H. Martin (2003), Neuroanatomy: Text and Atlas (3rd ed., p. 51). Stamford, CT: Appleton & Lange.

In the initial stage [Period of the Ovum – the first few weeks after conception] cells multiply and form clusters; in the second stage [Period of the Embryo – 2–8 weeks] the major body parts are formed by cell multiplication, specialisation and migration, as well as cell death; in the last stage [Period of the Foetus] the body parts mature and begin to operate as an integrated system [e.g. head orientation towards and away from stimulation, arm extensions and grasping, thumb sucking, startles to loud noises, and so on (Fifer, 2010; Hepper, 2007)]. It is important to understand that similar stages of psychological development are postulated to happen after birth as well, and that the individual emerges from each stage different, with new abilities that cannot be reversed.

Jean Piaget is perhaps the greatest and best example of a successful organismic theorist. Piaget suggested that cognitive development occurs in stages and that the reasoning of the child at one stage is qualitatively different from that of the earlier or later stages.


“Each civilisation forges a myth intended to explain its origins and builds its written tradition around a privileged medium” / Discover (links): (i) “l’aventure des écritures” and (ii) “l’aventure du livre” | Source: La Bibliothèque Nationale de France (BNF)

The main job of the developmental psychologist who believes in the organismic world view [like ourselves] is to determine when [i.e., at what age] different psychological stages operate, what variables and processes represent the differences between stages, and what determines the transition between them.

 

Mechanistic World View

From the mechanistic world view, it is assumed that a person can be broken down into components and represented as being like a machine [such as a computer], which is inherently passive until stimulated by the environment [this view seems more in line with the early British thinkers about the brain]. Human behaviour is reducible to the operation of fundamental behavioural units [e.g. habits] that are acquired in a progressive, cumulative manner. The mechanistic view assumes that the frequency of behaviours can increase with age due to various learning processes, and decrease with age when they no longer have any functional consequence or when they lead to negative consequences [such as punishment]. The developmental psychologist’s job here is to study environmental factors, or principles of learning, which determine the way organisms respond to stimulation, and which result in increases, decreases, and changes in behaviour.

Quite unlike the organismic world view, the mechanistic world view sees development as a more continuous growth function rather than as occurring in qualitatively different stages, and the child is believed to be passive rather than active in shaping its own development and its environment. This mechanistic view is generally embraced by behaviourists and cognitive-behaviourists who operate on a reductionist philosophy, based on the limitations of the scientific method when faced with understanding psychology and the mechanisms of mind; instead they tend to focus on measurable behaviour and treat the brain as an information-processing centre with a logic highly similar to a computer’s. The mechanistic view, while fairly grotesque in its reductionist values, has proven very practical in the study of human–machine interaction; along with new cognitive methods, it has helped to enhance the design of technological equipment to improve human experience in a wide range of areas.

As for us, we stand mostly within the organismic school of thought but refuse to dismiss all of the mechanistic world view’s elements, because some of them can be embedded as secondary cognitive processes carried out by the conscious or preconscious areas of the mind when appraising stimuli from an organism’s environment. Hence, some elements can help in understanding interaction with basic objects and elements of an organism’s “external” [not internal] environment, but to base our thoughts and behaviour fully on a mechanistic world view would arguably be irrationally reductionist.

 

Theories of Development

 

“Es gibt nichts Praktischeres als eine gute Theorie.”

– Immanuel Kant (1724 – 1804)

 

“There is nothing so practical as a good theory.”

-Kurt Lewin (1944, p. 195)

 

Human development is complex and it would be irrational to expect a single universal theory of development that could do justice to this complexity, and indeed no theory of development attempts to do so. Each theory attempts to account for only a limited range of development and it is often the case that within each area of development there are competing theoretical views, each attempting to account for the same aspects of development. We shall see below some of this complexity and conflict in our account of different theoretical views.

First of all, it would be helpful to understand what is implied by a “Theory” in the field of developmental psychology. A theory of development is a scheme or system of ideas that is generally based on evidence and attempts to explain, describe and predict behaviour and development. So, from this account, it is quite clear that a theory aims to bring order to what might otherwise be a chaotic mass of information – and hence why there may indeed not be anything more practical than a good theory.

We usually deal with at least two kinds of theory in every area of development: minor theories [generally concerned with very specific and narrow areas of development, such as eye movements or the origins of pointing], and major theories, which are the ones we are primarily interested in, as they attempt to explain large areas of development.

For the purpose of this essay, they have been divided into 3 groups, with cognition, emotion and motivation in focus:

(I) The Theory of Cognitive Development of Jean Piaget


(II) The Theory of Attachment in Emotional Development by John Bowlby


(III) The Genetic/Psychosexual Model of Development by Sigmund Freud

 

__________

 

(I) The Theory of Cognitive Development (Jean Piaget)

The theory of cognitive development we are interested in is that of Jean Piaget, who saw children as active agents in shaping their own development, and not simply blank slates who passively and unthinkingly respond to whatever the environment throws at them or treats them to [an assumption that is insulting to human intelligence, which is why we do not subscribe blindly to the passive school of thought but only consider some of its elements related to very basic cognitive processes].

This suggests that children’s behaviour and development is motivated largely intrinsically (internally) rather than extrinsically (externally).

For Piaget and intellectuals with a firm belief in the mind as an active entity, children learn to adapt to their environment, and as a result of their cognitive adaptations they become better able to understand their world. Adaptation is something all living organisms have evolved to do, and as children adapt, they also gradually construct more advanced understandings [internal working models] of their worlds.

These more advanced understandings of the world reflect themselves in the appearance of new stages of development. Piaget’s theory is the best and most accomplished example of the organismic world view: it portrays children as inherently active, continually interacting with various dimensions of their environments in such a way as to shape their own development.

With this assumption in mind, Piaget’s theory is also often referred to as the Constructivist Theory.

 

Piaget’s Theory of Cognitive Development (0 – 12 yrs)

Jean Piaget’s theory developed out of his early interest in observing animals in their natural environment. Piaget published his first article at the age of 10 [a description of an albino sparrow that he had observed in the park], and before the age of 18, journals had accepted several of his papers about molluscs. During his adolescent years, the young theorist developed a keen interest in philosophy, particularly “epistemology” [the branch of philosophy concerned with knowledge and its acquisition]. However, his undergraduate studies were in the field of biology, and his doctoral dissertation was, once again, on molluscs.

For a short while, Piaget worked at Bleuler’s psychiatric clinic, where his interest in psychoanalysis grew. As a result, he moved to France and attended the Sorbonne in 1919 to study clinical psychology, while also pursuing his interest in philosophy. In Paris, he worked in the Binet Laboratory with Theodore Simon on the standardisation of intelligence tests. Piaget’s task was to monitor children’s correct responses to test items; instead, he became much more interested in the mistakes that children made, and developed the idea that the study of children’s errors could provide an insight into their cognitive processes.

Piaget came to realise that, through the process and discipline of psychology, he had an opportunity to create links between epistemology and biology. Through the integration of the disciplines of psychology, biology and epistemology, Piaget aimed to develop a scientific approach to the understanding of knowledge – the nature of knowledge and the ways in which an organism acquires it. As a man who valued richness and detail, Piaget was not at all impressed by the reductionist quantitative methods used by the empiricists of the time; however, he was influenced by the work on developmental psychology of Binet, a French psychologist who had pioneered studies of children’s thinking [whose method of observing children in their natural setting was one that Piaget followed himself when he left the Binet laboratory].

Piaget later integrated his own experience of psychiatric work in Bleuler’s clinic with the observational and questioning strategies that he had learned from Binet. Out of this fusion of techniques emerged the “Clinical Interview” [an open-ended, conversational technique for eliciting children’s thinking (cognitive) processes]. It was the child’s own subjective judgement and explanation that interested Piaget; he was not testing a particular hypothesis, but rather looking for an explanation of how the child comes to understand his or her world. The method is not simple, and Piaget’s team of researchers had to be trained for a year before they actually started collecting data, educated in the “art” of asking the right questions and testing the truth of what the children said.

Piaget’s career was devoted to the quest for the mechanisms guiding biological adaptation, and also the analysis of logical thought [that derives from these adaptations and interaction with the exterior environment] (Boden, 1979). He wrote more than 50 books and hundreds of articles, correcting many of his earlier ideas in later life. At its core, the theory of Jean Piaget is concerned with the human need to discover and acquire deeper understanding and knowledge.

Piaget’s incredible output of concepts and ideas characterises his attitude towards constant construction and reconstruction of his theoretical system, which was quite consistent with his philosophy of knowledge, and perhaps indirectly to the school of thought of the mind as an “active” entity.

This section will explore the model of cognitive structure developed by Piaget, along with the modifications and some of the re-interpretations that subsequent Piagetian researchers have made to the master’s initial ideas. Although many details have been questioned, Piaget’s contribution to our understanding of the thinking [cognitive] processes of both children and adults is undeniable.

One great argument made by the theorist is that if we are to understand how children think, we ought to look at the qualitative development of their problem-solving abilities.

Two famous examples from Piaget’s experiments will be considered that explore the thinking processes in children, showing how they develop more sophisticated problem-solving skills.

Example 1 – One of Piaget’s dialogues with a 7-year-old

Adult:    Does the moon move or not?
Child:    When we go, it goes.
Adult:    What makes it move?
Child:    We do.
Adult:    How?
Child:    When we walk. It goes by itself.

(Piaget, 1929, pp. 146-7)

From this example and other observations on a similar theme, Piaget described a particular period in childhood which is marked by egocentrism. Since the moon appears to move with the child, she concludes that it does indeed do so. But as the child grows and her sense of logic develops, there is a shift away from her own egocentric perspective, and the child starts to learn to differentiate between what she sees and what she “knows”. Gruber and Vonèche (1977) provide a good example of how an older child used his sense of logic to investigate the movement of the moon. This particular child sent his younger brother for a walk down the garden while he himself remained immobile. The younger child reported that the moon moved with him, but the older boy realised from his observation that the moon had not moved, and could then disprove his brother’s mistaken belief.

Example 2 – Estimating the Quantity of a Liquid

FA Piaget Liquid Quantity

FIGURE A. Estimating a quantity of liquid

This example is taken from Piaget’s research into children’s understanding of quantity. Let us assume that John [aged 4] and Mary [aged 7] are given a problem: two glasses, A and B, are of equal capacity [volume], but glass A is short and wide while glass B is tall and narrow [see Figure A]. Glass A is filled to a particular height and the children are then asked, separately, to pour liquid into glass B [tall and narrow] so that it contains the same amount as glass A. Despite the striking difference in the proportions of the two containers, John cannot grasp that the smaller diameter of glass B requires a higher level of liquid. To Mary, John’s response seems utterly senseless: of course one would have to add more to glass B. Piaget, interestingly, saw the depth of the argument contained in the responses of these children. John cannot “see” that the liquid in A and the liquid in B are not equal, because his thinking relies on a mechanism that is qualitatively different in its reasoning, one not yet fully developed [perhaps due to physiological/hardware limitations], and he lacks the mental operations that would have allowed him to solve the problem. Mary, the 7-year-old, finds it hard to understand 4-year-old John’s error and why he cannot perceive it.
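The geometry behind the task is simple: for equal volumes in two cylindrical glasses, the liquid height scales inversely with the cross-sectional area, so a narrower glass must be filled higher. A minimal sketch [the glass dimensions and volume are made up purely for illustration, not taken from Piaget’s experiment]:

```python
import math

def liquid_height(volume_ml: float, diameter_cm: float) -> float:
    """Height (cm) that a given volume reaches in a cylindrical glass.

    From V = pi * r^2 * h, we get h = V / (pi * r^2).
    """
    radius = diameter_cm / 2
    return volume_ml / (math.pi * radius ** 2)

volume = 200.0                                   # same amount of liquid in both glasses
wide = liquid_height(volume, diameter_cm=8.0)    # glass A: short and wide
narrow = liquid_height(volume, diameter_cm=4.0)  # glass B: tall and narrow

# Halving the diameter quarters the cross-section, so the level quadruples.
print(round(narrow / wide))  # → 4
```

This is exactly the relation John cannot yet coordinate: he attends only to the height of the liquid, not to the compensating change in width.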

Facing this situation, Piaget brilliantly proposed that the essence of knowledge is “activity” – a line of thought adopted by many psychologists and intellectuals of the German and French schools, including that of Lacan, and quite opposite to the early British empiricist view that assumed the mind to be “passive” and mostly shaped by the effects of the outside environment. This argument not only embraces human ingenuity and creativity and acknowledges our instinctual drives to thrive and succeed, but also characterises the mind as an entity with high creative power rather than a simple junction of neurons conditioned to react almost helplessly to stimuli from its environment, as the “passive” school assumed it to be. When Piaget [and we ourselves] say that the essence of knowledge is “activity”, this could refer to the infant directly manipulating objects and, in doing so, learning about their properties. It may also refer to a child pouring liquid from one glass to another to find out which has more in it, or to the adolescent forming hypotheses to solve a scientific problem. In all these examples, it is important to note that the child learns through “action”, whether physical (e.g. exploring a ball of clay) or mental (e.g. thinking of various outcomes and reflecting on what they mean). Piaget’s emphasis on activity was important in stimulating the child-centred approach to education, because he firmly believed that for lasting learning to occur, children must not only manipulate objects but also manipulate and define ideas. The major educational implications of Piaget’s theory will be discussed later in this section.

 

Assumptions of Piaget’s Theory of Development: Structure & Organisation

Through his carefully devised techniques, and using observations, dialogues and small-scale experiments, Piaget suggested that children progress through a series of stages in their thinking, each of which synchronises with major changes in the structure or logic of their intelligence. [See Table A]

TA Piaget - Stages of Intellectual Development

TABLE A. The Stages of Intellectual Development in Piaget’s Theory

Piaget named the main stages of development, in the order in which they occur, as:

I. The Sensori-Motor Stage [0 – 2 years]
II. The Pre-Operational Stage [2 – 7 years]
III. The Concrete Operational Stage [7 – 12 years]
IV. The Formal Operational Stage [from about 12 years, though this may vary from one child to another]

Piaget’s structures are sets of mental operations, which can be applied to objects, beliefs, ideas or anything in the child’s world, and these mental operations are known as “schemas”. The schemas are characterised as being evolving structures, in other words, structures that grow and change from one stage to the next.

The details of each of the four stages will be explored below; however, it is fundamental that we first understand Piaget’s concept of the unchanging or “invariant” [to use his own term – this may be related to temperament but here it involves another set of abilities] aspects of thought, which refers to the broad characteristics of intelligent activity that remain constant throughout the organism’s life.

These are the organisation of schemas and their adaptation through assimilation and accommodation.

Organisation: Piaget used this term to describe the innate ability to coordinate existing cognitive structures, or schemas, and combine them into more complex systems [e.g. a 3-month-old baby has gained the ability to combine looking and grasping with the earlier reflex of sucking]. The baby is able to perform all three actions together when feeding from her mother’s breast or a feeding bottle, an ability that the newborn child did not originally have in her repertoire. A further example would be Ben, who at the age of 2 had learned to climb downstairs while carrying objects without dropping them, and also to open doors. This meant that he could then combine all three operations to deliver newspapers to his grandmother in the basement flat. Note that the separate operations combine into a new action that is more complex than the sum of its parts.

The complexity of the organisation also grows as the schemas become more elaborate. Piaget described the development of a particular action schema in his son Laurent as he attempted to strike a hanging object. Initially, Laurent made only random movements towards the object, but by the age of 6 months these movements had become deliberate, focused and well directed. As Piaget described it, at 6 months old Laurent possessed the mental structure that guided the action involved in hitting a toy. Laurent had also gained the ability to accommodate his actions to the weight, size and shape of the toy and to its distance from him.

The next invariant function, adaptation, is characterised by the organism’s striving for balance [or equilibrium] with the environment, which is achieved through the further processes of “assimilation” and “accommodation”. During assimilation, the child’s repertoire of knowledge expands as he/she takes in [learns about] a new experience and fits it into an existing schema. For example, a child may learn the words “dog” and “car”, and then call all animals “dogs” [i.e. different animals are taken into a schema related to the child’s understanding of dog], or call all vehicles with four wheels “cars”. The process of accommodation balances this over-generalisation: the child adjusts an existing schema to fit in with the nature of the environment [i.e. from experience, the child begins to perceive that cats can be distinguished from dogs, and may develop schemas for these two different animals – and likewise that cars can be distinguished from other vehicles such as trucks or lorries].

By these two processes, namely assimilation and accommodation, the child achieves a new state of equilibrium which is however not permanent as this balance is generally soon upset as the child assimilates further new experiences or accommodates her existing schemas to another new idea.

Equilibrium only seems to prepare the child for more disequilibrium through further learning and adaptation; the two processes occur together and cannot be thought of separately. Assimilation provides the child with consolidation of her mental structures; accommodation results in growth and change. All adaptation contains components of both processes, and the striving for balance between assimilation and accommodation [Remember: Organisation; Adaptation = Assimilation + Accommodation] leads to the child’s intrinsic motivation to learn [this is also reminiscent of the psychodynamic school of thought, which likewise models the mental life of the individual as several processes colliding to find balance]. When new experiences are within the child’s response range in terms of abilities, conditions are at their best for change and growth to occur.


The Stages of Cognitive Development

To adherents of Piaget’s outlook, intellectual development is a continuous process of assimilation and accommodation. We will now describe the four stages identified in the development of cognition from birth to about 12 years old [in typically developing children]. The order is the same for all children, but the age at which each milestone is achieved may vary from one child to another – the stages being:

I. The Sensori-Motor Stage [0 – 2 years]
II. The Pre-Operational Stage [2 – 7 years]
III. The Concrete Operational Stage [7 – 12 years]
IV. The Formal Operational Stage [from about 12 years, though this may vary from one child to another]


I. The Sensori-Motor Stage (about 0 – 2 years) | Stage 1 of 4

During the sensori-motor stage the child changes from a newborn, who focuses almost entirely on immediate sensory and motor experiences, to a toddler who possesses a rudimentary capacity for thinking. Piaget described in detail the process by which this occurs by documenting his own children’s behaviour. On the basis of such observations, carried out over the first 2 years of life, Piaget divided the sensori-motor stage into 6 sub-stages. [See Table B]

TB Sub-stages of the sensori-motor period

TABLE B. Substages of the sensori-motor period according to Piaget

The first substage, reflex activity, included the reflexive behaviours and spontaneous rhythmic activity with which the infant is born. Piaget called the second substage primary circular reactions. He used the term “circular” to emphasise how children tend to repeat an activity, especially those that are pleasing or satisfying (e.g. thumb sucking). The term “primary” refers to simple behaviours that are derived from the reflexes of the first period [e.g. thumb sucking develops as the thumb is assimilated into a schema based on the innate suckling reflex].

Secondary circular reactions refer to the child’s willingness to repeat actions, but the word “secondary” is used here to point out the behaviours that are the child’s very own. In other words, she is not limited to just repeating actions based on early reflexes, but having initiated new actions, she can now repeat these if they are satisfying. However, at the same time, these actions tend to be directed outside the child (unlike simple actions like thumb sucking) and are aimed at influencing the environment around her.

This is Piaget’s description of his own daughter Jacqueline at 5 months old, kicking her legs (in itself a primary circular reaction) in what gradually develops into a secondary circular reaction, as the leg movement is repeated not just for its own sake but is initiated in the presence of a doll.

Jacqueline looks at a doll attached to a string which is stretched from the hood to the handle of the cradle. The doll is approximately the same level as the child’s feet. Jacqueline moves her feet and finally strikes the doll, whose movement she immediately notices… The activity of the feet grows increasingly regular whereas Jacqueline’s eyes are fixed on the doll. Moreover, when I remove the doll Jacqueline occupies herself quite differently; when I replace it, after a moment, she immediately starts to move her legs again.

(Piaget, 1936, p. 182)

In displaying such behaviours, Jacqueline seemed to have established a general relation between her movement and the doll’s, and was also engaged in a secondary circular reaction.

The fourth substage of the sensori-motor period is the coordination of secondary circular reactions. As the word “coordination” implies, it is at this substage that children begin to combine different behavioural schemas. In the following extract, Piaget described how his daughter (aged 8 months) combined several schemas, such as “sucking an object” and “grasping an object”, in a series of coordinated actions when playing with a new object:

Jacqueline grasps an unfamiliar cigarette case which I present to her. At first she examines it very attentively, turns it over, then holds it in both hands while making the sound apff (a kind of hiss which she usually makes in the presence of people). After that she rubs it against the wicker of her cradle then draws herself up while looking at it, then swings it above her and finally puts it in her mouth.

(Piaget, 1936, p. 284)

Jacqueline’s behaviour illustrates how a new object is assimilated to various existing schemas in the fourth substage. In the following stage, that of tertiary circular reactions, children’s behaviours become more flexible, and when they repeat actions they may do so with variations, which can lead to new results. By repeating actions with variations, children are, in effect, accommodating established schemas to new contexts and needs.

The final sub-stage of the sensori-motor period is known as the substage of internal representations, and it refers to the child’s achievement of mental representation. In the previous substages the child interacted with the world through her physical motor schemas; in other words, she acted directly on the world. In this final substage, she can also act “indirectly” on the world, because she has developed the capacity to hold mental representations of it – that is, she can now think and plan.

As evidence that children attain the level of mental representation, Piaget pointed out that by this substage children have a full concept of object permanence. Piaget noticed that very young infants ignored even highly attractive objects once they were out of sight [e.g. if a child reaching for a toy sees the toy suddenly covered with a cloth, she immediately loses all interest in it, makes no attempt to search for it, and might simply look away]. According to Piaget it was only in the later substages that children demonstrated an awareness [by searching and trying to retrieve the object] that the object was “permanently” present even if it was temporarily out of sight. Searching for an object that cannot be seen directly implies that the child has a memory of the object, i.e. a mental representation of it.

It is only towards the end of the sensori-motor period that children demonstrate novel patterns of behaviour in response to a problem. For example, if a child wants to reach for a toy but finds an object between herself and the desired toy, a younger child might just reach for the toy directly, possibly knocking over the intervening object in the process – best described as “trial and error” performance. In the later substages, the child might instead solve the problem by first moving the object out of the way and then reaching for the desired toy. Such structured behaviour suggests that the child was able to plan ahead, which indicates that she had a mental representation of what she was going to do.

An example of planned behaviour by Jacqueline was given where she was trying to solve the problem of opening a door while carrying two blades of grass at the same time:

She stretches out her right hand towards the knob but sees that she cannot turn it without letting go of the grass. She puts the grass on the floor, opens the door, picks up the grass again and enters. But when she wants to leave the room things become complicated. She puts the grass on the floor and grasps the door knob but then she realises that in pulling the door towards her she will simultaneously chase away the grass which she placed between the door and the threshold. She therefore picks it up in order to put it outside of the door’s zone of movement.

(Piaget, 1936, pp. 376-7)

Jacqueline solved the problem of the grass and the door before she opened the door. It is assumed that she would have had a mental representation of the problem, which permitted her to work out the solution, before she acted.

A third line of evidence for mental representations comes from Piaget’s observation of deferred imitation – that is, when children reproduce a behaviour that they observed at some earlier time. Piaget provides a good example of this:

At 16 months old Jacqueline had a visit from a little boy of 18 months who she used to see from time to time, and who, in the course of the afternoon got into a terrible temper. He screamed and he tried to get out of a playpen and pushed it backward, stamping his feet. Jacqueline stood observing him in amazement, having never witnessed such a scene before. The following day, she herself screamed in her playpen and tried to move it, stamping her foot lightly several times in succession.

(Piaget, 1951, p. 63)

Since Jacqueline repeated the little boy’s behaviour a day later, she must have retained an image of it, i.e. she had a mental representation of what she had seen the day before, and that representation provided the basis for her own copy of the temper tantrum.

To conclude, during the sensori-motor period the child advances from the very simple and limited reflex behaviours present at birth to the complex behaviours seen at the end of the period. The more complex behaviours depend on the progressive combination and elaboration of schemas, but are, at the beginning, limited to direct interactions with the world – hence the name Piaget gave to this period, since he thought of the child as developing through her sensori-motor interaction with the environment. It is only towards the end of the period that the child is no longer limited to immediate interaction, because she has developed the ability to mentally represent her world [mental representation]. With this ability the child can manipulate her mental images (or symbols) of the world; in other words, she can now act on her thoughts about the world as well as on the world itself.


Revisions of the Sensori-motor Stage

Jean Piaget’s observations of babies during this first stage, lasting until 2 years of age, have been largely confirmed by subsequent researchers; however, Piaget may have underestimated children’s mental capacity to organise the sensory and motor information they take in. Several investigators have shown that children acquire certain abilities and concepts earlier than Piaget thought.

Bower (1982) examined Piaget’s hypothesis that young children have no appreciation of objects that are out of sight. In this experiment, infants a few months old were shown an object; a screen was then moved across in front of the object [hiding it from the child’s visual field] before being moved back to its original position. This scenario was presented with two variations: in Condition 1 the object was still in place, and hence was seen again by the child when the screen was moved back; in Condition 2 the object had been removed, so the child would perceive it to have disappeared when the screen was moved back. The children’s heart rate was monitored to measure changes [which reflect surprise]. On Piaget’s assumptions, children of a few months old do not retain information about objects that are no longer present; if that were the case, no heart rate change should be registered in Condition 2, because there would be no element of surprise [the child would not expect an object to be there once the screen was moved back]. In fact, the children displayed more surprise in Condition 2, and Bower inferred that they had expected the object to still be in its position, or to “re-appear”, once the screen was moved back. This is evidence that young children retain a mental representation of the object in their mind [which could be interpreted as young children having some basic form of object permanence, even if not fully developed, at an earlier age than Piaget’s experimental methods suggested].

In a further experiment, Baillargeon and DeVos (1991) showed 3-month-old children objects that moved behind a screen and then re-appeared from the other side of the screen. The upper half of the screen had a window and in one condition the children saw a short object move behind the screen [the object was small and below the level of the window and hence when it passed behind the screen it was completely out of sight / not visible, until it appeared at the other side of the screen].

In a second condition a taller object passed behind the screen, and it was high enough to be seen through the window as it moved from one side to the other. Furthermore, Baillargeon and DeVos created an “impossible event” by passing the tall object behind the screen without it appearing through the window; the children displayed more interest in this scenario, looking longer at it than at the one with the small object. The argument is that the children reacted this way because they expected the taller object to appear through the window, which suggests that young children early in the sensori-motor stage are aware of the continued existence of objects even when these are out of view. These results, along with those of Bower (1982), suggest that young children have “some” understanding of object permanence earlier than was assumed.

Another of Piaget’s conclusions was also investigated further, by researchers who wanted to find out whether children only develop planned action [which demonstrates the ability to form mental representations] at the end of the sensori-motor stage. Willatts (1989) placed an attractive toy on a cloth, out of the reach of 9-month-old children; the children could pull the cloth to bring the toy within reach. However, the children could not reach the cloth directly, because Willatts had placed a light barrier between the child and the cloth [the child had to move the barrier to reach the cloth]. The experiment showed that children were able to reach the toy by carrying out the appropriate series of actions [i.e. first moving the barrier, then pulling the cloth to bring the toy within reach]. Most importantly, many of the children carried out the correct actions on the first occasion of being presented with the problem, without needing to go through a “trial and error” phase. Willatts argued that for such young children to demonstrate novel planned actions, they must be operating on a mental representation of the world which they can use to organise their behaviour before carrying it out [this, too, is earlier than Piaget’s experiments suggested].

Piaget also held that deferred imitation was evidence that children retain a memory representation of what they have seen earlier. It has since been found, however, that soon after birth babies are able to imitate an adult’s facial expression or head movement (Meltzoff and Moore, 1983, 1989), although such imitation is performed in the presence of the stimulus being imitated. From Piaget’s experiments it was initially deduced that stored representations are only achieved towards the end of the sensori-motor stage; however, Meltzoff and Moore (1994) showed that 6-week-old infants could imitate a behaviour a day after they had seen the original behaviour. In Meltzoff and Moore’s study some children saw an adult make a facial gesture [e.g. sticking out her tongue] while others just saw the adult’s face with a neutral expression. The next day, all the children in the experiment saw the same adult, but this time she kept a passive face. Compared with the children who had not seen any gesture, the children who had seen the tongue-protrusion gesture the day before were more likely to make tongue protrusions to the adult the second time they saw her. Meltzoff and Moore argued that for the children to perform those actions they must have formed a mental representation of the action at a much earlier age than Piaget’s experiments concluded.

 

II. The Pre-operational Stage (about 2 – 7 years) | Stage 2 of 4

This stage is divided into two periods: (a) the Pre-conceptual Period (2 – 4 years) and (b) the Intuitive Period (4 – 7 years).


(a) The Pre-Conceptual Period (2 – 4 years)

The pre-conceptual period builds on the capacity for internal, or symbolic, thought that developed during the final substages of the sensori-motor period. During the pre-conceptual period [2 – 4 years old], we can observe a rapid increase in children’s language which, in Piaget’s view, results from the development of symbolic thought. Unlike other theorists of language [who suggested that thought emerges from linguistic competence], Piaget argued that thought arises out of action, an idea supported by research into the cognitive abilities of deaf children who, despite limitations in language, are capable of reasoning and problem solving. Piaget argued that thought shapes language far more than language shapes thought [at least during the pre-conceptual period], and symbolic thought is also expressed in imaginative play.

However, there are some limitations in the child’s abilities during the pre-conceptual period (2 – 4 years) of the pre-operational stage. The pre-operational child is still centred in her own perspective and finds it difficult to understand that other people can look at things differently. Piaget called this the “self-centred” view of the world and used the term egocentrism.

Egocentric thinking occurs due to the child’s belief that the universe is centred on herself, and thus finds it hard to “decentre”, that is, to take the perspective of another individual. The dialogue below gives an example of a 3-year-old’s difficulty in taking the perspective of another person:

Adult: Have you any brothers or sisters?
John: Yes, a brother.
Adult: What is his name?
John: Sammy.
Adult: Does Sammy have a brother?
John: No.

It is quite clear here that 3-year-old John’s inability to decentre makes it hard for him to realise that, from Sammy’s perspective, he himself is a brother.

The egocentrism of this particular period of development is apparent in children’s performance on perspective-taking tasks. One of the most famous experiments carried out by Piaget is the three mountains task, which explores children’s ability to see things from the perspective of another. Piaget and Inhelder (1956) asked children between the ages of 4 and 12 to say how a doll would perceive an array of three mountains from different perspectives [i.e. by placing the doll at different locations].

FJ Piaget III Mountain Task.jpg

FIGURE J. Model of the mountain range used by Piaget and Inhelder viewed from 4 different sides

For example, in Figure J, a child might be asked to sit at position A, and a doll would be placed at one of the other positions (B, C or D); the child would then be asked to choose, from a set of different views of the model, the view that the doll could see. When 4- and 5-year-old children were asked to do this task, they often chose the view that they themselves could see (rather than the doll’s view), and it was not until 8 or 9 years of age that children could confidently work out the doll’s view. Piaget took this as convincing evidence that young children were still governed by their egocentricity and could not decentre from their own perspective to work out the view of the doll.

However, several criticisms have been made of the three mountains task. Donaldson (1978) pointed out that the task was an unusual one to use with young children, who might not be familiar with model mountains or used to working out other people’s views of landscapes. Borke (1975) carried out a task similar to Piaget’s, but instead of using model mountains she used layouts of toys that young children typically spend time with in play. She also altered the way children were asked to respond to the question about what a different person’s view would be, and found that children as young as 3 or 4 years of age had some basic understanding of how another person’s perspective would differ from another position. This was much earlier than previously deduced from Piaget’s experiments, and shows that the type of objects and procedures used in a task can have a huge impact on children’s performance. By using mountains, Piaget may have chosen content far too complex for such young children’s perspective-taking abilities to be demonstrated optimally.


Borke’s Experiment: Piaget’s Mountains Revised & Changes in the Egocentric Landscape

Borke’s main question was about the appropriateness of Piaget’s three mountains task for such young children: she was concerned that aspects of the task unrelated to perspective-taking might have adversely affected the children’s performance. These aspects were:

(i) the mountains, whether viewed from a different angle or not, may not have sparked any interest or motivation in the children
(ii) the pictures of the doll’s views from which Piaget asked the children to select may have been too demanding for them
(iii) the task being unusual in nature, children may have performed poorly simply because they were unfamiliar with tasks of this kind

Borke considered whether some initial practice and familiarity with the task would improve the children’s performance. With those points in mind, she repeated the basic design of Piaget and Inhelder’s experiment but changed the content of the task, avoided the use of pictures and gave children some initial practice. She used four three-dimensional displays: a practice display and three experimental displays [see FIGURE B].

FB Borke's 4 three-dimensional displays

FIGURE B. A schematic view of Borke’s four three-dimensional displays viewed from above.

Borke’s participants were 8 three-year-old and 14 four-year-old children attending a day nursery. Grover, a character from the popular children’s television show “Sesame Street”, was used for the experiment as a substitute for Piaget’s doll. There were two identical versions of each display (A and B): Display A was for Grover and the child to look at, and Display B was on a turntable next to the child.

The children were tested individually and were first shown a practice display which consisted of a large toy fire engine. Borke placed Grover at one of the sides of the practice Display A so that Grover could view the fire engine from a point of view [perspective] that was different from the child’s own view of this display.

A duplicate of the fire engine [practice Display B] appeared on a revolving turntable, and Borke briefed the children, explaining that the table could be turned so that the child could look at the fire engine from ANY side. The children were then prompted to turn the table until their view of Display B matched the exact perspective that Grover had while looking at Display A. If necessary, Borke even helped the children to move the turntable to the correct position, or walked them round Display A to show them the exact view [perspective] that Grover had.

Once the practice session was over, the child was ready to take part in the experiment itself. The procedures were similar, except that no help was provided by the experimenter. Each child was shown the three experimental three-dimensional displays, one at a time [see FIGURE B].

Display 1 included a toy house, lake and animals
Display 2 was based on Piaget’s model of three mountains
Display 3 included several scenes with figures and animals
Note: There were 2 identical copies of each display, and of course, children had to rotate the second copy, which was on a turntable, to match the perspective [view] that Grover had in sight [as practised in the practice session].

What Borke found was that most of the children in the experiment were able to work out Grover’s perspective for Display 1 [three- and four-year-olds were correct in 80% of trials] and for Display 3 [three-year-olds were correct in 79% of trials and four-year-olds in 93% of trials]. However, for Display 2 [Piaget’s mountains], the three-year-olds were correct in only 42% of trials and the four-year-olds in 67% of trials. Borke carried out an analysis of variance and found that the difference between Displays 1 & 3 and Display 2 was significant at p < 0.001. As for errors, there were no significant differences in the children’s responses for any of the 3 positions – 31% of errors were egocentric [i.e. the child rotated Display B to show their OWN view/perspective of Display A, rather than Grover’s view].

Borke successfully demonstrated that the task had a major influence on the perspective-taking performances of young children. When the display included toys that the children were familiar with and hence recognisable, and when the response involved rotating a turntable to work out Grover’s perspective, even the comparatively complex Display 3 task was successfully achieved by the children.

This seems to suggest that the poor performance by the children in Piaget’s original experiment involving three mountains was due in part to the unfamiliar nature of the objects that the children were shown.

Borke concluded that the potential for understanding the viewpoint of another was already present in children as young as 3 and 4 years of age, and this seems to be a reliable addition and revision to Piaget’s original assumption that children of this age are egocentric and incapable of taking the viewpoint of others. It now seems clear that, although their perspective-taking abilities may not be fully developed, children of this age tend to make egocentric responses when they misunderstand the task; when given the appropriate conditions, they show that they are capable of working out another’s viewpoint.

However, on a final note, it is important to consider that Borke’s finding that children as young as three years can perform correctly in perspective-taking tasks stands in firm contrast to the findings of other researchers, who reported that three-year-olds have difficulty realising another person’s perspective when the child and the other person are both looking at the same picture from different points of view [e.g. at the Louvre museum] (e.g., Masangkay et al., 1974).

 

(a) The Pre-Conceptual Period (2 – 4 years)… continued from above

Piaget used the three mountains task to investigate visual perspective taking, and it was on the basis of this task that he concluded that young children were egocentric. There are, however, a variety of other perspective-taking abilities, including the ability to empathise with other people’s emotions and the ability to know what other people are or may be thinking depending on the scene, setting and scenario (Wimmer and Perner, 1983). Research on these abilities suggests that young children are less egocentric than Piaget initially assumed.

 

(b) The Intuitive Period (4 – 7 years)

At about the age of four, there is a further shift in thinking, where the child begins to develop the mental operations of ordering, classifying and quantifying in a more systematic way. The term “intuitive” was chosen by Piaget because, although the child is able to carry out operations involving ordering, classifying and quantifying, she is largely unaware of the principles that underlie these operations, cannot explain why she has carried them out, and cannot carry them out in a fully satisfactory way.


Difficulties can be observed if a pre-operational child is asked to arrange sticks in a particular order. Children were given 10 sticks of different sizes, from A (the shortest) to J (the longest), arranged randomly on a table, and were asked to arrange them in ascending order [order of length]. Some pre-operational children could not complete the task at all. Others arranged a few sticks correctly but could not complete the whole series. Some put all the shorter sticks in one group and all the longer ones in another. A more advanced response was to arrange the sticks so that their tops were in order even though their bottoms were not [see FIGURE C].


FIGURE C. The pre-operational child’s ordering of different-sized sticks. An arrangement in which the child has solved the problem of seriation by ignoring the length of the sticks.

To sum up, the pre-operational child is not capable of arranging more than a very few objects in the appropriate order.

It was also discovered that pre-operational children have difficulty with class inclusion tasks – those that involve part–whole relations. Let us assume that a child is given a box that contains 18 brown beads and 2 white beads; all the beads are wooden. When asked “Are there more brown beads than wooden beads?” [note that the question is a trap, since all the beads are made of wood but some are brown and some are white], the pre-operational child tends to say that there are “more brown beads”. The child at the intuitive period of the pre-operational stage finds it hard to consider the class of “all beads” [wooden] while at the same time considering the subset of beads, the class of “brown beads” [wooden + brown].

This finding is generally true for all children in the pre-operational stage, irrespective of their cultural background. Investigators found that Thai and Malaysian children gave responses that were very similar to those of Swiss children at this stage of life [4 – 7 years old] and in the same sequence of development [the intuitive period].

Here, a Thai boy who was shown a bunch of 7 roses and 2 lotuses [all in the class of flowers] states that there are more roses than flowers [a problem with the class of all flowers] when prompted by the standard Piagetian questions:

Child: More roses.
Experimenter: More than what?
Child: More than flowers.
Experimenter: What are the flowers?
Child: Roses.
Experimenter: Are there any others?
Child: There are.
Experimenter: What?
Child: Lotus
Experimenter: So in this bunch which is more roses or flowers?
Child: More roses.

(Ginsburg and Opper, 1979, pp. 130-1)

One of the most extensively investigated aspects of the pre-operational child’s thinking processes is what Piaget called “conservation”. Conservation refers to the understanding that superficial changes in the appearance of a quantity do not mean that there has been any real change in the quantity. For example, if we had 10 dolls placed in line, and then they were re-arranged in a circle, it would not mean that the quantity has been altered [i.e. if nothing is added or subtracted from a quantity then it remains the same – conservation].

Piaget’s experiments revealed that children in the pre-operational stage generally find it hard to grasp the concept that an object’s quantity remains intact even if its shape and appearance are changed. A series of conservation tasks were used in the investigations, and examples are given in FIGURE D and PLATE A.


FIGURE D. Some tests of conservation: (a) two tests of conservation of number (rows of sweets and coins; and flowers in vases); (b) conservation of mass (two balls of clay); (c) conservation of quantity (liquid in glasses). In each case illustration A shows the material when the child is first asked if the two items or sets of items are the same and illustration B shows the way that one item or set of items is transformed before the child is asked a second time if they are still similar.


PLATE A. A 4-year-old puzzles over Piaget’s conservation of number experiments; he says that the rows are equal in number in arrangement (a), but not in arrangement (b) “because they’re all bunched together here”.

If 2 perfectly identical balls of clay are given to a child and she is asked whether the quantity of clay is the same in both balls, the child will generally agree that it is. However, if one of the balls of clay is rolled and shaped into a sausage [see FIGURE D(b)] and the child is questioned again about whether the amounts are the same, he/she is more likely to say that one is larger than the other. When asked the reasons for the answer, children are generally unable to give an explanation, but simply say “because it is larger”.

Piaget suggested that a child has difficulty in a task such as this because she can only focus on one attribute at a time [e.g. if length is being focussed on, she may think that the sausage-shaped clay, being longer, has more clay in it]. According to Piaget, for a child to appreciate that the sausage of clay has the same amount of clay as the ball would require an understanding that the greater length of the sausage is compensated for by its smaller cross-section. Piaget said that pre-operational children cannot apply principles such as compensation.

A further example that demonstrates this weakness in the child’s reasoning about conservation is the sweets task [see FIGURE D(a)]. In this scenario, a child is shown 2 rows of sweets with the same number of sweets in each row [presented in a one-to-one layout], and when asked if the numbers match in each row, she will usually agree. Shortly after, one row of sweets is made longer by spreading the sweets out, and the child is once again asked whether the number of sweets is the same in each row; the pre-operational child usually picks one of the rows, suggesting that it has more sweets in it. He/she may, for example, think that the longer row means more objects [the logic of the pre-operational child]. At this stage, the child does not realise that the greater length of the row of sweets is compensated for by the greater distance between the sweets.

Compensation is only one of several processes that can help children overcome changes in appearance; another is known as “reversibility”. This is where children can think of literally “reversing” the change; for example, if they imagine the sausage of clay being rolled back and reshaped into a ball, or the row of sweets being pushed back together, they may realise that once the change has been reversed, the quantity of clay or the number of items in the row remains the same as before. Pre-operational children lack the thought processes needed to apply principles like “compensation” and “reversibility”, and therefore they have difficulty in conservation tasks.
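The logic of compensation and reversibility can be illustrated with a little arithmetic. In this minimal Python sketch (the volume is an arbitrary illustrative number, not data from Piaget’s experiments), the clay is treated as a cylinder of fixed volume V = πr²L: making the “sausage” longer necessarily makes it thinner (compensation), and rolling it back to its original length restores the original radius exactly (reversibility), because nothing is added or removed.

```python
import math

# Compensation: the clay's volume stays constant, so a longer
# "sausage" must have a smaller cross-section (a thinner radius).
VOLUME = 100.0  # arbitrary illustrative volume in cm^3

def radius_for_length(length):
    """Radius of a cylinder of fixed VOLUME for a given length: V = pi * r^2 * L."""
    return math.sqrt(VOLUME / (math.pi * length))

short_radius = radius_for_length(5.0)   # short, fat, ball-like shape
long_radius = radius_for_length(20.0)   # long, thin sausage

# Greater length is compensated for by a smaller cross-section:
assert long_radius < short_radius

# Reversibility: rolling the sausage back to its original length
# restores the original radius, since the volume never changed.
assert math.isclose(radius_for_length(5.0), short_radius)
```

The point of the sketch is simply that both principles follow from the single fact that the quantity of clay is conserved, which is exactly the insight the pre-operational child lacks.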

In the next stage, which is the third stage of development known as the “Concrete Operational Stage”, children will have achieved the necessary logical thought processes that give them the ability to use the required principles and handle conservation techniques and other problem-solving tasks easily.

 

Revisions of the Pre-Operational Stage

Piaget claimed that the pre-operational child cannot cope with tasks like part–whole relations or conservation because they lack the logical thought processes to apply principles like compensation. Other researchers, however, have pointed out that children’s lack of success in some tasks may be due to factors other than ones associated with logical processes.

The pre-operational child seems to lack the ability to grasp the relationship between the whole and the part in class inclusion tasks, and will happily state that there are more brown beads than wooden beads in a box of brown and white wooden beads “because there are only two white ones”. Some researchers have focussed their attention on the questions that children are asked during such studies and found them to be unusual [e.g. it is not often in everyday conversation that we ask questions such as “Are there more brown beads or more wooden beads?”].

Minor variations in the wording of the questions that enhance and clarify meaning can have positive effects on the child’s performance. McGarrigle (quoted in Donaldson, 1978) showed children 4 toy cows, 3 black and 1 white, all lying asleep on their sides. If the children were asked “Are there more black cows or more cows?” [as in a standard Piagetian experiment, with a trap in the wording of the question], they tended not to answer correctly. McGarrigle found that in a group of children aged 6 years, 25% answered the standard Piagetian question correctly, whereas when it was rephrased, 48% answered correctly – a significant increase. From this observation it was deduced that some of the difficulty of the task lay in the wording of the question rather than in an inability to understand part–whole relations.

Donaldson (1978) put forward a different reason from Piaget’s as a cause of children’s poor performance in conservation tasks. She argued that children build a model of the world by formulating hypotheses that help them anticipate future events on the basis of past experience. Hence the child brings expectations to any situation, and his/her interpretation of the words he/she hears will be influenced by those expectations. In a conservation experiment, for example, the experimenter asks a child if there are the same number of sweets in two rows [FIGURE D(a)]. Then one of the rows is changed by the experimenter, who emphasises that it is being altered. Donaldson suggested that it is quite fair to assume that a child may be compelled to deduce a link between the change that occurred [the display change] and the question that follows [about the number of sweets in each row]; otherwise, why would such a precise question come from an adult if there had not been any change? If the child believes that adults only carry out actions when they desire a change, then he/she might assume that a change has occurred.

McGarrigle and Donaldson (1974) explored this idea in an experiment with a character known as “Naughty Teddy”; it was this character, rather than the experimenter, who changed the display layout, and the modification was explained to the children as an “accident” [in such a context the child would have less reason to expect that a deliberate change had been applied to the objects, and so no reason to believe a real change had taken place]. With this procedure, McGarrigle and Donaldson found that children were more likely to give the correct answer [that the number remained the same after the display had been messed up by Naughty Teddy] in this new context than in the classical Piagetian context.

Piaget was correct to point out the problems that pre-operational children face with conservation and other reasoning tasks. However, researchers since Piaget have found that, given the appropriate wording and context, young children seem capable of demonstrating at least some of the abilities that Piaget thought only developed later [even if these abilities are not yet well developed at this stage].

Piaget also found that pre-operational children had difficulties when faced with tasks requiring “transitive inferences”. In this case, the children were shown 2 rods, A and B, with Rod A longer than Rod B. Rod A was then taken out of sight, and the children were shown only Rod B and Rod C [B was longer than C]. The children were then asked which rod was longer, Rod A or Rod C. Young children in the pre-operational stage find such questions hard, and Piaget’s explanation was that these children cannot make logical inferences such as: if A is longer than B and B is longer than C, then A must be longer than C.

Bryant and Trabasso (1971) also considered transitive inference tasks and wondered whether children’s difficulties had more to do with remembering all the specific information about the objects rather than making an inference [i.e. for children to respond correctly they would not only have to make an inference but also remember the lengths of all the rods they had seen]. Bryant and Trabasso proposed that it was possible that young children [with brains still growing and developing physiologically] who have limited working memory capacity, were unable to retain in memory all the information they needed for the task.

In another scenario, children were faced with a similar task in an investigation of transitive inferences, but this time they were trained to remember the lengths of the rods [they were trained on the comparisons they needed to remember, i.e. that A was longer than B, and B was longer than C]. Only when Bryant and Trabasso were satisfied that the children could remember all the information were they asked the test question [i.e. which rod was longer, A or C?]. The experimenters found that the children could now answer correctly. So the difficulty that Piaget noted in these tasks had more to do with forgetting some of the information needed to make the necessary comparisons than with a failure to make logical inferences.
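Bryant and Trabasso’s point – that the logical step itself is trivial once the premise comparisons are reliably held in memory – can be sketched in a few lines of Python (an illustrative sketch, not a model taken from the original study). Once the premises “A is longer than B” and “B is longer than C” are stored, the conclusion “A is longer than C” follows mechanically by chaining:

```python
# Premise comparisons, as "remembered" after training:
# A is longer than B, and B is longer than C.
premises = {("A", "B"): ">", ("B", "C"): ">"}

def longer(x, y, premises):
    """Return True if x can be inferred to be longer than y by chaining premises."""
    if premises.get((x, y)) == ">":
        return True
    # Transitive step: chain through any intermediate z with x > z and z > y.
    return any(premises.get((x, z)) == ">" and longer(z, y, premises)
               for (a, z) in premises if a == x)

assert longer("A", "C", premises)       # transitive inference: A > B > C
assert not longer("C", "A", premises)   # the reverse cannot be inferred
```

The hard part of the task, on this account, is keeping the `premises` intact in working memory, not executing the chaining step.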

 

III. The Concrete Operational Stage (about 7 – 12 years) | Stage 3 of 4


Image: Mikail Akar, the 7-year-old being crowned the “Mini Picasso” (2020)

At the age of about 7, the thinking processes of children change once again as they develop a new set of strategies which Piaget called “concrete operations”. These strategies are considered concrete because children can only apply them to immediately present objects. Thinking becomes much more flexible during the concrete operational period, however, because children lose their tendency to focus on only one aspect of a problem; they are now able to consider different aspects of a task at the same time. They now have processes like compensation and reversibility [as explained earlier], and they now succeed on conservation tasks. For example, when a round ball of clay is transformed into a sausage shape, children in the concrete operational stage will say, “It’s longer but it’s thinner” or “If you change it back, it will be the same.”

Conservation of number is achieved first [about 5 or 6 years], then this is followed by the conservation of weight [around 7 or 8], and the conservation of volume is fully understood at about 10 or 11 years old. Operations like addition and subtraction, multiplication and division become easier at this stage. Another major shift comes with the concrete operational child’s ability to classify and order, and to understand the principle of class inclusion. The ability to consider different aspects of a situation at the same time enables a child to perform successfully in perspective taking tasks [e.g. in the three mountains task of Piaget, a child can consider that she has one view of the model and that someone else may have a different view].

However, there are still some limitations on thinking, because children are reliant on the immediate environment and have difficulty with abstract ideas. Take the following question: “Edith is fairer than Susan. Edith is darker than Lily. Who is the darkest and who is the fairest?” Such a problem is quite difficult for concrete operational children, who may not be able to answer it correctly. However, if children are instead given a set of dolls representing Susan, Edith and Lily, they are able to answer the question quickly. Hence, when the task is made a “concrete” one, in this case with physical representations, children can deal with the problem, but when it is presented verbally, as an abstract task, children have difficulty. Abstract reasoning is not found within the repertoire of the child’s skills until she has reached the stage of formal operations.

 

Revisions of the Concrete Operational Stage

Many of Piaget’s observations and conclusions about the concrete operational stage have been broadly confirmed by subsequent research. Tomlinson-Keasey (1978) found that conservation of number, weight and volume are acquired in the order stated by Piaget.

As in the previous stage, the performance of children in the concrete operational period may be influenced by the context of the task. In some contexts, children may display more advanced reasoning than would typically be expected of children in this stage. Jahoda (1983) showed that 9-year-olds in Harare, Zimbabwe, had a more advanced understanding of economic principles than British 9-year-olds. The Harare children, who were involved in their parents’ small businesses, had a strong motivation to understand the principles of profit and loss. Jahoda set up a mock shop and played a shopping game with the children. The British 9-year-olds could not provide any explanation of the functioning of the shop: they did not understand that a shopkeeper buys for less than he sells, and did not know that some of the profit has to be set aside for the purchase of new goods. The Harare children, by contrast, had mastered the concept of profit and understood trading strategies. These principles had been grasped by the children as a direct outcome of their own active participation in running a business. Jahoda’s experiment, like Donaldson’s studies (1978), indicated the important function of context in the cognitive development of children.

 

IV. The Formal Operational Stage (from about 12 years) | Stage 4 of 4

During the third period of development, the concrete operational stage, we have seen that the child is able to reason in terms of objects [e.g. classes of objects, relations between objects] when the objects are present. Piaget argued that it is only during the period of formal operations that young people become able to reason hypothetically: they no longer depend on the “concrete” existence of objects in the real world, and instead reason with verbally stated hypotheses to consider logical relations among several possibilities or to deduce conclusions from abstract statements. Consider the syllogism: “All blue birds have two hearts”; “I have a blue bird at home called Adornia”; “How many hearts does Adornia have?” The young person who has reached formal operational thinking will give the correct answer by abstract logic: “Two hearts!” Children in the previous stage will generally not get past complaining about the absurdity of the scenario.

Young people are now also better at solving problems by considering all possible solutions systematically. If asked to form as many grammatically correct words as possible from the letters A, C, E, N, E, V, A, a young person at the formal operational stage could first consider all two-letter combinations AC, AE, AN, etc., verifying whether each combination is a word, then go on to consider all three-letter combinations, and so on. In the earlier stages, children approach such tasks in a disorganised and unsystematic fashion.
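The systematic, shortest-first enumeration described above can be sketched in Python. The word list here is a small illustrative set chosen for the example, not an exhaustive dictionary:

```python
from itertools import permutations

# A sketch of the exhaustive search a formal-operational thinker applies:
# enumerate every ordered selection of the letters, shortest first,
# and keep those that appear in a (tiny, illustrative) word list.
letters = "ACENEVA"
known_words = {"ACE", "CANE", "CAVE", "VANE"}  # illustrative subset only

found = set()
for size in range(2, len(letters) + 1):        # 2-letter, then 3-letter, ...
    for combo in permutations(letters, size):  # every ordered selection
        candidate = "".join(combo)
        if candidate in known_words:
            found.add(candidate)

print(sorted(found))  # the systematic search finds every listed word
```

Because every combination of every length is generated, no word in the list can be missed – which is exactly the property that the younger child’s unsystematic search lacks.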

Inhelder and Piaget (1958) described the processes of logical reasoning used by young people when presented with a number of natural science experiments. An example of one of their tasks, “The Pendulum Task”, can be seen in FIGURE E.


FIGURE E. The pendulum problem. The child is given a pendulum, different lengths of string, and different weights. She is asked to use these to work out what determines the speed of the swing of the pendulum (from Inhelder and Piaget, 1958).

The participant is given a string [that can be shortened or lengthened] and a set of weights, and is asked to work out what determines the speed of the swing of the pendulum. The possible factors are the length of the string, the weight at the end of the string, the height of the release point and the force of the push. In this scenario the materials needed to solve the problem are all in front of the participant, but successful reasoning involves formal operations: a systematic consideration of the various possibilities, the formulation of hypotheses (e.g., “What would happen if I tried a heavier weight?”) and logical deductions from the results of trials with different combinations of materials.
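For the reader curious about the physics behind the task: under the idealised small-angle formula T = 2π√(L/g), only the length of the string affects the pendulum’s period, while the weight and the push do not. The sketch below (an idealised simulation for illustration, not Inhelder and Piaget’s procedure) applies the vary-one-factor-at-a-time strategy that the task demands, and that strategy correctly isolates length as the only factor that matters:

```python
import math
from itertools import product

# Idealised small-angle pendulum: T = 2*pi*sqrt(L/g).
# Only the string length L appears; weight and push are irrelevant.
def period(length_m, weight_kg, push):
    return 2 * math.pi * math.sqrt(length_m / 9.81)

weights = [0.1, 0.5]          # kg
pushes = ["gentle", "hard"]

# Hold length fixed and vary the other factors systematically:
for weight, push in product(weights, pushes):
    assert period(1.0, weight, push) == period(1.0, weights[0], pushes[0])

# Vary length while holding everything else fixed: the period changes.
assert period(0.5, 0.1, "gentle") < period(1.0, 0.1, "gentle")
```

The experimental logic is the same one the formal-operational thinker must construct: a factor is shown to matter only when it changes the outcome while all other factors are held constant.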

The other tasks investigated by Inhelder and Piaget (1958) included determining the flexibility of metal rods, balancing different weights around a fulcrum, and predicting chemical reactions. These tasks mimic the steps required for scientific inquiry, and Piaget argued that formal scientific reasoning is one of the most important characteristics of formal operational thinking. From his original work, carried out in schools in Geneva, Piaget claimed that formal operational thinking was a characteristic stage that children or young people reached between the ages of 11 and 15 years – having previously gone through the earlier stages of development.

 

Revisions of the Formal Operational Stage

Piaget’s claim has been revised by subsequent research: researchers have found that the achievement of formal operational thinking is more gradual and haphazard than Piaget assumed – it may depend on the nature of the task and is often limited to certain domains.


FIGURE F. Proportion of boys at different Piagetian stages as assessed by three tasks (from Shayer and Wylam, 1978).

Shayer et al. (1976; Shayer and Wylam, 1978) gave problems such as the pendulum task [FIGURE E] to school children in the UK. Their results [see FIGURE F] showed that by 16 years of age only about 30% of young people had achieved “early formal operations” [Is this shocking compared to French-speaking Europe, where Piaget developed his theory? Could this provide a partial explanation for the lack of personality, emotion, creativity, openness, depth and sophistication in some populations? Interesting questions…]. Martorano (1977) gave ten of Piaget’s formal operational tasks to girls and young women aged 12 – 18 years in the USA. At 18 years of age, success on the different tasks varied from 15% to 95%, but only 2 participants out of 20 succeeded on all ten tasks. Success on one or two tasks might indicate some formal operational reasoning, but failure on the other tasks demonstrated that such reasoning might be limited to certain tasks or contexts. It is highly likely that young people only manage to achieve and apply formal reasoning across a range of problem tasks much later in adolescence.

Some researchers have shown that formal thinking is an ability that can be achieved through training. FIGURE G shows the results of such a study by Danner and Day (1977), in which they coached students aged 10, 13 and 17 years in 3 formal operational tasks. As expected, training had a limited effect on the 10-year-olds, but it had marked effects on the 17-year-olds. In summary, it seems that the period from 11 – 15 years signals the beginning of the potential for formal operational thought, rather than its achievement. Formal operational thought may only be used some of the time, in domains we are generally familiar with, are trained in, or which have great significance for us – in most cases formal thinking is not used. After all, we tend to know areas of life where we should have thought things out logically, but in retrospect realise we did not do so [sometimes without any regrets].


FIGURE G. Levels of availability of formal thought. Percentage of adolescents showing formal thought, with and without coaching (from Danner and Day, 1977).

The Educational Implications of Jean Piaget’s Theory of Cognitive Development

Piaget’s theory was planned and developed over many decades of his long life. At first it was slow to make an impact in the UK and the USA, but from the 1950s its ambitious, all-embracing framework for understanding cognitive growth became the accepted and dominant paradigm in cognitive development.

Whatever the shortcomings of Piaget’s theory, it is impossible to deny his ingenious contributions, as his approach provided the most comprehensive description of cognitive growth ever put forward. It has had considerable impact in the domain of education, most notably on child-centred learning methods in nursery and infant schools, on mathematics curricula in the primary school, and on science curricula at the secondary school level.

Piaget argued that young children’s thinking processes are quite different from an adult’s, and that they view the world from a qualitatively different perspective. It follows that a teacher must make a firm effort to adapt to the child and never assume that what is appropriate for adults is necessarily right for the child. The idea of “active learning” lies at the heart of this child-centred approach to education. From the Piagetian perspective, children learn better from actions than from passive observations [e.g., telling a child about the properties of a particular material is less effective than creating an environment in which the child is free to explore, touch, manipulate and experiment with different materials]. A good teacher should recognise that each child needs to construct knowledge for him- or herself, and that active learning results in deeper understanding.


“Our real problem is: what is the goal of education? Are we forming children who are only capable of learning what is already known? Or should we try to develop creative and innovative minds capable of discovery from the preschool age through life?” – Jean Piaget (1896 – 1980)

So, how can a teacher promote active learning on the part of the pupil? First, it should be the child rather than the teacher who initiates the activity. This does not mean allowing the child complete freedom to do anything; rather, a teacher should set tasks that are finely adjusted to the needs of their pupils and which, as a result, are intrinsically motivating to young learners. For example, nursery school classrooms can provide children with play materials that encourage their learning: sets of toys that encourage the practice of sorting, grading and counting; play areas, like the Wendy House, where children can develop role-taking skills through imaginative and explorative play; and materials like water, sand, bricks and crayons that help children make their own constructions and create symbolic representations of the objects and people in their lives. From this range of experiences the child develops knowledge and understanding for herself, and a good teacher’s role is to create the conditions in which learning may best take place, since the aim of education is to encourage the child to ask questions, try out experiments and speculate, rather than accept information and routine conventions unthinkingly – this also allows the child to learn and be creative about her subjective experience, which is unique and different from that of any other child.


(1919) Jaroslava & Jiri, The Artist’s Children by Alphonse Mucha (1860 – 1939)

Secondly, a teacher should be concerned with the process rather than the end-product. This is in line with the belief that a teacher should be interested in the reasoning behind the answer that a child gives to a question, rather than just in the correct answer. Accordingly, mistakes should not be penalised, but treated as responses that can give a teacher insight into the child’s thinking processes at that time.

The whole idea of active learning resulted in changed attitudes towards education in all its domains. A teacher’s role is not to impart information, because in Piaget’s view, knowledge is not something to be transmitted from an expert master teacher to an inexpert pupil. It should be the child, according to Piaget, who sets the pace, where the teacher’s role is to create situations that challenge the child [creatively] to ask questions, to form hypotheses and to discover new concepts. A teacher is the guide in the child’s process of discovery, and the curriculum should be adapted to each child’s individual needs and intellectual level.

In mathematics and science lessons at primary school, children are helped to make the transition from pre-operational thinking to concrete operations through carefully arranged sequences of experiences which develop an understanding of, for example, class inclusion, conservation and perspective-taking. At a later period, a teacher can also encourage practical and experimental work before moving on to abstract deductive reasoning. Through this process, a teacher can provide the conditions that are appropriate for the transition from concrete operational thinking to the stage of formal operations.

The post-Piagetian research into formal operational thought also has strong implications for teaching, especially science teaching in secondary schools. The tasks that are used in teaching can be analysed for the logical abilities that are required to fulfil them, and the tasks can then be adjusted to the age and expected abilities of the children who will attempt them.

Considering the wide range of activities and interests that appear in any class of children, learning should be individualised, so that tasks are appropriate to individual children’s level of understanding. Piaget did not ignore the importance of social interaction in the process of learning; he recognised its social value and viewed it as an important factor in cognitive growth. Piaget pointed out that through interaction with peers, a child can move out of the egocentric viewpoint. This generally occurs through cooperation with others, arguments and discussions. By listening to the opinions of others, having one’s own view challenged and experiencing through others’ reactions the illogicality of certain concepts, a child can learn about perspectives other than her own [egocentric] one. Communication of ideas to others also helps a child to sharpen concepts by finding the appropriate words.


 

“Everyone knows that Piaget was the most important figure the field has ever known… [he] transformed the field of developmental psychology.”

(Flavell, 1996, p.200)

“Once psychologists looked at development through Piaget’s eyes, they never saw children in quite the same way.”

(Miller, 1993, p.81)

“A towering figure internationally.”

(Bliss, 2010, p.446)

 

__________

 

(II) The Theory of Attachment in Emotional Development (John Bowlby)

If we pick up a newborn baby, he or she will respond no differently to us than to any other person. However, by 9 months, the same baby will have developed one or more selective attachments and will discriminate familiar faces from unfamiliar ones. So, if we were to pick up the baby again, he or she may display anxiety or cry, but if the mother or father picks the baby up, he or she will be reassured and pacified.

This section will explore and give an account of the development of attachment relationships between infants, parents and other close primary caregivers. The significance of such attachments for development in adult life will also be considered, with its implications for the philosophy of education in sculpting the minds of tomorrow, along with some research on parenting styles analysing some of the factors affecting successful and less successful parenting.

 

The Development of Attachment Relationships: Attachment as an innate drive

The infant’s expression of emotions and the caregiver’s response to these emotions form the foundation of John Bowlby’s Theory of Attachment. Bowlby’s (1958, 1969/1982, 1973, 1980) theory was inspired and influenced by an exciting and creative range of disciplines, including psychoanalysis, ethology and the biological sciences. Before Bowlby, the prevailing view of infant-mother attachment was that it was a “secondary drive”, a by-product of the infant associating the mother with the provision of physiological needs, such as hunger [Picture B – breastfeeding image].


PICTURE B. Early theories of infant-mother attachment suggested that it was a secondary drive resulting from the mother satisfying the infant’s primary drives, such as hunger. / Photography:  Jo Frances

Bowlby challenged this logic, arguing convincingly that attachment is an innate primary drive in all infants; while his theory went through many revisions over the years, this argument remained fundamental.

In Bowlby’s first version of his theory of attachment (Bowlby, 1958), the emphasis was on the role of behaviours resulting from our instincts [on how behaviours such as crying, clinging and smiling served the purpose of eliciting a reciprocal attachment response from the caregiver]:

There matures in the early months of life of the human infant a complex and nicely balanced equipment of instinctual responses, the function of which is to ensure that he obtains parental care sufficient for his survival. To this end the equipment includes responses which promote his close proximity to a parent and… evoke parental activity.

(Bowlby, 1958, p. 346)

However, in the 1969 version of his theory (the first volume of his trilogy, Attachment and Loss), Bowlby highlighted the dynamics of attachment behaviour, and switched to explaining the infant-mother tie in terms of a goal-corrected system triggered by environmental cues rather than by innate instinctual behaviours. Whether attachment is instinctual or goal-corrected, we know that it eventually leads to the infant maintaining proximity to the primary caregiver.

Bowlby acknowledged that the development of an attachment relationship was not dependent purely upon the social and emotional interplay between infant and caregiver. Since attachment behaviour is primarily observed when the infant is separated from the caregiver, it logically depends upon the infant’s level of cognitive development, in particular the attainment of object permanence [i.e. the ability to represent an object (living or non-living) that is not physically present within the child’s proximity].

This synchronises partly with Piaget’s theory of cognitive development, and indeed Bowlby was inspired by Jean Piaget, basing his argument on Piaget’s (1955) contention that this level of object permanence is not attained until the infant is approximately 8 months old. Furthermore, while children can recognise familiar people before this age, they will not miss the attachment figure, and thus display attachment behaviour, until they have reached the level of cognitive sophistication that comes with the ability to represent absent objects [and people, who belong to the same class].

 

The Phases of Attachment: Development of Attachment Relationships

Let us imagine a classic example of a mother and child [about 1 – 2 year-old] in a park. What we might observe is that the mother is seated on a bench while the infant runs off to explore the area. Periodically, the child may be seen to stop and look back at the mother, and every once in a while may even return close to her, or make physical contact, staying close for a while before venturing off again. In most cases, the infant rarely goes beyond about 60 metres from the mother or primary caregiver, who may however have to go and retrieve the child if the distance gets too great or if the need to leave is imminent.

From a developmental psychologist’s perspective, the scenario here is fairly simple: the infant is inquisitively exploring the environment it is being exposed to, and is using the mother as a “secure base” to which to return periodically for reassurance. This is one of the hallmarks of an “attachment relationship”. These observations of children in parks were made in London by Anderson (1972), a student of John Bowlby, and the development of attachment has been described in detail by Bowlby (1969).

Bowlby (1969, p. 79) described 4 phases in the development of attachment and subsequently extended it to a 5th.

The phases are:

I. The pre-attachment phase (0 – 2 months) is characterised by the infant showing hardly any differentiation in their responses to familiar or unfamiliar faces.

II. During the second phase (2 – 7 months), the foundations of attachment are being laid. Here infants start to recognise their caregivers, even if they still do not possess the ability to show attachment behaviours upon separation. The infant is also more likely to smile at the mother or important caregivers and to be comforted by them if distressed.

bebe-mange-puree-de-fruits.jpg

III. Clear cut attachment behaviours only start to appear after 7 months. At this phase, infants start to protest at being separated from their caregivers and become very wary of strangers [so called stranger anxiety] – this is often taken as a definition of attachment to caregiver and this onset of attachment happens from 7 – 9 months.

IV. From around 24 months (2 years of age), the attachment relationship evolves into a goal-corrected partnership [i.e. the child also begins to accommodate to the mother’s needs, e.g. being prepared to wait alone if requested until the mother returns]. This is an important change, because before this phase the infant saw the mother only as a resource that had to be available when needed. Bowlby saw this partnership as characterising the child at 3 years of age, although, as mentioned, from 2 years old babies can partly accommodate verbal requests by mothers to await their return (Weinraub and Lewis, 1977). From this phase onwards, the child relies on representations, or internal working models, of attachment relationships to guide their future social interactions.

V. The final phase is the lessening of attachment, as measured by the child’s maintenance of proximity. Characteristic of the school-age child, and older, is a relationship based more on abstract considerations such as affection, trust, loyalty and approval, exemplified by an internal working model of the relationship.

Bowlby viewed attachment as a canalised developmental process in which both the largely instinctive repertoire of the newborn and certain forms of learning are important in early social interactions. Certain aspects of sensori-motor cognitive development [as described by Jean Piaget] are also fundamental for attachment. Until the developing infant masters the concept of cause-effect relations, and of the continued existence of objects [incl. persons] when out of sight, he or she cannot protest at separation and attempt to maintain proximity [note the importance of object permanence in emotional development and internal working models]. Hence, sensori-motor development is also a canalised process, and an ethological approach and a cognitive-learning approach to attachment development should not be seen as being in opposition.

 

Attachments: Between whom?

Many articles and textbooks have characterised the attachment relationship as mainly focussed on the mother (e.g. Sylvia and Lunt, 1981), and this may not be completely true, since many studies have suggested that early attachments are usually multiple, and although the strongest attachment is often to the mother, this need not always be so.

In a study conducted in Scotland, mothers were interviewed and asked to whom their toddlers showed separation protest (Schaffer and Emerson, 1964). The proportion of babies with more than one attachment figure increased from 29% when separation protest first appeared [at about 7 – 9 months] to 87% at 18 months. It was also found that for about one third of babies, the strongest attachment seemed to be to someone other than the mother, such as the father, or another trusted primary caregiver. In most cases, attachments were formed to responsive persons who interacted and played a lot with the infant; basic caregiving such as nappy changing was clearly not in itself such an important factor. Similar results were obtained by Cohen and Campos (1974).

UglyLeeches

Painting: Sandrine Arbon

Studies in other cultures also support this conclusion. For example, in the Israeli kibbutzim, young children spend the majority of their waking hours in small communal nurseries, in the charge of a nurse or metapelet. In a study of 1- and 2-year-olds reared in this way, it was found that the infants were very strongly attached to both the mother and the metapelet; either could serve as a base for exploration, and provide reassurance when the infant felt insecure (Fox, 1977). In many agricultural societies, mothers tend to work in the fields, and often leave infants in the village in the care of grandparents or older siblings, returning periodically to breastfeed. In a survey of 186 non-industrial societies, the mother was rated as the “almost exclusive” caretaker in infancy in only 5 of them; other persons had important caregiving roles in 40% of societies during the infancy period, and in 80% of societies during early childhood (Weisner and Gallimore, 1977).

 

The Security of Attachment

Early infant-caregiver attachment relationships and internal working models are the aspects of Bowlby’s theory of attachment that have received the greatest attention, with researchers developing two of the most widely used measuring instruments in developmental psychology to investigate Bowlby’s theoretical claims: the strange situation procedure, to assess the goal-corrected system that evolves from the early attachment relationship, and the Adult Attachment Interview, to assess internal working models.

Bowlby’s theory focussed on the making and breaking of attachment ties, probably because his experience of working as a child psychiatrist exposed him to the negative consequences for emotional development of severe maternal deprivation [such as long-term separation or being orphaned].

Nowadays, researchers are generally less concerned with whether a child has formed an attachment [since any child who experiences any degree of continuous care will become attached to the caregiver], and are rather more interested in the quality or security of the attachment relationship. This important shift in emphasis was due to the empirical work of Mary Ainsworth.

Ainsworth’s interest in the concept of attachment grew after working with Bowlby in London during the 1950s. Later, she moved to Uganda to live among the Ganda people, where she made systematic observations of infant-mother interactions in order to investigate Bowlby’s goal-corrected attachment systems in action.

One factor that struck Mary Ainsworth (1963, 1967) was the lack of uniformity in infants’ attachment behaviour, in terms of its frequency, strength and degree of organisation. Furthermore, these differences were not specific to Gandan infants, since she replicated these findings in a sample of children in the USA when she moved to Baltimore. These variations in attachment type had not been accounted for by Bowlby’s theory, and this led Ainsworth to investigate the question of individual differences in attachment.

Ainsworth’s experience of working with Bowlby, along with her rich collection of data harvested over a period of many years, put her in a unique position in the development of attachment as an empirical field of research. Her contribution led to attachment issues becoming part of mainstream developmental psychology, rather than being confined to child psychiatry. This achievement rested on her investigation of the development of attachment under normal family conditions, and on her development of a quick and effective way of assessing attachment patterns in the developmental laboratory.

Although the strange situation procedure (Ainsworth & Wittig, 1969) circumvented [found a way around] the need for researchers to conduct lengthy observations in the home, it was not developed simply for research convenience, but because there are problems in trying to evaluate attachment type in the child’s own home environment. For example, if a child becomes extremely distressed upon the mother moving to another room of their own home, this may indicate a less than optimal attachment, because a child who feels secure should not be distressed by such a separation. Ainsworth’s extensive experience of observing infant-mother interactions enabled her to identify the situations that were most crucial in attachment terms, and these formed the basis of the strange situation procedure.

 

The Strange Situation Procedure

Ainsworth and her colleagues then developed a method for assessing the strength of an individual infant’s attachment to her mother or caregiver (Ainsworth et al., 1978). The method is known as the Strange Situation, and has been widely used with 12- to 24-month-old infants in many countries worldwide. In essence, it is a standardised way of checking how well the infant uses the caregiver as a secure base for exploration, and is comforted by the caregiver after a mildly stressful experience.

The strange situation assesses infants’ responses to separations from, and subsequent reunions with, the caregiver [here, the mother], and their reactions to an unfamiliar woman [the so-called “stranger”]. In the testing room, there are only 2 chairs [one for the mother and one for the stranger] and a range of toys with which the infant can play.


Table A. The Strange Situation Procedure

As Table A shows, the episodes are ordered so that the infant’s attention should shift from the exploration of the environment to attachment behaviour towards the caregiver as the Strange Situation proceeds. The most crucial points are the infant’s responses to the 2 reunion episodes, and form the basis for assessing an infant’s security of attachment. The coding scheme for security attachment was developed by Ainsworth et al. (1978) and describes infant behaviour according to 4 indices:

1) Proximity-seeking
2) Contact-maintenance
3) Resistance
4) Avoidance

Referring to Table A, in a well-functioning attachment relationship, it would generally be assumed that the infant would use the mother as a base from which to explore [Episodes 2, 3 and the end of Episode 5], but be stressed by the mother’s absence [Episodes 4, 6 and 7; these episodes are cut short if the infant is overly distressed or the mother wants to return sooner]. Special attention is also given to the infant’s behaviour in the reunion episodes (5 and 8), to see if he or she is effectively comforted by the mother. Based on these measures, Ainsworth and others distinguished a number of different attachment types.
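As a purely illustrative sketch of how such a coding scheme hangs together [the ratings, thresholds and decision rule below are invented for demonstration; they are not Ainsworth’s actual coding criteria, which rely on trained raters and interactive scales], the four reunion indices can be pictured as a small data structure with a toy classification rule:

```python
# Hypothetical illustration only: the numeric thresholds and the rule
# below are invented to show the *shape* of a reunion coding scheme,
# not the real Ainsworth et al. (1978) criteria.

def classify_reunion(ratings):
    """ratings: dict rating each of the four indices from 1 (low) to 7 (high)."""
    proximity = ratings["proximity_seeking"]
    resistance = ratings["resistance"]
    avoidance = ratings["avoidance"]

    # High avoidance with little proximity-seeking: environment-directed
    if avoidance >= 5 and proximity <= 3:
        return "A (insecure-avoidant)"
    # High resistance: over-involved with the caregiver, hard to comfort
    if resistance >= 5:
        return "C (insecure-resistant/ambivalent)"
    # Approaches and is comforted, without avoidance or resistance
    if proximity >= 4 and resistance <= 3 and avoidance <= 3:
        return "B (secure)"
    # No coherent strategy on this toy rule (cf. the later Type D category)
    return "unclassified (cf. Type D)"

example = {"proximity_seeking": 6, "contact_maintenance": 5,
           "resistance": 2, "avoidance": 1}
print(classify_reunion(example))  # → B (secure)
```

The point of the sketch is simply that classification rests on the *pattern* across the reunion indices, not on any single behaviour in isolation.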

The 4 primary ones are:

Type A – Insecure Avoidant Attachment

Insecure-Avoidant (Type A) infants display high levels of environment-directed behaviour to the detriment of attachment behaviour towards the caregiver [i.e. Avoidant (A) – avoids caregiver and explores environment]. Insecure-avoidant infants display little if any proximity-seeking behaviour, and even tend to avoid the caregiver, by averting gaze or turning or moving away, if the caregiver attempts to make contact. Throughout the whole Strange Situation procedure, insecure-avoidant infants appear completely indifferent toward the caregiver, and treat the caregiver and the stranger in very similar ways; indeed, these infants may show less avoidance of the stranger than of the caregiver.

Note that, conversely, Insecure-Resistant / Ambivalent (Type C) infants show high levels of caregiver-directed attachment behaviour to the detriment of environmental exploration [the complete opposite of Type A].

Type B – Secure Attachment

Securely attached (Type B) infants strike the right balance between environmental exploration and attachment behaviour directed towards the caregiver [see PICTURE C].

PC Attachment as a balance of behaviour TA

PICTURE C. Attachment as a balance of behaviour directed toward mother and the environment. Source: Adapted from Meins (1997).

The presence of the caregiver in the pre-separation episodes affords them the security to turn their attention to exploration and play, in the confident knowledge that the caregiver will be available for comfort or support should it be required. However, attachment behaviour is triggered in securely attached infants during the separation episodes, leading them to seek contact, comfort, proximity or interaction with the caregiver upon reunion. Securely attached infants may or may not become distressed upon separation from caregivers, which makes the infant’s response to separation a relatively unreliable and poor indicator of attachment security. Regardless of their response to separation, however, securely attached children are marked by their positive and quick response to the caregiver’s return, displayed generally by their readiness to approach, greet and interact with the caregiver.

It is important to note that Type B [Secure] is the only “secure” attachment type in the group; all the rest are insecure attachment types and, in contrast to Type B, have the balance of infant attachment tipped to one extreme or the other [i.e. Avoidant (A) – avoids caregiver and explores environment / Resistant (C) – avoids environment and exhausts caregiver].

Type C – Insecure Resistant / Ambivalent

Insecure-resistant infants are over-involved with [to the point of exhausting] the caregiver, showing attachment behaviour even during the pre-separation episodes, with little or no interest in exploring the environment. The Insecure Resistant (Type C) infants tend to become extremely distressed upon separation, however, the over-activation of their attachment system hampers their ability to be comforted by the caregiver upon reunion – this leads to angry or petulant behaviour, with the infant resisting contact with and from the caregiver [in extreme cases this manifests itself as tantrum behaviour where the caregiver may sometimes be hit or kicked by the infant].

Type D – Insecure Disorganised

Besides the original 3 categories distinguished by Ainsworth et al. (1978), Main and Solomon (1986, 1990) established a fourth category, Type D [Insecure-Disorganised Attachment], for infants whose behaviours appeared not to match any of the A [Avoidant], B [Secure] and C [Resistant/Ambivalent] categories. These insecure-disorganised infants look disoriented during the strange situation procedure, and display no clear strategy for coping with separations from, and reunions with, their caregivers. Infants classified as insecure-disorganised may simultaneously display contradictory behaviour during the reunion episodes, such as seeking proximity while also displaying obvious avoidance [e.g. backing towards the caregiver, or approaching with head sharply averted]. Insecure-disorganised (Type D) infants may also react to reunion with fearful, stereotypical or odd behaviours [e.g. rocking themselves, ear pulling, or freezing]. Main and Hesse (1990) argued that, although the classification criteria for insecure-disorganised attachment are diverse, the characteristic disorganised behaviours all include a lack of coherence in the infant’s response to attachment distress and betray the “contradiction or inhibition of action as it is being undertaken” (p. 173).

Main and her colleagues (1985) believe that Type D [Insecure-Disorganised] is a useful extension of the original Ainsworth classification.

There are many subtypes of these main types, but most studies do not refer to them; and in older studies, Type D babies [who are often difficult to classify, as they do not show a clear pattern] were ‘forced’ into one of the original three categories, which is why results may be reported as either 3-way or 4-way classifications.

In most cases, Type B babies [secure – generally considered the most desirable, i.e. “normal”, although this is debated] are compared with Types A and C [insecure-avoidant and insecure-resistant/ambivalent], and Type B [secure] attachment tends to be seen as developmentally normal, or advantageous. Many criticisms have been made of the attachment typing resulting from the Strange Situation procedure (Lamb et al., 1984), particularly of the earlier work that was based on small samples, and of the normative assumption that “B is best”. Critics have also pointed out that the procedure only measures the relationship between mother and infant, not the characteristics of the infant. Since attachment security is a dyadic measure, infant-mother attachment type is not necessarily the same as infant-father attachment type. In fact, many studies have found that the attachment type to the father is not related to that with the mother; meta-analyses (Fox et al., 1991; van Ijzendoorn and De Wolff, 1997) found only a very modest association between the two.

However, the strange situation procedure is today a commonly and internationally used technique. One of the most important tests of the utility of attachment types is that they should allow us to predict other aspects of development, and we now have considerable evidence for this (see Bretherton and Waters, 1985, and Waters et al., 1995, for reviews).

Kochanska (2001) followed infants longitudinally from 9 to 33 months and observed their emotions in standard laboratory episodes designed to elicit fear, anger or joy. Over time, Type A (avoidant) infants became more fearful, Type C (resistant/ambivalent) infants became less joyful, and Type D (disorganised) infants became more angry; whereas Type B (secure) infants showed less fear, anger or distress. Using the strange situation procedure, secure attachment to the mother at 12 months has been found to predict curiosity and problem solving at age 2, social confidence at nursery school at age 3, empathy and independence at age 5 (Oppenheim et al., 1988), and a lack of behaviour problems [in boys] at age 6 (Lewis et al., 1984).

Is the Strange Situation valid across populations worldwide?

Van Ijzendoorn and Kroonenberg (1988) provided a cross-cultural comparison of strange situation studies in a variety of different countries. In American studies, some 70% of infants were classified as securely attached to their mothers (type B), some 20% as Type A, and some 10% as Type C. However, German investigators found that some 40-50% of infants were of Type A (Grossman et al., 1981), while a Japanese study found 35% to be of Type C (Miyake et al., 1985). These percentages do raise the question about the nature of “insecure attachment”: is it a less satisfactory mode of development or are these just different styles of interaction?

Takahashi (1990) argued that the Strange Situation must be interpreted carefully when it is applied across cultures. Takahashi found that Japanese infants were excessively distressed by the infant-alone episode [Episode 6 – Table A], because in Japanese culture babies are generally never left alone at 12 months. This is the reason why fewer Japanese babies scored B (secure). It is also important to note that there was no opportunity for them to show avoidance [and score A – insecure-avoidant], since the mother, seeing the level of distress, went straight on without hesitation to pick up the baby. This may also be a possible explanation as to why many Japanese babies were classified C (insecure-resistant/ambivalent) at 12 months [though they no longer are at 24 months, nor are adverse consequences apparent]. This distortion can be avoided by virtually omitting Episode 6 (see Table A) for such babies. Rothbaum et al. (2000) take a more radical stance in comparing the assessment of security in the USA and Japan. They argue that these two cultures place different cultural values on constructs such as independence, autonomy, social competence and sensitivity, such that some fundamental tenets of attachment theory are called into question as cross-cultural universals.

Cole (1998) suggested that we need information about the geographical trends in socio-behavioural patterns [culture, heritage, language, arts, etc.] under study if we are to understand the nature of the everyday interactions that shape the development of young children in relation to their caregivers. The strange situation may be a valid indicator, but we at least need to redefine the meaning of the categories “avoidant”, “secure” and “resistant/ambivalent” according to these geographical socio-behavioural patterns [culture]. He also argued that although the strange situation is a standardised test, it is really a different situation in different environmental circumstances. For a successful use of the strange situation in a non-Western culture [one that is not of Western European heritage], however, we can look at the Dogon people of Mali.

Infant-mother attachment among the Dogon of Mali

The study we are about to discuss, one of very few of its kind, took place among the Dogon people of eastern Mali, a primarily agrarian people living by subsistence farming of millet and other crops, along with a cash economy in the towns [see PICTURE D].


PICTURE D. Dogon mother spinning cotton with child on her lap

The study was carried out in 2 villages with a total population of about 400, and one town with a population of 9,000, with the researchers attempting to get complete coverage of infants born between mid-July and mid-September 1989. Not all infants could take part, due to relocation or refusal, and the researchers excluded 2 infants who had birth defects and 8 suffering from severe malnutrition. In addition, after recruitment, two infants died before or during the two-month testing period. Finally, 42 mother-infant pairs took part and provided good quality data. The infants were 10 to 12 months old at the time of testing.

The Dogon are a polygynous society, and mothers typically live in a compound with an open courtyard, often shared with co-wives. There was some degree of shared care of infants: about one half were cared for primarily or exclusively by the mother, and about one third primarily by the maternal grandmother, with the mother however remaining responsible for breastfeeding (see PICTURE E).


PICTURE E. Dogon mother breastfeeding her child.

Breastfeeding is a normative response by the mother to signs of distress in Dogon infants. Three related features of infant care among the Dogon – frequent breastfeeding on demand, quick response to infant distress, and constant proximity to the mother or caregiver – are seen as adaptive where infant mortality is high [as in some other traditional African cultures].

The researchers had several objectives in mind: to see whether the strange situation could be used successfully in Dogon culture; what distribution of attachment types would be obtained; whether infant security correlated with maternal sensitivity [a test of the Maternal Sensitivity Hypothesis]; whether infant attachment type related to patterns of attachment-related communications in mother-infant interaction [a test of what the authors call the Communication Hypothesis]; and whether frightened or frightening behaviour by the mother predicted disorganised infant attachment.

Three situations were used to obtain the relevant data, the behaviour being recorded on videotape in each case. One was rather novel – the Weigh-In, part of the regular well-infant examination, in which the mother handed over the infant to be weighed on a scale – a mildly stressful separation for the infant, especially in Dogon culture. The other two were more standard – the strange situation, carried out in an area of courtyard partitioned off by hanging mats; and two 15-minute observations in the infant’s home while the mother was cooking or bathing/caring for the infant.

The following data were obtained:

  • Infant attachment classification (from the strange situation)
  • A rating of infant security on a 9-point scale (from the strange situation)
  • Mother and infant communication related to attachment, coded by 5-point Communications Violations Rating scales (from the Weigh-in)
  • Maternal sensitivity, rated in terms of promptness, appropriateness and completeness of response to infant signals (from the home observations)
  • Frightened or frightening behaviours by the mother, such as aggressive approach, disorientation, trance state, rough handling as if baby is an object, on a 5-point scale (from the home observations and the Weigh-In).

[REMEMBER!!!! [although we are quite sure you know this already]: “r” is known as the correlation coefficient and tells us 2 things: (i) the direction of the relationship (+ or –) and (ii) the strength of the relationship: |r| ≈ .1 is a small effect, |r| ≈ .3 is a medium effect, |r| ≈ .5 is a large effect. The p-value is the critical decider of whether to reject the Null Hypothesis (i.e. the hypothesis that there is no real effect and any apparent relationship arose by chance): if p is small enough we reject the Null Hypothesis – if p < .05 we say the results are statistically significant, and if p < .01 we say they are HIGHLY statistically significant.]
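As a side note, the arithmetic behind “r” is simple enough to sketch in a few lines. The snippet below (plain Python; the numbers and function names are our own, made up purely for illustration – none of this is from the studies discussed here) computes Pearson’s r and applies the effect-size thresholds quoted above:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # covariance-like numerator and the two standard-deviation-like denominators
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def effect_size(r):
    """Label |r| using the conventional thresholds quoted in the text."""
    a = abs(r)
    if a >= .5:
        return "large"
    if a >= .3:
        return "medium"
    if a >= .1:
        return "small"
    return "negligible"

# A perfectly linear relationship gives r = 1.0; a perfectly
# inverse one gives r = -1.0 (direction is carried by the sign).
r_pos = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
r_neg = pearson_r([1, 2, 3], [3, 2, 1])
```

The p-value is not computed here: under the null hypothesis r is converted to t = r·√((n−2)/(1−r²)) and compared against the t-distribution, a step usually left to a statistics package.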

The strange situation was found to be feasible, following quite standard procedures. The distribution of attachment types was 67% B (Secure), 0% A (Avoidant), 8% C (Resistant/Ambivalent), and 25% D (or on a forced 3-way classification, 87% B, 0% A and 13% C). This is quite unusual in having no avoidant (A) classifications; D is high but not significantly greater than Western norms.

The Maternal Sensitivity Hypothesis received only weak support. The correlation between infant security and maternal sensitivity was r = .28, p < .10; the difference in means between attachment classifications was not statistically significant (B = 5.26, C = 5.00, D = 4.20).

The Communication Hypothesis did receive support. Infant security correlated -.54 with Communications Violations (p < .001), and the attachment classifications differed significantly (B = 2.66, C = 3.50, D = 3.89; p < .01).

Finally, frightened or frightening behaviour by the mother correlated r = -.40 (p < .01) with infant security, and was particularly high in children with disorganised attachment (B= 1.23, C = 1.33, D = 2.35; p < .01).

Besides demonstrating the general applicability of the strange situation procedure in a non-Western group with socio-behavioural patterns very different from our own, the findings provide support for the Communication Hypothesis. The case would have been stronger if the different kinds of communication patterns for each attachment classification had been described in more detail – for example, that insecure-resistant/ambivalent (C) infants would be inconsistent and often unable to convey their intent, or to terminate their own or another’s arousal, whereas insecure-disorganised (D) infants would “manifest contextually irrational behaviours and dysfluent communication” (p. 1451). As it is, the main findings show that insecure infants display more communications violations, but do not describe the detailed typology. Indeed, since some of the Communications Violations rating scales were of “avoidance, resistance and disorganisation” (p. 1456), there is a possible danger of conceptual overlap between this scale and the attachment classifications.

Although support for the Maternal Sensitivity Hypothesis was weak, the correlation of r = .28 is in line with the average of r = .24 found in the meta-analysis by De Wolff and Van Ijzendoorn (1997) on mainly Western samples. The researchers used a multiple regression analysis to examine the contributions of both maternal sensitivity and mothers’ frightened/frightening behaviour to attachment security. They found that the contribution of maternal sensitivity remained modest, whereas the contribution of mothers’ frightened/frightening behaviour was substantial and significant; ratings of maternal sensitivity do not normally take account of mothers’ frightened/frightening behaviour, and the researchers suggest that this might explain the modest effects found for maternal sensitivity to date.
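To make the idea of a multiple regression concrete: with two predictors (say, a sensitivity rating and a frightened/frightening-behaviour rating) and one outcome (security), ordinary least squares finds the coefficients that minimise the squared prediction error. The sketch below is purely illustrative – the function name, the solver and the toy data are our own, not True et al.’s analysis:

```python
def ols(X, y):
    """Fit y = b0 + b1*x1 + b2*x2 + ... by solving the normal
    equations (X'X)b = X'y with Gaussian elimination."""
    # prepend an intercept column of 1s to each row of predictors
    rows = [[1.0] + list(r) for r in X]
    n, k = len(rows), len(rows[0])
    # build X'X (k x k) and X'y (k)
    xtx = [[sum(rows[i][a] * rows[i][b] for i in range(n))
            for b in range(k)] for a in range(k)]
    xty = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Toy data generated from y = 1 + 2*x1 + 3*x2, so the fit recovers [1, 2, 3]
coeffs = ols([[0, 0], [1, 0], [0, 1], [1, 1]], [1, 3, 4, 6])
```

With real data, the relative size (and significance) of the two slope coefficients is what tells us which predictor contributes more, which is the form of comparison reported in the paragraph above.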

The absence of avoidant (A – avoids caregiver and favours exploration) infants is interesting, and the researchers argue that, given the close contact mothers maintained with Dogon infants and the normal use of breastfeeding as a comforting activity, it would be very difficult for a Dogon infant to develop an avoidant strategy [this may have some similarity with the low proportion of A-type infants in Japan]. If avoidant (A) attachment is rare or absent when infants are nursed on demand (which probably characterised much of human evolution), this might suggest that A-type attachment was and is rare except in Western samples, in which infants tend to be fed on schedule, and often by bottle rather than breast, so that the attachment and feeding systems are effectively separated.

Most Dogon infants showed secure (B) attachment, but 25% scored as disorganised (D) [though mostly with secure as the forced 3-way classification]. The researchers comment that the frightened or frightening behaviours were mild to moderate, and did not constitute physical abuse. But why should mothers show these sorts of behaviour at all? An intriguing possibility is that it is related to the high level of infant mortality prevalent among the Dogon. About one third of infants died before five years of age, and most mothers would have experienced early bereavement. Unresolved loss experienced by a mother is hypothesised to lead to disorganised (D) attachment in her infant; perhaps frightened behaviours are more rational, or expected, when the risks for infants are so much higher.

The study went to great efforts to be sensitive to the geographically specific socio-behavioural patterns (culture) of the venue when using procedures and instruments derived mainly from Western samples. A Malian researcher assisted in developing the maternal sensitivity coding, and Dogon women acted as strangers in the strange situation procedure. The Weigh-In and home observations were natural settings. The authors comment, however, that future work might make more effort to tap the perceptions of mothering and attachment held by the Dogon people themselves, in addition to the constructs coming from Western psychology.

(True et al., 2001)

 

Back Home in the West: Why do infants develop certain attachment types?

Child reading

Individual differences in the caregiver’s sensitivity to the infant’s cues were the earliest reported predictors of attachment security. Ainsworth and colleagues (Ainsworth, Bell & Stayton, 1971, 1974; Ainsworth et al., 1978) found that mothers who responded most sensitively to their infants’ cues during the first year of life tended subsequently to have securely attached infants. The insecure-avoidant (Type A) pattern of attachment was associated with mothers who tended to reject or ignore their infants’ cues, and inconsistent patterns of mothering were related to the insecure-resistant/ambivalent (Type C) pattern of attachment. Although further research has largely supported this link between early caregiver sensitivity and later attachment security, the strength of the relation originally reported has not been replicated. For example, De Wolff and van Ijzendoorn (1997) conducted a meta-analysis to explore the parental antecedents of attachment security using data from 21 studies involving over 1,000 infant-mother dyads, and reported a moderate effect size for the relation between sensitivity and attachment security (r = .24), compared with the large effect (r = .85) in Ainsworth et al.’s (1978) study. This led De Wolff and van Ijzendoorn to conclude that “sensitivity cannot be considered to be the exclusive and most important factor in the development of attachment” (p. 585).

It seemed that the way the construct of sensitivity had been defined might be responsible for this result, so we return to Ainsworth et al.’s (1971, 1974) original definitions in order to better understand the predictors of attachment security. Particularly influential here is Ainsworth’s focus on the caregiver’s ability not merely to respond to the infant, but to respond in a way consistent with the infant’s cue. For example, Ainsworth et al. (1971) describe how mothers of securely attached infants appeared “capable of perceiving things from the child’s point of view” (p. 43), whereas maternal insensitivity involved the mother attempting to “socialise with the baby when he is hungry, play with him when he is tired, and feed him when he is trying to initiate social interaction” (Ainsworth et al., 1974, p. 129). Meins et al. (2001) therefore argued that the critical aspect of sensitivity was the caregiver’s ability to “read” the infant’s signals accurately, so that the response could be matched to the cue coming from the child.


In order to test this proposal, Meins et al. (2001) obtained measures of mothers’ ability to read their 6-month-olds’ signals appropriately (so-called mind-mindedness), and investigated the comparative strength of mind-mindedness versus general maternal sensitivity in predicting subsequent infant-mother attachment security. Meins et al. reported that maternal mind-mindedness was a better predictor of attachment security 6 months later than was maternal sensitivity, with mind-mindedness accounting for almost twice the variance in attachment security that was accounted for by sensitivity.

This seems a strong conclusion, particularly since genetic factors appear to contribute little to attachment type: van Ijzendoorn et al. (2000) argued that genetics has a modest influence, if any. This is supported by a twin study by O’Connor and Croft (2001), who assessed 110 twin pairs in the strange situation and found concordance of 70% in monozygotic twins and 64% in dizygotic twins – not significantly different. Their model estimated that only 14% of the variance in attachment type was due to genetics, 32% to shared environment, and 53% to non-shared environment.

A study of attachments formed by babies to foster mothers (Dozier et al., 2001) found as good a concordance between mothers’ attachment state of mind (from the Adult Attachment Interview, see below) and infant attachment type from the strange situation, as for biological mother-infant pairs, once again suggesting little genetic influence on attachment type.

So it is fairly well accepted today that mothers’ mind-mindedness is an important construct. It is defined as the mother treating her infant as an individual with a mind, rather than just an organism or small creature with needs to be satisfied; the emphasis is on responding to the infant’s inferred state of mind, rather than simply to their behaviour. In their longitudinal study of 71 mother-infant pairs, Meins et al. (2001) found that maternal sensitivity (responding to infant cues) and some aspects of mind-mindedness, especially appropriate mind-related comments by the mother, measured at six months, both independently predicted security of attachment at 12 months. True et al. (2001) also found evidence that mothers’ frightened or frightening behaviour may contribute independently to attachment security (refer to the Dogon study above – Picture D and Picture E).

We should also note that a large amount of the variance in attachment type appears to be related to non-shared environment, and this cannot be explained by generalised maternal sensitivity. It is highly probable that mothers are more sensitive to, and behave differently towards, some infants than others, depending on birth order, gender and infant characteristics, suggesting the need for a family systems perspective on these issues (van Ijzendoorn et al., 2000).

 

Attachment Beyond Infancy & The Internal Working Model

Attachment theory proposes that children use their early experiences with their caregivers to form internal working models (Bowlby, 1969/1982, 1980), which incorporate representations of themselves, their caregivers, and their relationships with others. These internal working models are then used by the child as templates for interacting with others. Consequently, because of the sensitive, loving support that their caregivers have supplied, securely attached children are self-confident and have a model of themselves as being worthy; they therefore expect others to behave in a sensitive and supportive fashion. Conversely, given the patterns of interaction typically experienced by avoidant and resistant infants, insecurely attached children expect people to be rejecting, or inconsistent and ambivalent, when interacting with them.

The strange situation measures security of attachment in terms of behaviours, especially how the infant behaves at reunion after a separation. The procedure is generally used with infants between 12 and 24 months old. For 3- to 6-year-olds, variants of the strange situation, such as reunion episodes after separation, have been used with some success (Main and Cassidy, 1988).

Research during the last 10 years has seen attachment become a lifespan construct, with corresponding attempts to measure it at different developmental stages (see Melhuish, 1993, for a review). It has been revealed that as infants grow older, in Bowlby’s 4th and 5th stages, attachment relationships become less dependent on physical proximity and overt behaviour, and more dependent on abstract qualities of the relationship – such as affection, trust and approval – internalised in the child, and later in the adult.

Research has revealed that it is useful to think of internal representations of the relationship in the child’s mind; the child is thought of as having an internal working model of his or her relationship with the mother, and with other attachment figures (Bowlby, 1988; Main et al., 1985). These are characterised as cognitive structures embodying the memories of day-to-day interactions with the attachment figure. They may be ‘schemas’ or ‘event scripts’ that guide the child’s actions with the attachment figure, based on their previous interactions and the expectations and affective experiences associated with them.

Different attachment types would be expected to produce differing working models of the relationship. Secure (Type B) attachment would be based on models of trust and affection [and a Type B infant would be able to communicate openly and directly about attachment-related circumstances, such as how they felt if left alone for a while]. By contrast, a child with an insecure-avoidant (Type A) attachment may have an internal model of the mother that leaves the child with no expectation of secure comforting from her when distressed [the mother may in fact reject the child’s approaches]. The child’s action rules then become focused on avoiding her, inhibiting approaches that could be ineffective and instead lead to further distress; and this can be problematic, as there is less open communication between mother and child, and their respective internal working models of each other are not being accurately updated.

Insecure-resistant/ambivalent (Type C) infants might not know what to expect from their mother, and they in turn would be inconsistent in their communication with her and often unable to convey their intent.

PF - Boy by Land Rover - from Separation Anxiety Test

PICTURE F. Boy by Land Rover: A picture from the Separation Anxiety Test

Over the last 15 years, researchers have attempted to measure attachment quality in older children [as far as the empirical methods allowed in terms of construct validity and internal consistency] by trying to tap into their internal working models (Stevenson-Hinde and Verschueren, 2002). One method involves narrative tasks, often using doll-play: children use a doll family and some props to complete a set of standardised attachment-related story beginnings. Another is the Separation Anxiety Test, in which children or adolescents respond to photographs showing separation experiences [see Picture F for an example]. The child is questioned about how the child in the photograph would “feel and act”, and then how he/she [the participating child] would feel and act in that situation (Main et al., 1985). The test was found to have good inter-rater reliability and consistency for 8- to 12-year-olds, and large differences in responses were found between children receiving clinical treatment for behaviour disturbance and a normal control group (see Table B).

TB - Two Protocols from the Separation Anxiety Test

TABLE B. Two protocols from the Separation Anxiety Test

Securely attached children generally acknowledge the anxiety due to the separation but come up with feasible coping responses; insecurely attached children generally deny the anxiety, or give inappropriate or bizarre coping responses.

 

The Adult Attachment Interview

Internal working models of relationships can normally be updated or modified as new interactions develop. It is likely that for younger children these changes must be based on actual physical encounters. However, Main et al. (1985) suggested that adolescents or adults who have achieved formal operational thinking [Jean Piaget’s 4th and final stage, reached at around the age of 12, as explained in our essay] can modify their internal working models without the need for such direct interaction. In order to measure attachment in older adolescents and adults, they developed the Adult Attachment Interview (AAI). This is a semi-structured interview that probes memories of one’s own early childhood experiences. The transcripts are coded not on the basis of the experiences themselves so much as on how the person reflects on and evaluates them, and on how coherent the total account is [adults’ attachment classifications are not based on the nature of their actual childhood experiences, but on the way they represent these experiences, be they good or bad]. Interviewees are generally asked to describe their childhood relationships with mother and father, and to recall times when they were separated from their parents or felt upset or rejected; specific questions also deal with experiences of loss and abuse. According to their responses during the AAI, adults are placed into one of four attachment categories: (i) Autonomous, (ii) Dismissing, (iii) Preoccupied [or Enmeshed] and (iv) Unresolved.

 

(i) Autonomous Attachment

Autonomous adults are able to give coherent, well-balanced accounts of their attachment experiences, showing clear valuing of close, personally meaningful relationships. Adults classified as autonomous may have experienced problems in childhood, or even had very difficult or abusive upbringings, but they can generally talk openly about these negative experiences, and most seem to have managed to resolve any early difficulties and conflicts. In contrast to the open and balanced way in which autonomous adults talk about childhood experiences, adults in the remaining three categories have considerable difficulty in talking about attachment relationships.

 

(ii) Dismissing Attachment

Dismissing adults deny the importance of attachment experiences and insist they cannot recall childhood events and emotions, or provide idealised representations of the attachment relationship that they are unable to corroborate with real-life events [i.e. they dismiss attachment relationships as of little importance, concern or influence].

 

(iii) Preoccupied [or Enmeshed] Attachment

Preoccupied adults lack the ability to move on from their childhood experiences, and are still over-involved with issues relating to the early attachment relationship [generally preoccupied with dependency on their own parents and still struggling to please them].

 

(iv) Unresolved Attachment

The final category is reserved for adults who are unable to resolve feelings relating to the death of a loved one or to abuse they may have suffered [people who have not come to terms with a traumatic experience, or worked through the mourning process].

It should be noted that people from lower socio-economic groups are slightly more likely to score as Dismissing. The larger difference, however, is in people receiving clinical treatment, the great majority of whom do not score as Autonomous on the AAI.

 

Are attachments stable over time? From Infancy to Adult Attachment Type

The main question we should be asking ourselves is whether security of attachment changes through life, or whether infant-parent attachment sets the pattern not only for later attachment in childhood, but even for one’s own future parenting. As attachment has become a lifespan construct, these questions have generated considerable research and debate.

Many studies have now spanned a period of some 20 years to examine whether strange situation classification in infancy predicts Adult Attachment Interview (AAI) classification in young adulthood (Lewis et al., 2000; Waters et al., 2000). The outcomes vary, but some of these studies have found significant continuity of the three main attachment types: from Secure to Autonomous, Avoidant to Dismissing, and Resistant (Ambivalent) to Enmeshed. Several studies have also found relationships between discontinuities in attachment classification and negative life events, such as the experience of parental divorce.

 

Relationship between Adult Attachment Interview (AAI) and Infant-Parent Attachments

Adult Attachment Interview (AAI) classifications have been found to relate systematically to the security of the infant-parent attachment relationship. Autonomous parents are more likely to have securely attached infants, while parents in the three non-autonomous groups – Dismissing, Preoccupied and Unresolved – are much more likely to form insecure attachment relationships with their infants. This relationship has been identified for both infant-mother (e.g. Fonagy et al., 1991; Levine et al., 1991) and infant-father (Steele et al., 1996) attachment. Furthermore, unresolved maternal AAI classification has been identified as a predictor of insecure-disorganised attachment (Main & Hesse, 1990; van Ijzendoorn, 1995). Thus, the way in which parents represent their own childhood attachment experiences is related to the types of relationship they form with their children.

 

Are attachments stable over generations?

On top of the degree of continuity over time in an individual’s attachment type, there is also evidence for the transmission of attachment type across generations – specifically, from the parent’s AAI (Adult Attachment Interview) coding to their infant’s strange situation coding. Main et al. (1985) reported some evidence for such a link, and indeed the AAI coding system is based on it: it was argued that Autonomous adults would have Secure infants; Dismissing adults, Avoidant infants; Enmeshed adults, Resistant (Ambivalent) infants; and Unresolved adults, Disorganised infants [see Table C].

TC - Hypothesised relationships between maternal state of mind (AAI), maternal behaviour, and child attachment type

TABLE C. Hypothesised relationships between maternal state of mind (from the AAI – Adult Attachment Interview), maternal behaviour, and child attachment type

Van Ijzendoorn (1995) looked at a large number of available studies in the decade since Main’s work and found considerable linkage between adult AAI (Adult Attachment Interview) and infant Strange Situation coding; Van Ijzendoorn argued that this “intergenerational transmission” of attachment may be via parent responsiveness and sensitivity. We discussed above how this is only a partial explanation, and other aspects of maternal behaviour and of the home environment may also be involved.

We have considerable evidence for some degree of continuity of attachment security through life, and on to the next generation; but also considerable evidence that this can be affected by life events. An adult’s attachment security can also be influenced by counselling, clinical treatment, or simply by reflection [self mind-mindedness].

Some insight into this matter comes from a longitudinal study by Fonagy et al. (1994) of 100 mothers and 100 fathers in London, who were given the AAI and other measures shortly before their child was born. The strange situation was used subsequently to measure security of attachment, to mother at 12 months and to father at 18 months. As in many other studies, the parents’ AAI scores predicted the infants’ strange situation scores. The researchers also calculated estimates of the amount of disrupted parenting and deprivation the parents had experienced themselves, and used these measures to find out whether they influenced infant attachment, which they did. However, the amount of disrupted parenting and deprivation the parents had experienced interacted strongly with the way in which the parents had dealt with their own representations of their experiences of being parented. In coding the AAI, the researchers developed a Reflective Self-function scale to assess parents’ ability to reflect on conscious and unconscious psychological states, and on conflicting beliefs and desires. Of 17 mothers with deprived parenting and low Reflective Self-function scores, 16 had insecurely attached infants, as might have been expected. In complete contrast, all 10 mothers who had experienced deprivation but had high Reflective Self-function scores had securely attached infants. It was argued that reflective self-function can change people’s internal working models, conferring resilience to adversity and offering a way of breaking the inter-generational transmission of insecure attachment.

Adults who experienced difficult childhoods but have overcome early adversity and insecure attachment through a process of reflection, counselling or clinical help are known as “earned secures”, and can be distinguished from “continuous secures”, who had a positive upbringing and what most might qualify as a “normal” childhood. Phelps et al. (1998) made home observations of mothers and their 27-month-old children, and found that earned secures, like continuous secures, showed positive parenting; under conditions of stress, both these groups showed more positive parenting than insecure mothers.

Another fascinating perspective on this issue of inter-generational transmission of insecure attachments comes from the Holocaust study (Bar-On et al., 1998; van Ijzendoorn et al., 1999). The Holocaust refers to the persecution and murder of Jews and other minorities in the concentration camps of Nazi Germany during World War II (1939-45).


Jewish children: Here we see Jewish school children in 1942. They look like children who are just beginning school, and at least two teachers are with them. By this time Jewish children had been forced out of public schools, although for a short time they were allowed to attend schools set up by the Jewish community. At the time this photograph was taken, the transports to the deportation camps had already begun.

Many of the Jews held in the camps were degraded, tortured and killed, leaving behind orphaned children in deeply traumatic circumstances.

Our question here, with regard to the focal point of this section, i.e. “insecure attachments”, is whether such traumatic experiences could have an impact on attachment, and whether that impact could have been transmitted inter-generationally to the descendants of survivors. This issue of inter-generational transmission of insecure attachments is the focus of the Holocaust study (Bar-On et al., 1998; van Ijzendoorn et al., 1999). The study we are looking at encompasses 3 generations: grandparents who went through the Holocaust, typically as children who had themselves lost their parents; their children, now parents; and their grandchildren. These generations are compared with comparable 3-generation families who had not experienced the Holocaust.

It was found that the effects of the Holocaust were evident in the grandparent generation, who showed distinctive patterns on the AAI (Adult Attachment Interview), scoring high on Unresolved, as would have been predicted, and high on unusual beliefs, another predicted effect of trauma and unresolved attachment issues. They also displayed avoidance of the Holocaust topic; a very common finding was that the experiences had been so horrific that survivors were unable to talk about them with their own offspring.

However, inter-generational transmission of attachment type was quite low in this sample. The Holocaust parents (‘children of the Holocaust’) showed rather small differences from controls, scoring just slightly higher on Unresolved on the AAI. This normalisation process continued into the next generation (‘grandchildren of the Holocaust’), for whom no significant differences in attachment from controls were found. This seems to suggest only a minor trend towards “Unresolved” attachment across the generations [note that Unresolved attachment is linked to Disorganised attachment in infants; today some question whether Type-B securely attached infants really represent the “best” outcome, and whether other personality characteristics also help shape the individual’s uniqueness throughout life, such as reflective abilities and internal working models (reshaped by other meaningful events/relationships). It is nevertheless important to note that attachment types are known to persist and be transmitted over generations for the majority of people, particularly those with lower self-reflective skills].

 

Disorganised Attachment and Unresolved Attachment Representation

The pattern of infant attachment classed as “Disorganised” in the Strange Situation procedure was only acknowledged much later than the other well-known attachment types [Secure, Insecure-Avoidant & Insecure-Resistant/Ambivalent], and appears to have rather distinctive correlates.

It has been noted that Disorganised infants may show stereotypic behaviours such as freezing, or hair-pulling; contradictory behaviour such as avoiding the caregiver [e.g. mother] despite experiencing severe distress on separation; and also misdirected behaviour such as seeking proximity to the stranger instead of the caregiver. These characteristic behaviours are known as signs of Unresolved stress and anxiety, and for these types of infants the caregiver is a source of fright rather than a symbol of safety (See Table C) – (see Vondra and Barnett, 1999, for a collection of recent research).

Van Ijzendoorn, Schuengel and Bakermans-Kranenburg (1999) reviewed a series of studies on Disorganised attachment and argued that it is mainly caused by environmental factors [i.e. exposure], although there is also some evidence for genetic factors in Disorganised infant attachment. It is known to be higher in infants with severe neurological abnormalities [e.g. cerebral palsy, autism, Down’s syndrome], at around 35%, compared with around 15% in normal samples. However, Type-D (Disorganised) attachment is also especially high among infants of mothers with alcohol or drug abuse problems (43%) or mothers who have maltreated or abused their infants (48%). Type-D attachment is not higher in infants with physical disabilities, and it is not strongly related to maternal sensitivity as such; there is, however, evidence relating it to maternal unresolved loss or trauma [as with the Holocaust generation mentioned above].
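As a rough back-of-the-envelope illustration (not part of the review itself), the group rates quoted above can be expressed as simple risk ratios against the roughly 15% baseline in normal samples; the percentages used are the approximate summary figures cited in the text, not raw data:

```python
# Illustrative only: comparing approximate Type-D (Disorganised) prevalence
# figures quoted from van Ijzendoorn et al. (1999) against the ~15% baseline.
# These percentages are rounded summary values, not the review's raw data.

def risk_ratio(rate_group, rate_baseline):
    """How many times more prevalent Type-D attachment is in a risk group."""
    return rate_group / rate_baseline

BASELINE = 0.15  # approximate rate in normal samples

groups = {
    "severe neurological abnormality": 0.35,
    "maternal alcohol/drug abuse": 0.43,
    "maltreatment or abuse": 0.48,
}

for name, rate in groups.items():
    print(f"{name}: {risk_ratio(rate, BASELINE):.1f}x the baseline rate")
```

On these figures, maltreated infants show roughly 3.2 times the baseline rate of Disorganised attachment, consistent with the text’s emphasis on the caregiving environment.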

While the Maternal Sensitivity Hypothesis suggests that maternal (in)sensitivity predicts secure (B) or insecure (A, C) attachment, a different hypothesis has been proposed to explain Disorganised Type-D attachment (see Table C): that it results from frightened or frightening behaviour by the caregiver (generally the mother) towards the infant, stemming from the caregiver’s own unresolved mental state related to attachment [e.g. abuse by her own parent; the violent death of a parent or close one; the sudden loss of a child].

A study in London by Hughes et al. (2001) compared the Unresolved scores on the AAI (Adult Attachment Interview) of 53 mothers whose infants were born after a previous stillbirth with those of 53 control mothers, and found that 58% of the mothers who had previously had a stillborn infant scored as Unresolved, compared with 8% of controls; furthermore, 36% had Disorganised (Type D) infants, compared with 13% of controls. A statistical path analysis [looking at the relationships among all the variables] showed that the stillbirth experience predicted an Unresolved maternal state of mind, and that it was this variable [i.e. the Unresolved state of mind] that in turn predicted infant disorganisation.
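To give a sense of how strong the reported contrast is, a Pearson chi-square statistic can be computed on the 2x2 table implied by the percentages above. The counts below (roughly 31 of 53 vs. 4 of 53 mothers scoring Unresolved) are reconstructions from the reported 58% and 8%, not Hughes et al.’s raw data, so this is a sketch rather than a reanalysis:

```python
# Pearson chi-square for a 2x2 contingency table, computed by hand.
# Rows: stillbirth group vs. controls; columns: Unresolved yes/no.
# Counts approximated from the reported percentages (58% and 8% of n=53 each).

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_2x2(31, 22, 4, 49)  # ~31 vs ~4 Unresolved out of 53 each
print(round(stat, 1))
```

The resulting statistic of about 31 on 1 degree of freedom dwarfs the conventional 3.84 significance threshold (p < .05), in line with the very large group difference the study reports.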

The hypothesised behavioural aspects of maternal unresolved state of mind [and Disorganisation in infants] were supported by the study in Mali reported above. A study in Germany by Jacobsen et al. (2000), in which 33 children were examined along with their mothers at 6 years of age, provided further support: Disorganised attachment (assessed from a reunion episode) was significantly related to high levels of maternal expressed emotion, defined as speech to the child that was severely critical of them or over-involved with them.

Van Ijzendoorn et al. (1999), in a review, also found that insecure-Disorganised (Type D) attachment in infants predicted later aggressive behaviour and child psychopathology. Carlson (1998) found significant prediction from attachment disorganisation at 24 and 42 months to child behaviour problems in preschool, elementary school and high school. Taking into consideration the prior links to parental maltreatment and abuse, it is highly likely that the Disorganised (Type D) attachment type will be found to be the most relevant aspect of attachment in understanding severely maladaptive or antisocial behaviour in later life.

 

Origins of the Insecure Disorganised State of Mind

The origins of insecure-disorganised (Type D) attachment are becoming an increasingly researched topic, perhaps because early disorganisation (Type D) has been identified as a risk factor for later psychopathology (Fearon et al., 2010; van Ijzendoorn et al., 1999), with studies identifying a link between insecure-disorganised attachment in infancy and behavioural problems in later childhood (Lyons-Ruth et al., 1993; Munson et al., 2001; Shaw et al., 1996).

Main and Hesse’s seminal work (1990; Hesse & Main, 2000) led to the argument that insecure-disorganised (Type D) infants have not been able to establish an organised pattern of attachment because they have been frightened by their caregivers, or have experienced the caregivers themselves showing fearful behaviour. This is supported by findings linking insecure-disorganised attachment to infant maltreatment or hostile caregiving (Carlson, Cicchetti, Barnett & Braunwald, 1989; Lyons-Ruth et al., 1991), maternal depression (Radke-Yarrow et al., 1995), and maternal histories of loss through separation, divorce and death (Lyons-Ruth et al., 1991).

In a meta-analytic review, however, van Ijzendoorn et al. (1999) reported that 15% of infants in non-clinical middle-class American samples are classified as insecure-disorganised (Type D), suggesting that pathological parenting practices cannot fully account for disorganised attachment in infants. As highlighted by Bernier and Meins (2008), the origins of attachment disorganisation are very complex, involving factors ranging from infants’ genetic make-up to parents’ experiences of loss or abuse, and much remains to be learned about why some infants are unable to form an organised attachment relationship with the caregiver.

 

Links between Attachment & Emotional Development

It is fundamental to understand and grasp the importance of the early stages of life, as the brain’s cognitive patterns are shaped by these early experiences that tend to have a lasting effect on personality. The infant’s earliest mode of exploring and engaging with the world revolves around conveying emotions: fear, discomfort, pain, contentment, happiness.

As we have already explained above in the section exploring the reasons why infants develop particular attachment types, the caregiver’s responses to such emotional cues [not sensitivity as such, but mind-mindedness, i.e. the ability to respond “appropriately” to the cues] and their representations of their own childhood emotional experiences [generally measured with the AAI as Autonomous, Dismissing, Preoccupied or Unresolved] are accepted as strong predictors of attachment security [i.e. Autonomous/Secure, Dismissing/Avoidant, Preoccupied/Resistant and Unresolved/Disorganised].

With this in mind, it is quite surprising that so little research has been conducted on the relation between security and children’s emotional development.

There are 2 main ways in which links between attachment and emotional development have been addressed:

(i) Researchers have investigated whether infants’ early emotional experiences predict attachment security;

(ii) Researchers have explored whether the security of the infant-caregiver attachment relationship predicts children’s subsequent emotional development.

 

Emotional Regulation and Attachment Security

This section is focussed mainly on how caregivers’ ways of responding to the infants’ emotional cues predict later attachment security.

Mothers of insecure-avoidant infants have been found to withdraw when their infants express negative emotions (Escher-Graeub & Grossmann, 1983). Conversely, mothers of insecure-resistant infants typically find it difficult to comfort their infants effectively, meaning that their responses result in prolonging their infants’ feelings of distress (Ainsworth et al., 1978).

Cassidy (1994) argued that caregivers may enable their children to develop good emotional coping and regulation strategies through their willingness to acknowledge and respond to their children’s emotions. She also argued that secure attachment is characterised by the openness with which the caregiver [mother, father, etc.] recognises and discusses the full spectrum of emotions [which leads to the child’s understanding that emotions should not be suppressed and can be dealt with effectively]. Insecure-avoidant attachment is generally associated with caregivers failing to respond to their infants’ negative emotions because of their tendency to bias interactions in favour of positive emotional expressions. By contrast, insecure-resistant attachment is associated with the caregiver amplifying the infant’s negative affect. Cassidy maintained that mothers of insecure-resistant children fail to emphasise the importance of attachment relationships, and therefore adopt strategies that fail to help the child regulate negative emotion, hence prolonging the need for contact with the mother [or caregiver].

 

Affect Attunement

Cassidy’s views are in line with other theoretical positions, such as Stern’s (1985) characterisation of sensitive parenting in terms of affect attunement, with the sensitive mother being attuned to all of her infant’s emotions, accepting of them, and sharing in their affective content.

Insensitive mothers on the other hand, undermatch or overmatch their infants’ emotional signals because of their own perceptual biases.

In support of these approaches, Pauli-Pott and Mertesacker’s (2009) investigation revealed that mismatches between maternal and infant affect at 4 months [e.g. the mother shows positive affect while her infant demonstrates neutral or negative affect] predicted insecure mother-infant attachment at 18 months. Mind-mindedness is also operationalised in terms of the caregiver’s tendency to accurately interpret the infant’s cognitions and emotions, and has been found to predict later attachment security (Meins et al., 2001). Thus, a mother observing her infant displaying surprise in response to a jack-in-the-box, and commenting appropriately on that mental state (e.g. “my infant is surprised”), is associated with subsequent secure attachment. In contrast, insecure attachment is related to mothers misreading their infants’ internal states by, for example, commenting that the infant is scared when no cue to suggest such an emotion is present in the infant’s overt behaviour. In more recent work it has been found that these inappropriate mind-related comments are particularly common in mothers of insecure-resistant infants, with mothers in this group being more likely to comment inappropriately on their infants’ thoughts and feelings than their counterparts in the secure, insecure-avoidant and insecure-disorganised groups.

Evidence suggests that mothers in the insecure-avoidant and insecure-resistant groups are aware of using over-controlling and under-controlling strategies respectively in coping with their children’s negative emotions. Berlin and Cassidy (2003) followed up a sample of infants who had been assessed in the strange situation in infancy, questioning the mothers when the children were aged 3 about how they dealt with their child’s emotional expressiveness. They found that mothers in the insecure-Avoidant (Type A) group reported the greatest control of their 3-year-olds’ negative emotional expressiveness [e.g. expressions of anger or fear], whereas mothers in the insecure-Resistant (Ambivalent, Type C) group reported the least control of their children’s expression of negative emotions.

These findings suggest that maternal behaviours associated with avoidant and resistant attachment that have been observed in infancy are stable and persist into the preschool years.

Security-related differences in the way in which children regulate their emotions are also in line with Cassidy’s (1994) approach. Spangler and Grossman (1993) took physiological measures of infant distress during the strange situation procedure and compared these with infants’ outward displays of upset and negative affect. The physiological measures showed that infants in the insecure-Avoidant (Type A) group were as distressed as, or more distressed than, their secure (Type B) counterparts, despite the absence of overt behavioural distress in the insecure-Avoidant infants. Spangler and Grossman therefore concluded that insecure-Avoidant infants mask or dampen their expression of negative emotions as a way of coping with the fact that their caregivers are likely to ignore or reject their bids for contact and comfort when they are distressed.

Belsky, Spritz and Crnic (1996) reported that 3-year-olds who had been securely attached in infancy were more likely to recall the positive emotional events that they had witnessed in a puppet show, whereas insecurely attached children tended to attend to and remember only the negative events. On the same note, Kirsch and Cassidy (1997) found that both secure and insecure-resistant attachment in infancy were associated, at 3 years of age, with better recall of a story in which a mother responded sensitively to her child than of a story in which the child was rejected.

In contrast, insecure-Avoidant infants showed no difference in their recall of the responsive versus rejecting stories. Kirsch and Cassidy also found that 3-year-olds classified as insecure in infancy were more likely than those in the secure group to look away from drawings depicting mother-child engagement.

These findings suggest that the positive experiences of secure infants with their caregivers may result in these children attending more to positive emotional events because they are consistent with their attachment security.

 

__________

 

(III) The Genetic/Psychosexual Model of Development (Sigmund Freud)

“For generations almost every branch of human knowledge will be enriched and illuminated by the imagination of Freud” (Jane Harrison, 1850- 1928)

The Genetic Model of Psychosexual Stages

The “genetic” model that we are now going to explore has little to do with genes; the term is used here in its older sense of “genesis”, relating to the development of the child. Sigmund Freud proposed that childhood development proceeds through a series of distinct stages to adulthood, each with its own themes and preoccupations.

The stages, Freud proposed, are based on the life-drive present in all organisms. It seems logical, for a physician who carried out empirical work on the sexual organs of eels, to assume that all organisms have an embedded urge for “life” [i.e. the life-drive to keep the individual and its species alive, which involves sexual selection and fertilisation achieved through sex] that is primarily sexual, though some have argued that it can be interpreted (unconsciously or consciously) in other forms [as the flamboyant French psychoanalyst Jacques Lacan proposed in his theory of the Symbolic, the Imaginary and the Real] to suit a sophisticated society [e.g. France] in all its dimensions. The psychosexual stages, Freud proposed, are thus understood to be organised around the child’s emerging sexuality.

It is important, however, not to exaggerate or misinterpret Freud’s assumption, and to remember the logic and vital purpose behind the sexual (life) drive: the organism’s own existence and continuity [breeding]. This is also a very good discussion point for the 21st century, as it seems to imply that all healthy organisms should have healthy sexual drives; whether these should “always” find expression through genital sexual acts with another organism is debatable from an ethical and moral perspective [especially for those not in a healthy and stable relationship]. Hence, many psychologists recommend masturbation as a healthy and safe alternative for managing excessive sexual desire in both young people and adults.

In the account of the child’s emerging “sexuality”, the term “sexual drive” itself meant more than simply adult genital sexuality; from a psychological perspective, it referred broadly to a physiological/biological sense of “pleasure in the body”, closer to “sensuality”. For many psychologists who built their foundations on aspects of the Freudian perspective, adult sexuality is assumed to be the culmination of an orderly set of steps in which the child’s “psychosexual” focus shifts from one part of the body to another, with these body parts or “erotogenic zones” all having something in common as generators of pleasure: they are orifices lined with sensitive mucous membranes.

Hence, Sigmund Freud may have adequately captured his position on mental health in the statement that “the only unnatural sexual behaviour is none at all”, noting once again that the term “sexual”, from a psychologist exploring the developmental stages of a child, generally refers more to “sensuality”. Infant sensuality is thus initially centred on the mouth (oral cavity), followed by the anus and then the genitals in early childhood. After some characteristic drama at about the age of 5, the child’s sexuality goes almost completely dormant for a few years, before re-emerging with a vengeance [a rush of barely managed sexual feelings] when puberty hits.

As the debate over the development of the mind itself as an entity [one that reflects, in linguistic form, the desires, both conscious and unconscious, of the human organism] continues among psychologists, we are also familiar with critics [mostly from the reductionist schools of thought (e.g. Pavlovian), such as cognitive-behavioural enthusiasts and the medical establishment with its ally, the pharmaceutical industry] who have not been entirely positive about Freud’s contribution to knowledge, and who remain unconvinced [perhaps due to a methodological epistemology ill-equipped to cope with matters of the mind] about the unconscious part of the mind that plays a huge role in our conscious behaviour. This may not be entirely negative for intellectuals who subscribe to a version of reality embedded in language, since criticism has in many cases led to systematic investigation [scientific methodology], and there is by now an increasing body of evidence pointing to the existence of an unconscious drift/urge/motive in all organisms [e.g. as noted in the essay about Biological Constraints in Learning by Operant Conditioning, and in studies of priming, along with observations of the symptomatic manifestations of certain mental disorders such as OCD and panic attacks].

Psychoanalytic theory has been modified by some of the best minds of the psychoanalytic tradition [e.g. Jung, Lacan, and some components adopted by ourselves in the conception of the model of mental life within the Organic Theory], since Freud left the questions open to dialogue over the concepts and their expansion and application across various dimensions [e.g. analysing qualitative subjective experiences of the expression of love and passion, the obsoleteness of politics in modern society, or the impact of animal studies in designing a human world]. However, across all the versions of Freud’s theories, 3 components have never been denied by any great psychoanalyst: the 3 structures first mentioned in the early Topographic Model, that is, the Unconscious, the Preconscious and the Conscious. These were later replaced by the Structural Model, the popular version that remapped and renamed the concepts: the (unconscious) id [present in all newborn infants, consisting of impulses, emotions and desires; the id demands instant gratification of all wishes and needs], the (conscious) ego [which acts as a mediator between reality and the desires of the id] and the (largely unconscious) superego [the conscience: the sense of duty and responsibility]. Adepts such as Jacques Lacan and Carl Jung rejected the Structural Model in favour of the earlier Topographic Model [which is more flexible for the development of further refined models, leaving open the option of defining the life force in ways other than the questionable specificity of the Structural Model’s id, ego and superego].

 

The 5 Psychosexual Stages

Stage I: The Oral Stage (from birth to 1 year old approximately)

On Freudian assumptions, the voracious sucking of infants is not purely nutritional: although the infant clearly has a basic need to feed, it also takes “pleasure” in the act of feeding, a feeling that Freud did not hesitate to qualify as sexual, and which is perhaps better described as “sensual” at this stage, as babies appear to enjoy the stimulation of the lips [in play] and the oral cavity, and will often happily engage in “non-nutritive sucking” when they are no longer hungry and the milk supply is withdrawn. Beyond being an intense source of bodily pleasure, an early expression of later sexuality, sucking also represents the infant’s way of expressing love for, and dependency on, its feeder [normally the mother, but it can also be another primary caregiver to whom the child is attached; hence Lacan proposed that the Oedipus and Electra complexes may not hold true for ALL cases, and that the child’s early sexual feelings may be projected onto other primary caregivers and not necessarily the direct parents]. The sucking behaviour also gives rise to a general stance that the infant takes towards the world, one of “incorporation”, the taking in of new experiences.

 

Stage II: The Anal Stage (1 to 3 years old approximately)

At the second stage, the Anal stage, the focus shifts from one end of the digestive tract to the other. This happens at around the age of 2, when the child is developing an increasing degree of autonomous control over its muscles, including the sphincters that control excretion. After the incorporative passivity and dependency of the oral stage, the child begins to take a more active approach to life [note the term “active”, also in line with Jean Piaget’s views on the development of the human child]. Sigmund Freud proposed that these themes of activity, autonomy and control play out most crucially around the anus, as the child learns to control defecation and discovers that it can control its immediate external environment, in particular its caregivers’ attention, by expelling or withholding faeces. Moreover, the child takes a sort of sadistic pleasure in this control, a form of pleasure described as “anal erotism”. An important conflict for the child during this stage involves toilet training, with struggles and disapproval arising over the parents’/caregivers’ demand that the child control its defecation according to particular rules. However, the anal stage represents a set of themes, struggles, pleasures and preoccupations that cannot be reduced in any simple way to toilet training, contrary to the caricatures of Freud sometimes drawn by students lacking the linguistic subtlety to grasp mental life and the models that govern it.

 

Stage III: The Phallic Stage (3 to 6 years old approximately)

Gradually, though still in the early childhood years, the primary location of sexual pleasure and interest shifts from the anus to the genitals: the little boy starts to become fascinated with his penis, while the little girl, on the other side of the gender register, develops a fascination with her clitoris. However, this stage is known as “phallic” and not “genital” because Freud maintained that both sexes were focused on the male organ; “phallus” refers not to the actual physical organ, the anatomical penis, but to its “symbolic value”. Briefly explained, in the phallic stage the little boy understands that he has the penis [with its symbolic value] which the little girl lacks, and develops the belief that he could possibly lose it. In contrast, the little girl does not have a penis and wishes to have one.

This is the very first time that the difference between the sexes comes into play in childhood development, and the contrast between masculinity and femininity really becomes an issue for the child. It is also the first stage at which Freud’s psychosexual theory recognises sexual differences, and it marks the crucial point at which children become gendered beings [between the ages of 3 and 6].

The little boy’s and the little girl’s differing relations to the phallus [remember: its “symbolic value”, not the actual organ] play a vital role in the drama that unfolds within the family during this stage, somewhere around the age of 3 to 5. It has been dubbed the “Oedipus complex”, after the Greek legend in which Oedipus unwittingly murders his father and marries his mother. The boy desires his mother, his original love-object [remember the attachment period in Bowlby’s theory, along with breastfeeding], and is consequently envious of his father, who seems to have her to himself. The boy’s fearful recognition that he could lose his penis [symbolically: his “masculinity”], his “castration anxiety”, becomes focussed on the idea that the competing male for the love of the mother [the father] could inflict this punishment on him if the boy’s sexual feelings and desire for the caregiving mother were recognised. So, faced with this fear, he renounces and represses those sexual feelings and desires, and instead identifies with the father, becoming his imitator rather than his rival. In this process, the boy learns about masculinity and internalises the societal rules and norms [e.g. about relationships] that the father represents [the development of the Superego, a sense of duty and responsibility, i.e. “conscience”, takes place, as the Structural Model suggests].

In the case of the little girl, matters are slightly different: the developing child soon feels her lack of a penis keenly (“penis envy”) and blames the mother for leaving her so grievously unequipped; the father then becomes her primary love-object [the “Electra complex”, the counterpart of the “Oedipus complex”], and the mother her rival.

A similar process to the little boy’s now takes place in the little girl’s realm, resulting in the repression of her sexual feelings, desires and love, and a shift to identification with her mother, and hence with femininity. However, given that the girl is not under any “castration” threat, this process occurs under much less emotional pressure than in the little boy’s case. Consequently, perhaps due to this difference in emotional pressure, Freud proposed not only that the Electra complex was resolved less conclusively and with much less complete repression in girls than in boys, but also that girls tend to internalise a conscience [superego] that is in some ways weaker and less prohibitive and punitive than boys’. It is not surprising that such a controversial claim about girls has been highly criticised, especially with no scientific evidence to back it up; this is perhaps also one reason why Freud’s account of Oedipal [Electra] conflict in women has been the subject of much revision [e.g. by Jacques Lacan].

 

Stage IV: Latency (6 years old to puberty)

After the upheavals of the Oedipus and Electra complexes, the sexual drives go into a prolonged “semi-hibernation”. During the pre-pubertal school years, children engage in much less sexual activity, and their relationships with others are also desexualised. Instead of desiring their primary caregivers and original love-objects, their parents, children now begin to identify with them, having structured their understanding of the world. For Freud, this sudden interruption of childhood sexuality is largely a result of the massive repression of sexual feelings that concluded the phallic stage. One of the main consequences of this repression is that children come to completely forget their earlier sexual feelings, a major source [Freud claimed] of our general amnesia for early childhood experiences. Institutional settings with their own social models, such as formal schooling, reinforce the repression of sexuality during latency, leading children to focus their energies instead on mastering “culturally valued” knowledge and skills. Freud observed that the desexualisation of latency-age children was less complete among so-called “primitive” peoples.

 

Stage V: The Genital Stage (from the onset of puberty to death)

The latency period of forced or socially imposed sexual repression ends with the biologically driven surge of sexual energy that accompanies puberty. This marks the final stage of psychosexual development, which, if all the previous stages were successfully completed, leaves the person with the capacity for mature love with sexual feelings. It is important to note that the focus of sexual pleasure shifts once more to the genitals, as it was before the stage of latency [during the phallic stage (3 to 6 years old)]; now, however, it is fused with the capacity for genuine and sensible affection for the object of desire [and not simply immature sexual feelings from an inadequately developed brain being projected at the most easily accessible caregiver].

In addition, both sexes are now invested in their own genitals rather than sharing a focus on the “symbolic value” of the penis, as occurred during the phallic stage. The Genital Stage therefore marks the end of the “polymorphous perversity” of childhood sexuality. However, these earlier erotic impulses have not completely vanished; they are instead subordinated to genital sexuality, often finding expression in other, subtler ways [e.g. sexual foreplay].

According to the genetic model of psychosexual stages, we pass through each of the psychosexual stages on the way to maturity. However, we do not pass through them unscathed; there are many ways in which people encounter difficulties at a particular stage [and are unable to progress successfully], and when this happens a “fixation” develops. A fixation is simply an unresolved difficulty involving the characteristic issues of that particular stage, and, according to the Freudian developmental perspective, it leads to a fault-line in the personality.

If the individual failed to receive proper and reliable nurturance and gratification during the oral stage, or alternatively was over-indulged, a fixation on that stage may develop. It is believed that when such a person is confronted with certain forms of stress, they may revert to the characteristically immature ways of dealing with the world typical of that stage [at that particular point in development]; Freud referred to this process as “regression”.

In some cases, fixations may lead to full-fledged mental disorders: oral fixations are linked to depression and addictions, anal fixations to obsessive-compulsive disorder, and phallic fixations [in severe cases] to hysteria. Fixations [generally later countered by Reaction Formation] do not simply represent forms of behaviour and thinking that people regress to when faced with difficulties; the whole personality [thought structure], or “character”, the term Freud preferred, may be organised around the themes of the stage at which the person is most strongly fixated. As a result, Freud proposed a set of distinct stage-based character types:

(i) Oral Characters

This category of character tends to be marked by passivity and dependency [think of the sheep metaphor], and is liable to use relatively immature ego defences such as denial.

(ii) Anal Characters

Anal characters tend to be inflexible, stingy, obstinate and orderly, with a preference for defence mechanisms such as the isolation of affect [hiding their feelings] and reaction formation.

(iii) Phallic Characters

Phallic characters are generally impulsive, vain and headstrong [think of the alpha-male prototype], with a preference for a defensive style that favours repression.

It is important to note that this 3-part typology is the closest that the psychoanalytic theory of personality comes to explaining individual differences in personality in terms of early childhood experiences. This phase of development is also pivotal to the other 2 theories discussed here, which likewise attribute the foundations of fundamental psychological structures to infancy and childhood, although all three acknowledge the individual’s ability to shape their own mind and correct their own problematic traits through reflection; indeed, as mentioned in the section on John Bowlby’s theory of attachment, mothers with high reflective abilities were able to reshape the internal working models of their children’s attachment style and subsequent emotional development. It should also be noted that these 3 theorists, although different in their perspectives, were inspired by each other’s work: the idea of attachment itself was inspired by Freud’s pre-oedipal claims, and Jean Piaget, like Sigmund Freud, belonged to the school of thought that viewed the mind as an “active” entity in its own development and creation, not a “passive” entity generated by a ball of soft matter acting like a junction box with scripts for stimuli.

 

Psychoanalysis, then and now

One of the main claims of Freudian theory is that much of what motivates us to move forward in life is determined by the unconscious, and since these unconscious processes cannot be measured in the way countable things can [such as moles, weight, fingers, teeth, sheep, cattle, etc.], researchers confined to the reductionist mindset of bare empiricism often claim [without much understanding, or skill in discourse and philosophy] that belief in Freudian ideas is precisely that: belief, rather than a mechanical model based on empirical evidence [e.g. medicine, physics, surgery, chemistry, biology, and the other hard sciences].

However, while Freud’s views are almost impossible to test with reductionist quantitative methods, his theories and claims have influenced many psychologists who work with different methodologies, and the unconscious processes of the brain are increasingly supported by emerging fields that focus on the physiology of the brain [e.g. cognitive neuroscience].

To illustrate one of those views that are hard to test empirically, consider the Freudian notion of “Reaction Formation”. It is assumed, for example, that if an individual is harshly toilet trained as a child [by strict parents], the Freudian prediction would be that the person becomes “anally retentive” [i.e. excessively neat and tidy]. However, if in some way we recognise such tendencies in ourselves [which itself presupposes well-developed reflective and perspective-taking abilities by Piaget’s standards], perhaps even unconsciously, we may react against them [Reaction Formation occurs] and actively become very untidy.

This suggests that we are in control of ourselves and have the ability to reverse the effects of our upbringing and early childhood experiences, which in turn means that it is impossible to predict a child’s development precisely, despite the fact that the first 6 years from birth are supposedly critical in determining later personality formation [self-reflective people save themselves from the mediocrity of the masses].

Freudian theory has been of immense importance in pointing out 2 possibilities. One is that early childhood can be immensely important in affecting and determining later development [a position also adopted by other major theorists, as we have seen with Bowlby]; the other is that we can be driven by unconscious needs and desires of which we are not aware [until exposed to the right environmental stimuli that release them from their hidden depths]. Thus, it is assumed that if we do not complete one of the childhood psychosexual stages successfully, this could reflect itself later in adult disorders such as neurotic symptoms, yet we would not be aware of the source or cause of the problem. The only way to come to terms with these problems, embedded in the depths of the individual’s psyche with more saliency than the minor cognitive schemas for basic environmental interactions [e.g. making a cup of tea or a sandwich], is through close, intensive sessions of psychoanalysis (see Picture G), in which the analyst peers into the unconscious to try to unravel [discover] the problems from the patient’s childhood development that are causing the current difficulties.


PICTURE G. The psychoanalyst tries to uncover unresolved childhood issues to find the causes of the patient’s current problems.

Whatever its weaknesses, psychoanalytic theory remains the most complete theory in terms of depth and detail in capturing the essence of the human mind [the soul as metaphor, or psyche], and today there are still many who believe that psychoanalytic theories are fundamental to understanding human development, with many theoreticians having brought forward variations on and alternatives to Freud’s proposals on some controversial issues [e.g. Jacques Lacan, John Bowlby and Carl Jung], while many of his proposals have also led to the scientific study of unconscious mental processes.

_____________________________________

 

*****

 

Références

  1. Ainsworth, M.D.S. & Wittig, B.A. (1969). Attachment and exploratory behaviour of one year olds in a strange situation. In B.M. Foss (Ed.) Determinants of infant behaviour, vol. 4. New York: Barnes and Noble.
  2. Ainsworth, M.D.S. (1963). The development of infant-mother interaction among the Ganda. In B.M. Foss (Ed.) Determinants of infant behaviour (Vol. 2). London:Methuen; New York: Wiley.
  3. Ainsworth, M.D.S. (1967). Infancy in Uganda: Infant care and the growth of love. Baltimore, MD: Johns Hopkins University Press.
  4. Ainsworth, M.D.S., Bell, S.M. & Stayton, D.J. (1971). Individual differences in Strange Situation behaviour of one year olds. In H.R. Schaffer (Ed.) The origins of human social relations. New York: Academic Press.
  5. Ainsworth, M.D.S., Bell, S.M. & Stayton, D.J. (1974). Infant-mother attachment and social development: Socialisation as a product of reciprocal responsiveness to signals. In M.P.M. Richards (Ed.) The introduction of the child into a social world. London: Cambridge University Press.
  6. Ainsworth, M.D.S., Blehar, M.C., Waters, E. & Wall, S. (1978). Patterns of attachment: Assessed in the strange situation and at home. Hillsdale, N.J.: Lawrence Erlbaum.
  7. Anderson, J. W. (1972). Attachment behaviour out of doors. In N. Blurton Jones (ed.), Ethological Studies of Child Behaviour. Cambridge: Cambridge University Press.
  8. Anderson, S.W., Bechara, A., Damasio, H., Tranel, D., Damasio, A.R. (1999). Impairment of social and moral behaviour related to early damage in human prefrontal cortex. Nat Neurosci, 2(11), 1032-7
  9. Baillargeon, R and DeVos, J. (1991). Object permanence in young infants: further evidence. Child Development, 62, 1227-46.
  10. Bar-On, D., Eland, J., Kleber, R.J., Krell, R., Moore, Y., Sagi, A., Soriano, E., Suedfeld, P., van der Velden, P.G. & Van Ijzendoorn, M.H. (1998). Multigenerational perspectives on coping with the Holocaust experience: an attachment perspective for understanding the developmental sequelae of trauma across generations. International Journal of Behavioural Development, 22, 315-38.
  11. Belsky, J., Spritz, B. & Crnic, K. (1996). Infant attachment security and affective-cognitive information processing at age 3. Psychological Science, 7, 111-114.
  12. Berlin, L.J. & Cassidy, J. (2003). Mothers’ self-reported control of their preschool children’s emotional expressiveness: A longitudinal study of associations with infant-mother attachment and children’s emotion regulation. Social Development, 12, 477-495.
  13. Bernier, A. & Meins, E. (2008). A threshold approach to understanding the origins of attachment disorganisation. Developmental Psychology, 44, 969-982.
  14. Blakemore, S.J., Den Ouden, H., Choudhury, S., Frith, C. (2007). Adolescent development of the neural circuitry for thinking about intentions. Social Cognitive and Affective Neuroscience, 2(2), 130-9
  15. Bliss, J. (2010). Recollections of Jean Piaget. The Psychologist, 23, 444-446.
  16. Boden, M. A. (1979). London: Fontana.
  17. Borke, H. (1975). Piaget’s mountains revisited: Changes in the egocentric landscape. Developmental Psychology, 11, 240-3.
  18. Bower, T.G.R. (1982). Development in Infancy (2nd edn). San Francisco: W. H. Freeman.
  19. Bowlby, J. (1958). The nature of the child’s tie to his mother. International Journal of Psycho-Analysis, 41, 251-269.
  20. Bowlby, J. (1969/1982). Attachment and Loss, vol. 1: Attachment (2nd edn). New York: Basic Books.
  21. Bowlby, J. (1988). A Secure Base: Clinical Applications of Attachment Theory. London: Routledge.
  22. Bretherton, I & Waters, E. (eds) (1985). Growing points of attachment theory and research. Monographs of the Society for Research in Child Development, 50, nos 1-2.
  23. Bronfenbrenner, U. (1979). The Ecology of Human Development. Cambridge, MA: Harvard University Press.
  24. Bryant, P. E. and Trabasso, T. (1971). Transitive inferences and memory in young children. Nature, 232, 456-8.
  25. Butler, M., Retzlaff, P., & Vanderploeg, R. (1991). Neuropsychological test usage. Professional Psychology: Research and Practice, 22, 510-512
  26. Carlson, E.A. (1998). A prospective longitudinal study of attachment disorganisation/disorientation. Child Development, 69, 1107-28.
  27. Cassidy, J. (1994). Emotion regulation: Influences of attachment relationships. In N. Fox (Ed.) The development of emotion regulation: Biological and behavioural constraints (pp. 228-250). Monographs of the Society for Research in Child Development, 59 (2-3, Serial No. 240).
  28. Cohen, L. J. & Campos, J. J. (1974). Father, mother and stranger as elicitors of attachment behaviour in infancy. Developmental Psychology, 10, 146-54.
  29. Danner, F. W. and Day, M. C. (1977). Eliciting normal operations. Child Development, 48, 1600-6.
  30. De Wolff, M.S & Van Ijzendoorn, M.H. (1997). Sensitivity and attachment: a meta-analysis on parental antecedents of infant attachment. Child Development, 68, 571-91.
  31. Demakis, G. J. (2003). A meta-analytic review of the sensitivity of the Wisconsin Card Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology, 17, 255-264
  32. Diamond A. (2002). Normal development of prefrontal cortex from birth to young adulthood: cognitive functions, anatomy, and biochemistry. In: Stuss DT, Knight RT, editors. Principles of frontal lobe function. New York: Oxford University Press. P 466-503
  33. Donaldson, M. (1978). Children’s Minds. London: Fontana.
  34. Dozier, M., Stovall, K.C., Albus, K.E. & Bates, B. (2001). Attachment for infants in foster care: The role of caregiver state of mind. Child Development, 72, 1467-77.
  35. Eling, P., Derckx, K., & Maes, R. (2008). On the historical and conceptual background of the Wisconsin Card Sorting Test. Brain and Cognition, 67, 247-253
  36. Fearon, R.P., Bakermans-Kranenburg, M.J. & Van Ijzendoorn, M.J. (2010). The significance of insecure attachment and disorganisation in the development of children’s externalising behaviour: A meta-analytic study. Child Development, 81, 435-456.
  37. Fifer, W. (2010). Prenatal Development and risks. In G. Bremner & T. Wachs (Eds.), The Blackwell Handbook of Infant Development. Oxford: Wiley/Blackwell.
  38. Flavell, J.H. (1996). Piaget’s legacy. Psychological Science, 7, 200-203.
  39. Fonagy, P., Steele, H. & Steele, M. (1991). Maternal representations of attachment during pregnancy predict organisation of infant-mother attachment at one year of age. Child Development, 62, 891-905.
  40. Fonagy, P., Steele, M., Steele, H., Higgitt, A. & Target, M. (1994). The theory and practice and resilience. Journal of Child Psychology and Psychiatry, 35, no.2, 231-57.
  41. Fox, N. (1977). Attachment of Kibbutz infants to mother and metapelet. Child Development, 48, 1228-39.
  42. Fox, N., Kimmerly, N.L. and Schafer, W. D. (1991). Attachment to mother / Attachment to father: a meta-analysis. Child Development, 62, 210-25.
  43. Gestsdottir, S. & Lerner, R. M. (2008). Positive Development in Adolescence: The development and role of intentional self-regulation. Human Development, 51, 202-224.
  44. Giedd, J.N., Blumenthal, J., Jeffries, N.O., Castellanos, F.X., Liu, H., Zijdenbos, A., et al. (1999). Brain development during childhood and adolescence: a longitudinal MRI study. Nat Neurosci, 2, 861-863
  45. Ginsburg, H. and Opper, S. (1979). Piaget’s theory of intellectual development: An introduction. Englewood Cliffs, NJ: Prentice-Hall.
  46. Grant, D.A. and Berg, E.A. (1948). A behavioural analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card sorting problem. Journal of Experimental Psychology, 39, 404-411
  47. Grossman, K.E., Grossman, K., Huber, F. & Wartner, U. (1981). German children’s behaviour towards their mothers at 12 months and their fathers at 18 months in Ainsworth’s ‘strange situation’. International Journal of Behavioural Development, 4, 157-81.
  48. Gruber, H. & Vonèche, J.J. (1977). The Essential Piaget. London: Routledge & Kegan Paul.
  49. Heaton, R.K., Chelune, G.J., Talley, J.L., Kay, G.G., & Curtis, G. (1993). Wisconsin Card Sorting Test manual: Revised and expanded. Odessa, FL: Psychological Assessment Resources
  50. Hepper, P. (2007). Prenatal Development. In A. Slater & M. Lewis (Eds), Introduction to Infant Development (2nd edn; pp. 41-62). Oxford: Oxford University Press.
  51. Hughes, P., Turton, P., Hopper, E., McGauley, G.A. & Fonagy, P. (2001). Disorganised attachment behaviour among infants born subsequent to stillbirth. Journal of Child Psychology and Psychiatry, 52, 791-801.
  52. Inhelder, B. and Piaget, J. (1958). The growth of logical thinking from Childhood to Adolescence. London: Routledge & Kegan Paul.
  53. Jacobsen, T., Hibbs, E. and Ziegenhain, U. (2000). Maternal expressed emotion related to attachment disorganisation in early childhood: a preliminary report. Journal of Child Psychology and Psychiatry, 41, 899-906.
  54. Jahoda, G. (1983). European ‘lag’ in the development of an economic concept: a study in Zimbabwe. British Journal of Developmental Psychology, 1, 113-20.
  55. Kirsch, S.J. & Cassidy, J. (1997). Preschoolers’ attention to and memory for attachment-relevant information. Child Development, 68, 1143-1153.
  56. Kochanska, G. (2001). Emotional development in children with different attachment histories: the first three years. Child Development, 72, 474-90.
  57. Levine, L.V., Tuber, S.B., Slade, H. & Ward, M.J. (1991). Mothers’ mental representations and their relationship to mother-infant attachment. Bulletin of the Menninger Clinic, 55, 454-469.
  58. Lewis, M., Feiring, C. & Rosenthal, S. (2000). Attachment over time. Child Development, 71, 707-20.
  59. Lewis, M., Feiring, C., McGuffoy, C. and Jaskir, J. (1984). Predicting psychopathology in six-year-olds from early social relations. Child Development, 55, 123-36.
  60. Lhermitte, F. (1983) “Utilization Behaviour” and its relation to lesions of the frontal lobes. Brain, 106, 237-255
  61. Lyons-Ruth, K., Alpern, L. & Repacholi, B. (1993). Disorganised infant attachment classification and maternal psychosocial problems as predictors of hostile-aggressive behaviour in the pre-school classroom. Child Development, 64, 527-585.
  62. Lyons-Ruth, K., Repacholi, B., McLeod, S. & Silva, E. (1991). Disorganised attachment behaviour in infancy: Short-term stability, maternal and infant correlates and risk-related subtypes. Development and Psychopathology, 3, 377-396.
  63. Main, M & Solomon, J. (1990). Procedures for identifying infants as disorganised / disoriented during the Ainsworth Strange Situation. In M.T. Greenberg, D. Cicchetti & E.M. Cummings (Eds.) Atttachment in the preschool years (pp. 121-160). Chicago; University of Chicago Press.
  64. Main, M. & Hesse, E. (1990). Parents’ unresolved traumatic experiences are related to infant disorganised attachment status: Is frightened and/or frightening parental behaviour the linking mechanism? In M.T. Greenberg, D. Cicchetti, & E.M. Cummings (Eds.) Attachment in the preschool years (pp.161-182). Chicago: University of Chicago Press.
  65. Main, M. & Solomon, J. (1986). Discovery of a disorganised/disoriented attachment pattern. In T.B. Brazelton & M.W. Yogman (Eds.) Affective Development in Infancy. Norwood, NJ: Ablex.
  66. Main, M., Kaplan, N. & Cassidy, J. (1985). Security in infancy, childhood, and adulthood: a move to the level of representation. In I. Bretherton and E. Waters (eds), Growing Points of Attachment Theory and Research. Monographs of the Society for Research in Child Development, 50, nos 1-2.
  67. Martorano, S. C. (1977). A developmental analysis of performance on Piaget’s formal operations tasks. Developmental Psychology, 13, 666-72.
  68. Masangkay, Z.S., McCluskey, K.A., McIntyre, C. W., Sims-Knight, J., Vaughn, B. E. and Flavell, J.H. (1974). The early development of inferences about the visual percepts of others. Child Development, 45, 237-46.
  69. McGarrigle, J. and Donaldson, M. (1974). Conservation accidents. Cognition, 3, 341-50.
  70. Meins, E., Fernyhough, C., Fradley, E. & Tuckey, M. (2001). Rethinking maternal sensitivity: Mothers’ comments on infants’ mental processes predict security of attachment at 12 months. Journal of Child Psychology and Psychiatry, 42, 637-648.
  71. Melhuish, E. (1993). A measure of love? An overview of the assessment of attachment. ACPP Review & Newsletter, 15, 269-75.
  72. Meltzoff, A and Moore, M. (1983). Newborn infants imitate adult facial gestures. Child Development, 54, 702-9.
  73. Meltzoff, A.N and Moore, M. K. (1989). Imitation in newborn infants: exploring the range of gestures imitated and the underlying mechanisms. Development Psychology, 25, 954-62.
  74. Miller P, Wang XJ (2006) Inhibitory control by an integral feedback signal in prefrontal cortex: A model of discrimination between sequential stimuli. Proc Natl Acad Sci USA, 103(1), 201-206
  75. Miller, P.H. (1993). Theories of Developmental Psychology (3rd edn). Englewood Cliffs, NJ: Prentice-Hall.
  76. Miyake, K., Chen, S.J. and Campos, J.J. (1985). Infant temperament, mother’s mode of interaction and attachment in Japan: an interim report. In I. Bretherton and E. Waters (eds), Growing Points of Attachment Theory and Research. Monographs of the Society for Research in Child Development, 50, 276-97.
  77. Norman, D.A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behaviour. (Center for Human Information Processing Technical Report No. 99, rev. ed.) In R.J. Davidson, G.E. Schartz, & D. Shapiro (Eds.), Consciousness and self-regulation: Advances in research, (pp. 1-18). New York: Plenum Press
  78. O’Connor, T.G. & Croft, C.M. (2001). A twin study of attachment in preschool children. Child Development, 72, 1501-11.
  79. Oppenheim, D., Sagi, A. and Lamb, M.E. (1988). Infant-adult attachments on the kibbutz and their relation to socioemotional development 4 years later. Developmental Psychology, 24, 427-33.
  80. Pauli-Pott, U. & Mertesacker, B. (2009). Affect expression in mother-infant interaction and subsequent attachment development. Infant Behaviour and Development, 32, 208-215.
  81. Phelps, J.L., Belsky, J. & Crnic, K. (1998). Earned security, daily stress and parenting: a comparison of five alternative models. Development and Psychopathology,10, 21-38.
  82. Piaget, J and Inhelder, B. (1956). The Child’s Conception of Space. London: Routledge & Kegan Paul.
  83. Piaget, J. (1929). The Child’s Conception of the World. New York: Harcourt Brace Jovanovich.
  84. Piaget, J. (1936/1952). The Origin of Intelligence in the Child. London: Routledge & Kegan Paul.
  85. Piaget, J. (1951). Play, Dreams and Imitation in Childhood. London: Routledge & Kegan Paul.
  86. Piaget, J. (1955). The child’s construction of reality. London: Routledge and Kegan Paul.
  87. Radke-Yarrow, M., McCann, K., DeMulder, E., Belmont, B., Martinez, P. & Richardson, D.T. (1995). Attachment in the context of high-risk conditions. Development and Psychopathology, 7, 247-265.
  88. Rothbaum, F., Weisz, J., Pott, M., Miyake, K. & Morelli, G. (2000). Attachment and culture: security in the United States and Japan. American Psychologist, 55, 1093-105.
  89. Schaffer, H. R. (1996). Social Development. Oxford: Blackwell Publishers.
  90. Schaffer, H.R. & Emerson, P.E. (1964). The development of social attachments in infancy. Monographs of the Society for Research in Child Development,
  91. Shayer, M. and Wylam, H. (1978). The distribution of Piagetian stages of thinking in British middle and secondary school children: II. British Journal of Educational Psychology, 48, 62-70.
  92. Shayer, M., Kuchemann, D.E. and Wylam, H. (1976). The distribution of Piagetian stages of thinking in British middle and secondary school children. British Journal of Educational Psychology, 46, 164-73.
  93. Slater, A. & Bremner, G. (2011). An Introduction to Developmental Psychology (2nd edn). Oxford: Blackwell.
  94. Smith, P. K., Cowie, H. & Blades, M. (2003). Understanding Children’s Development (4th edn; pp. 388-416). Oxford: Blackwell.
  96. Sowell ER, Thompson PM, Holmes C.J., Jernigan, T.L., Toga A.W. (1999). In vivo evidence for post-adolescent brain maturation in frontal and striatal regions. Nat Neurosci, 2, 859-861
  97. Spangler, G. & Grossman, K.E. (1993). Biobehavioural organisation in securely and insecurely attached infants. Child Development, 64, 1439-1450.
  98. Stern, D.N. (1985). The interpersonal world of the infant: A view from psychoanalysis and developmental psychology. New York: Basic Books.
  99. Stevenson-Hinde, J. & Verschueren, K. (2002). Attachment in childhood. In P.K. Smith and C.H. Hard (eds), Blackwell Handbook of Childhood Social Development. Oxford: Blackwell.
  100. Sylvia, K. & Lunt, I. (1981). Child Development: an Introductory Text. Oxford: Basil Blackwell.
  101. Takahashi, K. (1990). Are the key assumptions of the ‘strange situation’ procedure universal? A view from Japanese research. Human Development, 33, 23-30.
  102. True, M.M., Pisani, L. & Oumar, F. (2001). Infant-mother attachment among the Dogon of Mali. Child Development, 72, 1451-66.
  103. Van Ijzendoorn, M.H. & Kroonenberg, P.M. (1988). Cross-cultural patterns of attachment: a meta-analysis of the Strange Situation. Child Development, 59, 147-56.
  104. Van Ijzendoorn, M.H. & De Wolff, M.S. (1997). In search of the absent father – meta-analyses of infant-father attachment: a rejoinder to our discussants. Child Development, 68, 604-9.
  105. Van Ijzendoorn, M.H. (1995). Adult attachment representations, parental responsiveness, and infant attachment: A meta-analysis on the predictive validity of the Adult Attachment Interview. Psychological Bulletin, 117, 1-17.
  106. Van Ijzendoorn, M.H., Moran, G., Belsky, J., Pederson, D., Bakermans-Kranenburg, M.J. and Kneppers, K. (2000). The similarity of siblings’ attachment to their mother. Child Development, 71, 1086-98.
  107. Van Ijzendoorn, M.H., Sagi, A. & Grossman, K.I. (1999). Transmission of Holocaust experiences across three generations. Symposium presentation at IXth European Conference on Developmental Psychology, Spetses, Greece, 3 September.
  108. Van Ijzendoorn, M.H., Schuengel, C. & Bakermans-Kranenburg, M.J. (1999). Disorganised attachment in early childhood: meta-analysis of precursors, concomitants, and sequelae. Development and Psychopathology, 11, 225-49.
  109. Vondra, J.I. & Barnett, D. (1999). Atypical attachment in infancy and early childhood among children at developmental risk. Monographs of the Society for Research in Child Development, 64(3), serial no. 258.
  110. Waters, E., Hamilton, C.E. & Weinfeld, N.S. (2000). The stability of attachment security from infancy to adolescence and early adulthood: general introduction. Child Development, 71, 678-83.
  111. Waters, E., Vaughn, B.E., Posada, G. and Kondo-Ikemura, K. (1995). Caregiving, cultural and cognitive perspectives on secure-base behaviour and working models. Monographs of the Society for Research in Child Development, 60, nos 2-3.
  112. Weinraub, M. & Lewis, M. (1977). The determinants of children’s responses to separation. Monographs of the Society for Research in Child Development, 42, 1-78.
  113. Weisner, T.S. & Gallimore, R. (1977). My brother’s keeper: child and sibling caretaking. Current Anthropology, 18, 169-90.
  114. Willatts, P. (1989). Development of problem solving in infancy. In A. Slater and G. Bremner (eds), Infant Development. Hillsdale, NJ: Lawrence Erlbaum.
  115. Wimmer, H. and Perner, J. (1983). Beliefs about beliefs: representations and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13, 103-28.

Danny d’Purb | DPURB.com


Essay // History on Western Philosophy, Religious cultures, Science, Medicine & Secularisation

Mis à jour le Mercredi, 25 Janvier 2023

Part I: Western Philosophy

The fact that philosophy’s focus has never remained static over time makes its history very complex, with the added possibility that many of the early writers may have been philosophers before they were historians. The world’s main philosophical trends and traditions can nevertheless be traced with a decent amount of precision, bearing in mind that the ruling philosophy of any period is determined by the socio-cultural climate and economic context [of when it was written and published].

The first Western philosophers, starting with Thales of Miletus (c.620-c.555 BC), were cosmologists who made inquiries into the nature and origin of all things; what defined them as a new type of thinker was that their speculations, unlike those before them, were purely naturalistic and not based on or guided by myth or legend. The tradition of Western philosophy originated around the Aegean Sea and southern Italy in the 6c BC, in the Greek-speaking world that saw its philosophical traditions and teachings blossom with Plato (c.428-c.348 BC) and Aristotle (384-322 BC), who have remained highly influential in Western thought and who probed virtually all areas of knowledge; no distinction then separated theology, philosophy and science.

As the centuries passed, Christianity grew into a major religious and socio-cultural force in Europe (2-5c), and apologists such as Augustine of Hippo (354-430) began to synthesise the Christian world-view with ancient philosophy, a tradition that continued with St Thomas Aquinas (1225-1274) and throughout the Middle Ages.

With the Scientific Revolution of the 16c and 17c, the physical sciences began to assert their authority as a field of their own and grow separate from theology and philosophy. A new age of Rationalist philosophers, notably Descartes (1596-1650), based their work on the minute analysis and interpretation of the philosophical implications of the ground-breaking new scientific discoveries and knowledge of the time. The 18c produced the empiricist school of thought of John Locke and David Hume (1711-1776) in the search for the foundations of knowledge, and the century closed with Immanuel Kant (1724-1804), who developed a strong synthesis of rationalism and empiricism as a school of philosophy. Further, the 19c saw the development of positivist philosophy, inspired by and based on the scientific method, alongside American pragmatism [and the competing philosophies of Utilitarianism and Marxism]. Later came the philosophy of existentialism, built on the works of Søren Kierkegaard (1813-1855), and in the 20c the discipline of psychology firmly established itself as a field separate from philosophy [including many branches such as neuroscience, psychiatry, cognitive-behavioural psychology, etc.].

 

The 20c and Western Philosophy’s influence across civilisation

Perhaps due to its wide use in maintaining reason among intellectuals and society, philosophy had fragmented into precise and specific branches by the 20c [philosophy of mind, philosophy of science, philosophy of religion, philosophy of medicine…]. At its core, however, the emphasis remained on analytic and linguistic philosophy, due to the huge influence of Ludwig Wittgenstein (1889-1951).

Indian philosophy, for example, shares similarities with some aspects of Western philosophy in its foundations in the development of logic, beginning with the Nyaya school founded by Gautama (fl. 1c). The tenets of most schools were codified into short aphorisms (sutras), commented upon by later philosophers across southern Asia and India. A particular emphasis was placed on linguistic expression and the nature of language, believed to be as important as in the West but different in theme, since Indian thought was greatly enhanced by the early development of linguistics through Sanskrit grammar, and on the nature of knowledge and its acquisition. In modern times, Indian philosophy has seen increasing Western influence, especially from the social philosophies of the utilitarian schools, which inspired a number of religious and socio-cultural movements such as the Brahmo Samaj. In the 20c, Anglo-American linguistic philosophy formed the basis of research, with added influence from European phenomenology present in the works of scholars such as KC Bhattacharya, known for his method of “constructive interpretation”, through which ancient Indian philosophical systems are studied as matters of modern philosophy. Bhattacharya was interested in the problematic of the apparently material universe that the “mind” generates, and encouraged an immersive cosmopolitanism in which Indian systems of philosophy were modernised through assimilation and immersion rather than blind imitation of Western ideas – fairly similar to the work of Arthur Schopenhauer [See: Philosophy Review: “The World as Will and Idea”, by Arthur Schopenhauer (1818)].
The trend of taking Western philosophy as inspiration continued to be disseminated by intellectuals in the East. Chinese philosophy, which first appeared during the Zhou Dynasty (1122-256BC), also experienced Western influence in the 20c, most notably through the introduction of Marxism, which became China’s official political philosophy. Around the same period, a New Confucian movement arose, attempting to synthesise the traditions of West and East [traditional Confucian values with Western democracy and science].

As for the African continent, starting from the Middle-East and North-Africa, it may be unsurprising that Western values or philosophy had no major influence in the Islamic territories and Muslim world who had been subjugating non-Muslin civilisations with violent wars [jihad] in the name of their God. The major European incursions and hence influence in the Arab world comes from the time of Napoleon I’s invasion of Egypt (1798) which led to the promotion of Western philosophy in the area for a short time before a backlash from Islamic circles called for a religious and politically-oriented philosophy to counter foreign domination.

Regarding African philosophy, it remains a subject of intense debate among intellectuals and cultured circles whether such a thing exists, along with what the term ‘African philosophy’ may include: many scholars, for example, associate the term with the communal values, beliefs and world-views of traditional Black African oral cultures, highlighting a rich, long and sometimes violent tradition of indigenous African philosophy [stretching back in time] with tales of supernaturalism and communally derived ethics. What seems certain is that African philosophy is unlike the Western, Indian, Chinese and Arabic traditions, as there is very little in the way of written African philosophical tradition before the modern period. However, a logical question remains: if African philosophy consists of works created within the geographical area that constitutes Africa, then perhaps all the writings of the ancient Egyptians qualify as African, as do Christian apologists of the 4-5c such as St Augustine of Hippo. Indeed, to push the argument further, the whole world’s cultures and societies could be qualified as African, since the scientific evidence indicates that all modern humans descend from populations that evolved in Africa before migrating outward.

allafricans

_______________________________________________________

 

Part II: Religious Cultures

religiouschoices

Image: The Atlantic

 

SigmundFreudOnReligion

Sigmund Freud, the main driving force behind the psychological movement focused on the “Human Mind”, was an atheist, unlike Isaac Newton, who was a devout Christian with complex and heterodox private beliefs

The world’s cultures are generally classified into the five major religious traditions:

  • Buddhism
  • Islam
  • Hinduism
  • Judaism
  • Christianity

Buddhism

Buddhism, a tradition of thought and practice, originated in India around 2500 years ago, inspired by the teaching of the Buddha (Siddhartha Gautama). His teaching is summarised in the ‘Four Noble Truths’, which conclude with the claim that there is a path leading to deliverance from the universal human experience of suffering. One of its main tenets is the law of karma, which states that good and evil deeds result in the appropriate reward or punishment in this life or in a succession of rebirths.

SONY DSC

Dharma day commemorates the day when Buddha made his first sermon or religious teaching after his enlightenment

Division

Dating from its earliest history, Buddhism is divided into two main traditions.

  • Theravada Buddhism adheres to a strict and narrow interpretation of early Buddhist writings, in which salvation is possible only for the few who accept the severe discipline and effort necessary to achieve it.
  • Mahayana Buddhism is the more ‘liberal’ form and makes concessions to popular piety by seemingly diluting the degree of discipline required for salvation, claiming that it is achievable for everyone instead. It introduces the doctrine of the bodhisattva (or personal saviour). The spread of Buddhism led other schools to develop, namely Chan or Zen, Tendai, Nichiren, Pure Land and Soka Gakkai.

 

Theravada Buddhism in South and South-East Asia

Though nearly eradicated in its original birthplace, Theravada Buddhism has become a significant religious force in Burma, Cambodia, Laos, Sri Lanka and Thailand. Traditionally, it is believed that missions sent by the Indian emperor Ashoka in the 3c BC introduced Buddhism to the region. While the evidence is not conclusive, it is generally assumed that many variations of Hindu and Buddhist traditions were scattered across South-East Asia up to the 10c. Theravada Buddhism acquired more influence from the 11c to the 15c through growing contacts with Sri Lanka, where the movement was outward-looking. Buddhist states arose in Burma (now Myanmar), and others soon followed in Cambodia, Laos, Java and Thailand, including the Angkor state in Cambodia and the Pagan state in Burma. During the modern period [with the exception of Thailand, which was never colonised], imperial occupation, Christian missionaries and the Western world-view challenged Theravada Buddhism [the strict version of Buddhist philosophy] in South-East Asia.

Mahayana Buddhism in North and Central Asia

Mahayana Buddhism, the form commonly practised in China, Tibet, Mongolia, Nepal, Korea and Japan, dates from about the 1c, when it arose as a more liberal movement within Buddhism in northern India, focusing on various forms of popular devotion.

Tibetan Buddhism

Orthodox Mahayana Buddhism and Vajrayana Buddhism (a Tantric form of Mahayana Buddhism) were transmitted to Tibet through missionaries invited from India during the 8c. Today’s popular Tibetan Buddhism places an emphasis on the appeasement of malevolent deities, pilgrimages and the accumulation of merit. Since the Chinese takeover and the Dalai Lama’s flight into exile in India in 1959, however, Buddhism in Tibet has been drastically repressed.

Chinese Buddhism

Buddhism was introduced to China from India in the 1c AD via the central Asian oases along the Silk Route, and had established a notable presence there by the end of the Han Dynasty (AD 220). By the 9c Buddhism had become so successful that the Tang Dynasty saw it as ‘an empire within the empire’ and persecuted it in 845, after which only the Chan and Pure Land schools remained strong, drawing closer and finding harmony with each other. Buddhism and other religions were nearly extinguished under the Marxist government of Mao Zedong (1949 onwards), when the lands of China were nationalised and Buddhist monks were forced into secular employment. Since 1978, Buddhism and other religions have seen a revival in China.

***

Allah

Islam

Islam is simply Arabic for ‘submission to the will of God (Allah)’ and is the name of the religion founded in Arabia during the 7c through a controversial prophet known as Muhammad. Islam relies on prophets to establish its doctrines, which it believes have existed since the beginning of time; prophets such as Moses and Jesus were sent by God to provide the guidance necessary for the achievement of eternal reward, and the culmination of this succession is, for Muslims, the revelation to Muhammad of the Quran, the ‘perfect Word of God’.

Beliefs and traditions

There are five religious duties that make up the founding pillars of Islam:

  • The shahadah (profession of faith) is the honest recitation of the two-fold creed: ‘There is no god but God’ and ‘Muhammad is the Messenger of God’.
  • The salat (formal prayer) must be said at fixed hours five times a day while facing towards the city of Mecca.
  • The payment of zakat (‘purification’) [a form of religious tax by the Muslim community] which is regarded as an act of worship and considered as the duty of sharing one’s wealth out of gratitude for God’s favour, according to the uses laid down in the Quran [such as subjugation of all non-Muslims, the imposition of violent and controversial Sharia law (a section of Islam as a political ideology which dictates all aspects of Muslim life with severe repercussions if transgressed), learning to adapt behaviour to protect Islam at all cost even if it means deceiving (‘Taqqiya’), etc]
  • Fasting (saum) must be observed during the month of Ramadan.
  • The pilgrimage to Mecca, known as the Hajj, must be performed at least once during one’s lifetime. Beyond these duties, the sacred law of Islam applies to all aspects of Muslim life, not simply religious practices: it is described as the Islamic way of life and prescribes the way for a Muslim to fulfil the commands of God and reach heaven. The cycle of festivals, such as Hijra (Hegira), the start of the Islamic year, and Ramadan, the month in which Muslims fast during daytime, are two of the best-known practices still misunderstood by mainstream media.

Divisions

Although all Muslims believe in the ideology of Islam and its teachings from Muhammad, two basic and distinct groups exist within Islam. The Sunnis are the majority and acknowledge the first four caliphs as Muhammad’s legitimate successors. The Shiites make up the largest minority movement in the Muslim world and view the imam as the principal religious authority. A number of subsects and derivatives also exist, such as the Ismailis (one group, the Nizaris, regard the Aga Khan as their imam), while the Wahhabis, a movement focused on reforming Islam, began in the 18c.

Today Islam remains one of the fastest growing religions – probably due to the high birth rate of third world North Africa where it originates. Islam also inculcates strong adversity towards non-muslims, preaching various doctrines such as the subjugation of all non-Muslims into slaves, sexual slavery (Koran 33:50), forced conversation, childhood indoctrination, honour killings and jihad (a war in the name of Islam that guarantees salvation) along with mass migration to promote Islam – and today about 700 million Muslims exist throughout the World.

Since Islam was founded their war on non-muslim civilisation has been relentless and ongoing. During the earlier centuries, the European continent was heavily attacked where Muslim warriors stole, killed, raped and took thousands of slaves from the European continent, including many women as sexual slaves. About 1 million slaves were taken from the Christian world in Europe in order to be put in the hands of the Caliph, who ordered that virgin Christian blonds were to be taken from Spain for him each year.

Marché aux Esclaves Fabbi & Gerome Middle-East Moyen-Orient Islam.jpg

Images: (i): Marché d’Esclaves par Jean-Leon Gerome (1886) | (ii): Marché aux esclaves par Fabio Fabbi (1861 – 1946)

ISIS, the extremist group also go by the Muslim confession of faith, with the message “There is no God but Allah and Muhammad is the Messenger of Allah” on their flag, and fight to re-establish the archetypal Islamic form of governance [the caliphate]. ISIS who are considered as “extremists” justify their actions through endless quotations from the Koran and Sunna [i.e. examples of Prophet Muhammad’s actions that are to be followed by Muslims]. ISIS also implement the standard Islamic response to captured enemies [convert, pay tax or die] as enshrined in the Code of Umar attributed to one of Muhammad’s sucessors as “Commander of the Faithful”; as for the beheadings of disbelieving enemies it is a practice in direct obedience to Koran 8:12: “I will cast terror into the hearts of those who disbelieved, so strike (them) upon their necks and strike from them every fingertip.” and also Koran 47:4, where we can quote: “Therefore when ye meet the Unbelievers (in fight), strike off their heads; at length; then when you have made wide slaughter among them, carefully tie up the remaining captives”: thereafter (is the time for) either generosity or ransom; until the war lays down its burdens.”

We know that ISIS fighters regularly rape women, and Muhammad had his word on rape and sexual slavery in the Koran (33:50), the two trusted sources of Islamic traditions (ahadith) Sahih Muslim and Sahih Bukhari both relate an incident where Muslim warriors were raping some captive women [whom they intended to sell for ransom] while taking care to observe “coitus interruptus” [the withdrawal of the penis before climax]. These warriors asked Muhammad whether their act was religiously lawful, and his answer was shocking in his callousness and its implications for later Muslim behaviour during war: “It does not matter if you do not do it (withdraw before climaxing), for every soul that is to be born up to the Day of Resurrection will be born.” (Sahih Muslim 33:71, see also Sahih Bukhari 34:176:2229). Indeed, when one would expect the perfect example to Muslims to be furious and command them to stop while taking the women in his protection, instead he instructs his followers to do to the women whatever they desired. Even more shocking is the fact that Muslim tradition states that the following verse of the Koran was revealed precisely to ease the qualms of Muslim warriors about having sex with enslaved captives: “Also (prohibited are) women already married, except those whom your right hands posses” (Koran 4:24). Hence, in the world of Allah, if “your right hand possesses” a woman, sex with her is totally lawful even if she is married. 
The Koran also guides Muslim thought on unbelievers [Kaffirs / infidels]: “are pigs” (5:60); “are asses” (74:50); “Have a disease in their hearts” (2:10); “Are hard-hearted” (39:22); “Impure of hearts” (5:41); “Are deaf” (2:171); “Are blind” (2:171); “Are unjust” (29:49); “Make mischief” (16:88); “Focus only on outward appearance” (19:73-74); “Are impure” (8:37); “Are niggardly” (4:37, 70:21); “Are the worst of men” (98:6); “Are in a state of confusion” (50:5); “Are the lowest of the low” (95:5); “The vilest of animals in Allah’s sight” (8:55); “Are dumb” (2:171, 6:35, 11:29); “Are scum” (13:17); “Are guilty” (30:12, 77:46); “Sinful liars” (45:7); “Allah despises them” (17:18); “Allah has cursed them” (2:88, 48:6); “Allah forsakes them” (32:14, 45:34). Hence, victory is unlikely to be achieved for non-muslims as long as they cannot accept the true nature and motives of Muslims guided by Islam; solutions to countering Islam will always fail if society continues to assume that all the terror is not about Islam when the expansion of Islam is clearly at the very heart of what ISIS fights for.

The constant clash with enlightened movements of the Christian West, with intellectuals such as Dr Bill Warner who initiated the movement for the study of political Islam to help break down and propagate important facts about the ideology of Islam’s political techniques in subjugating global non-Muslim societies, have started to gain major attention from the intellectual crowd [who are active on media platforms such as Twitter, a controversial platform that uses its administrative rights dictatorially, known to restrict freedom of speech, research & factual information that oppose liberal opinions, and many researchers from accessing their archived ‘tweets’ and ‘retweets’, affecting their work and research – a direct breach of Human Rights as specified by Article 10 of the Human Rights Act 1998 – and many have questioned the practice over possibilities of World War III being caused by the USA’s unethical technological monopoly over other Western nations data. Saddam Hussein was assaulted militarily by the UN after breaching human rights]

 

womeninthehadith

Status of Women in the Hadith [purely based on the life, habits & actions of Muhammad]

 

 

Islam remains a controversial religions tradition while also being the only religion with a “manual to run a civilisation” as Dr. Bill Warner phrased it, in the Sharia [an Islamic set of doctrines in managing a civilisation – politics, culture, philosophy and economy] which at its deeper core includes the war on other civilisations through jihad, the subjugation of all non-Muslims, the destruction of all non-Islamic historical heritage, forced circumcision of both sexes and a whole set of violent and radical forms of Islamic lifestyle requirements that include violent and sometimes fatal repercussions [for ‘transgressing‘]. Repeatedly France has profoundly rejected Islam as a dangerous religious practice and culture that is incompatible with the values of French society & culture; however the obsolete system of management that is politics remains an atavistic barrier to banning Islam due to the concept of ‘political correctness’ – an invalid ideology created by the most corrupt & untrustworthy adepts of the obsolete practice of ‘politics‘ [for reasons that are now being scrutinised in the name of change]. The late Christopher Hitchens was also a prominent speaker on secularisation and particularly focused on countering the atavistic Islamisation of the West which threatens personal liberty, freedom of expression, education, innovation, development, cohesion and socio-cultural creativity due to its rigid doctrines.

 

 

 

 

 

 

islam

It is quite obvious nowadays that the majority of mediocre and pathetic politicians from the West of our generation prefer aiming for a prize for peace, and are more scared of being seen as politically incorrect than the destruction of their own people, heritage and civilisation since they dodge these questions and pretend not to see the alarming situation while refusing to relocate the excessive foreign mass every time it has piled up – a heap of incompatible and unskilled people who cannot assimilate waiting to be diplomatically relocated. From history, it seems that only the brave have had the courage to tackle those problems, but when they had done so, they were portrayed as the evil ones, when their actions simply seemed to reflect those of the defenders of Western civilisation, one built and rooted in Christian heritage and the intellectual values of the enlightenment.

Evil, aggressive & violent third-world religious practices should be prohibited in non-Islamic Christian territory to protect the native population, just as pagans [Muslims, for example] forbid and persecute Christians on Islamic territory since to them it is protecting their heritage and their religious beliefs against the non-Muslim invaders (Kaffirs). Moreover Islam has never lied, everything is in the Koran, it is written in black and white that they must kill the ‘Kaffir’ [non-Muslim] and their ultimate goal is a total Islamic world, and all that their prophet Muhammad did [e.g. sexual slavery, decapitation of non-Muslims, the destruction of all other cultures and non-Muslim heritage, forced conversion (Koran 8:39) along with the use of deception to infiltrate other cultures via the Jihad technique [which can be achieved by Taqqiya, a technique for lying and deceiving all enemies of Islam (non-Muslims) in order to gain their trust and then promote the values ​​of Islam] is sacred and should be reproduced without discussion.

Martyrs of Otranto - 813 inhabitants killed in 1480 for refusing to renounce Christianity.jpg

Les saints martyrs d’Otrante ou saints martyrs otrantins sont environ 800 habitants (le chiffre de 813 est souvent évoqué) de cette ville du Salento tués le 14 août 1480 par les Turcs conduits par Gedik Ahmed Pacha pour avoir refusé de se convertir à l’islam après la chute de leur ville. Leur canonisation a eu lieu le 12 mai 2013 place Saint-Pierre. Elle a été prononcée par le pape François. / Traduction(EN): The Otranto martyrs are about 800 inhabitants (the figure of 813 is often mentioned) of this city of Salento killed on August 14, 1480 by the Turks led by Gedik Ahmed Pasha for having refused to convert to Islam after the fall of their city. Their canonization took place on May 12, 2013 in St. Peter’s Square. It was pronounced by Pope Francis.

 

Les 800 crânes et os des martyrs d'Otranto en exposition.jpg

Les 800 crânes et os des martyrs d’Otranto en exposition: Environ 800, selon les estimations, ont eu le choix entre se convertir à l’Islam ou mourir, ils ont choisi la mort. Leurs dépouilles ont été transportées à la cathédrale et placées dans la chapelle des martyrs dans une vitrine en verre derrière l’autel en souvenir de leur sacrifice. / Traduction(EN): The 800 Skulls and Bones of the Martyrs of Otranto on Display: An estimated 800 were given the choice to either convert to Islam or die; they chose death. Their remains were taken to the cathedral and placed in the Chapel of the Martyrs in a glass-fronted case behind the altar as a reminder of their sacrifice.

Moreover Muslims who define themselves as moderate cannot do anything to help non-muslims since they too have submitted to the ideology of Islam by being muslims, whether they know it or not; muslims who call themselves “moderate” have no legitimacy to change the writings of Islam, and it is also said in the Koran that no one has the right to change the writings or to deny the orders of their prophet Muhammad who is a total and final authority for Muslims, so there is no diplomacy as such with Islam, because all diplomacy to Islam is considered as the stupidity/ignorance of their enemy [non-muslims or “Kaffirs”] to be exploited to promote Islam and dominate non-muslim civilisations through infiltration, mass migration and reproduction with women of non-Muslim civilizations to promote & expand Islam. It is important to note that all muslims abide by the very same book, the Koran, which preaches the same messages and values to all muslims. Recep Erdogan a fervent Muslim did clearly state: “The term ‘moderate Islam’ is ugly and offensive. There is no moderate Islam. Islam is Islam.” In a poem read by Erdogan, we can quote the following, “The mosques are our barracks, the domes our helmets, the minarets our bayonets, and the faithful our soldiers.”

Diplomacy masked under the term “Political Correctness” could eventually be the downfall of non-Muslim civilisation when dealing with Islam. During the history of mankind, defending and fighting the Islamic oppressors used to be called war, now in a generation of ignorance many seem to see it as “Islamophobia”. Islam is anti-Western, anti-Christian, and against anything that is not Islamic and pro-Muslim brotherhood.

Jihad vs Crusades
 

Islam is a society of warriors and they do not hide this fact, it is the ignorance of other civilisations that they exploit globally [fairly similarly to what the Jews do, another bedouin tradition from North Africa] and those who are ignorant due to their lack of knowledge on the writings and philosophies of Islam pay the consequences violently in more ways than one. By the writings of the Koran, and by the analysis of their technique of subjugation, it is therefore almost impossible to trust Muslims, because their religious text ensures that non-Muslims cannot trust them because their words can always be lies [Taqqiya / deception to be used as a war technique as instructed in the Koran against non-muslims], and ultimately they have no power over Islamic instructions themselves because they are forced to follow the Koran’s words to the letter, and if they do not do so, they would be eligible to be murdered by the ‘Ummah’ [Muslim Brotherhood]. It is even well written in the Koran (4: 144) that Muslims should not take non-Muslims as friends because they would give their god Allah a reason to punish them, and also (Koran 3:28) that those who take non-Muslims for friends instead of Muslims will not have the protection of their god. So, ultimately Islam is a civilisation that is based on its own expansion where all blows are allowed to destroy Kaffirs (non-Muslims) and the Muslim existence is based on war and their prophet, who gives them permission to take women of other civilisation as sexual slaves because it is seen as part of the holy war to spread Islamic civilisation (9:5).

 

To good muslims abiding by the Koran, our western politicians are very likely perceived as corrupt, ignorant and unscrupulous Kaffirs [non-muslims], i.e. ignorant primates who contribute to tear apart and shatter their own civilisation to then parade in the mainstream Jewish press who shape the opinion of the mass mediocrity of the majority by portraying these bureaucrats as the guardians of peace and diplomats who want an understanding with a civilisation [Islamic] that is not based and has no place in their text and philosophy for understanding between different religious faiths/traditions [e.g. crucifix images and symbols of Christianity are banned in many Islamic countries where many Christian houses are marked and burned by Muslims].

Mullah Krekar stated it clearly; some politicians understand but they do not really want to understand: Islam is not like Christianity, because Islam is a political movement and the Bible is not similar to the Koran which has 500 verses about politics and ruling and about its Sharia laws and justice system. Hence in Islam it is impossible to separate politics and religion, because they are one. So, we can conclude here that Islam is unique because it is a political movement and not just a religion. At its core, Islam is about the conquest [by all means possible] and the subjugation and destruction of all non-muslim civilisation and heritage, because in the end it is Islam and its ‘Ummah’ (community) that must dominate the world – this is the revelation of their text, the Koran. Hence, Islam being a bedouin warrior religious political movement and culture that has never stopped waging war on non-muslim civilisations shows that chivalry in war [specially defensively] must be revised and considered by the non-Islamic Christian West, as a necessary and noble act in the protection and expansion of our own people and civilization.

Antoine Leiris 13 Novembre Bataclan d'purb dpurb site web.jpg

Geographical management by exploring the logic of the « Organic Theory » involves prioritizing our own organisms [i.e. those who are part of, have become part of, and have the skills, attributes, values, sensibilities and sense of belonging to thrive in our environment and also contribute to the continuity and growth of our people and society]. Hence, as an act of honour, Muslims could consider relocating their whole community on islamic territory to prevent further wars and murders. Using myself as an example, if I was a burden to Western Europe because of my religious beliefs, maladaptive needs, education, intelligence, organic composition, philosophical perspectives, traditions, psycholinguistic heritage and national outlook, then I would change geographical location to one that is more suited to myself. But since, I am of 100% Franco-British heritage and would not be able to thrive in a different environment other than Western Europe, I live here and have fully assimilated, thus, the concept of « geographical management », which is simply to bring together organisms sharing similar beliefs, philosophy, culture, vision, intellect and identity for peace, harmony and mutual understanding.

Muslims would certainly face less problems and stress from religious and cultural differences if they left non-muslim environments and civilisation and moved back on Islamic territory with an islamic community, because the West is a product of Christian civilisation and heritage, does not want to become Islamic and has more to lose on the long term in welcoming the followers of Muhammad with the ideology of Islam since it fragments it own people and societies due to an incompatible system of values. Former Muslim, Magdi Allam thought that Mosques are the terror factories of Islamic terrorism and that open borders must be stopped to defeat islamic terrorism; that we should stop believing in the myth of “moderate Islam”. Allam also declared that in Sousse, a Tunisian Islamic ISIS terrorist massacred 45 tourists who were sunbathing on the beach, the Tunisian government ordered the closure of 80 mosques calling them ‘terrorist hideouts’. Hence, Allam made the point that if the Muslim governments warn that mosques are ‘dens of terrorism’, we cannot behave more Islamic than the Islamists, granting blindly the mosques to the Islamic militants. He said: “It is time for our government to stop chasing the chimera of sponsoring mosques of a ‘moderate Islam,’ adding: “The truth is that there is only one Islam because there is only one Koran and one Muhammad.” Allam dismissed claims deporting terrorists reduced terror, stating the mosques would just produce replacements. “If we scratch the tip of the iceberg without undermining the iceberg, it will not save us from catastrophe. 
In this case, the iceberg is a ‘terror factory’ that starts from the hate preaching in mosques and sites where the Islamic holy war is promoted, the practice of brainwashing which transforms the faithful into robots of death, leads to enlistment and training to arms, and culminates in a terror attack”, Allam argued, and questioned: ““What sense does it make to raise the level of alert in our ports if we continue to have open borders that bring hundreds of thousands of illegal immigrants without papers and without identification?”

Islamophobia en France défendu par des Gauchistes ignorants.jpg

Campaigns against islamophobia are generally held by islamic migrants who may themselves be ignorant about the atrocities of their religion on non-muslim civilisation or simple-minded leftist movements who do not understand islamic doctrines and their history of wars against classic civilisation and have become brainwashed puppets in encouraging speech suppression techniques on constructive criticism of Islam. Islamophobia or Islamorealism?

There is no such thing as Islamophobia for non-Muslims but rather "Islamorealism". Any non-Muslim who is not yet Islamophobic is either ignorant [brainwashed by leftist media who are themselves ignorant and have not studied Islamic literature], stupid, or suicidal towards his own civilisation. If non-Muslims read and understood the Koran, then they should all logically be Islamophobic, because there is no reason or long-term benefit for a non-Muslim to support or protect Islam. Islam is about war, and about the destruction of all non-Muslim civilisations by every possible means for a total Islamic world; that is the goal. Indeed, the surest way to reach heaven according to Islam is to die in the war for its expansion; those who die of natural causes are not assured a place in heaven as are those who die fighting the Jihad war. On reaching heaven, we can quote: "Those who kill and are killed for the sake of Allah (Sura 3:156; 9:111)" and those who "emigrate (participate in hijra) for the purpose of 'cultural jihad' (Sura 4:100)". Muhammad was a ruthless murderer of non-Muslims who dedicated his life to expanding the Islamic empire, yet Islam depicts him as the perfect Muslim whom all Muslims should imitate, since his every action is perfection, i.e. "Sunna".

Jihad violence, beheadings and sexual slavery are not extreme within Islam; they are part of a Bedouin-styled warrior tradition in which the killing of non-Muslims is commonplace and promoted as the 'perfect' Islamic path, based on the life of Muhammad, for ensuring the Islamisation of the world while cleansing it of the impure "Kaffirs" [non-Muslims] and subjugating the unbelievers (Koran 9:29). We can come to this deduction from the statement of the French Islamist Mohammed Merah's mother at a family meeting after her son had murdered seven people in three expeditions: « Mon fils a mis la France à genoux. Je suis fière de ce que mon fils vient d'accomplir ! » [French for: "My son brought France to its knees. I am proud of what my son has just accomplished!"]. According to one of Mohammed Merah's brothers, Abdelghani, the radicalisation of his brothers Abdelkader and Mohammed and his sister Souad is the result of the "fertile ground" cultivated by his parents; his mother taught them, for example, that "Arabs were born to hate Jews".

“Whoever changes his Islamic religion, then kill him.” (Sahih Bukhari Vol 9, Book 84, Number 57)

“I have been made victorious with terror.” – Prophet Muhammad (Sahih Bukhari 4.52.220)

Hence, the idea that terror has no religion would come as a bit of a surprise to a certain prophet. As Sam Harris also pointed out, "When it says in the Qu'ran (8:12), 'Smite the necks of the infidels', some people may read that metaphorically… nowhere in these books does God counsel a metaphorical or otherwise loose interpretation of his words." "Quran (5:33) says that I can be crucified. Should I fear crucifixion? Or, is that phobic?" asked Bill Warner. "We must stop the stupid blindness to jihadism, which consists in saying that it has nothing to do with Islam", declared Salman Rushdie. "Islam is not a race… Islam is an ideology or simply a set of beliefs, and it is not Islamophobic to declare that it is incompatible with liberal democracy," observed Ayaan Hirsi Ali, who also added, "there is a huge difference between being tolerant and tolerating intolerance."

Sharia is the supreme code of ethics [justice system] in Islam, whereas in the societies of the civilised world we tend to have a constitution. To Islam, however, our constitution is considered "Jahiliyah", that is, ignorance: it is not Muslim, it is not Islam, it is not Allah; it is man-made, so it must be destroyed and taken down. This process of course does not happen overnight; it is continuous and gradual. The Sharia does not accommodate the Kaffir (non-Muslim) other than to subjugate him; under the Sharia all non-Muslims are less than Muslims, and the Kaffir is to be a "Dhimmi", a sort of third-class citizen.

When one civilisation invades another, and the Islamic civilisation is a supremacist civilisation, the land it emigrates to must become Islamicised. For example, Muslim refugees with health problems demand that Sharia law be obeyed and that a woman not be seen or touched by a male physician [and vice versa]; this is the process of Sharia law and of subjugation, where one civilisation struggles against another in order to prevail. We have also witnessed, for the first time in history, a mass movement of Muslims into non-Muslim civilisation, and it must be clearly understood that migration (hijra) is a fundamental part of Islam, considered "Sunna" [sacred & perfect] since it was the path of the prophet Muhammad and thus a strong example to be repeated by all good Muslims. Hijra is indispensable to Islam's goal and central to the unrelenting war of jihad for 1400 years, a war that has laid waste to entire nations, cultures and civilisations. Since 2014, we have seen about 2.5 million Muslim refugees resettled in Germany and the rest of Europe [an amount that constitutes the population of a small country, e.g. Lithuania], and this will transform Europe forever as the population breeds and expands [as Islam preaches], overtaxing the welfare economies of its wealthiest nations and altering the cultural landscape beyond recognition. We may be witnessing the demise of Europe, and are in a position where we can observe what is happening and refrain from repeating the same mistakes.

Arrivals Refugees & Migrants to Europe.jpg

As of 21 November 2019, a total of 2,059,048 (i.e. 2m+) Refugees and Migrants had been resettled in Europe / Source: UN Mediterranean sea and land arrivals

According to the Koran, immigration ("hijra") and "jihad in the cause of Allah" are two sides of the same coin, and we can quote: "Those who have believed and those who have emigrated and fought in the cause of Allah – those expect the mercy of Allah" (Koran 2:218); "Indeed, those who have believed and emigrated and fought with their wealth and lives in the cause of Allah and those who gave shelter and aided – they are allies of one another" (Koran 8:72). In Islam, the main purpose of migration (hijra) is to wage the Jihad war on Kaffir (non-Muslim) civilisation and impose Sharia law. Under Sharia law other religions are subjected to taxes, domination and humiliation, and eventually, after enough time, everyone becomes a Muslim as Islam overcrowds the environment. This may take time, even centuries, but the beginning of the annihilation of our non-Muslim civilisation has begun because of the deference we pay to Islamic migration and Sharia by refusing to acknowledge the true goal of Islam: complete domination of all aspects of society. For example, Egypt in North Africa was entirely Christian, but today it is Islamic, with a few Christians left who will also disappear over time, since we have a clash of civilisations.

Low-skilled or unskilled mass migration encouraged by miscalculated policies leads to an organised replacement of the Western working-class population and creates competition and social instability among these classes. It also threatens to completely reshape the landscape and culture when the foreign population has a higher growth and birth rate. Western Europe is already struggling to assimilate the unskilled masses who are already here; hence the result of continued mass immigration is simply endless systemic and social instability. It is the first time in history that we have seen such a massive shift of population from Islamic lands to what they consider the Kaffir (non-Muslim) lands of the Christian West, and this will lead to a struggle over the centuries, but Islamisation must go forward if Islam is to fulfil its mission as instructed by their prophet Muhammad. The Kaffir (non-Muslim) is the unbeliever, the infidel, and everything about the Kaffir is bad according to Islam and must be taken down. As we know, Jews are taught from the Talmud that non-Jews are inferior, worthless and disposable; the Koran likewise teaches Muslim men that they are superior to the Kaffir, that Kaffir (non-Muslim) women are worth less than cattle, and that Allah has permitted them to do what they please with Kaffir women; what could possibly go wrong?

During the New Year's celebrations on 31 December 2015, a wave of collective sexual assaults, robberies, and at least two cases of rape, all directed against women, was reported across Germany, mainly in Cologne, but also in Finland, Sweden, Switzerland and Austria. In Germany, in addition to Cologne, eleven cities were affected, mainly Hamburg, Stuttgart, Bielefeld and Düsseldorf; 12 of the 16 Länder [Federal States] were affected according to an upwardly revised tally on 24 January 2016. The number of aggressors was estimated at 1,500 in Cologne alone. The attacks were coordinated and committed by groups of 2 to 40 men, described as North African or Arab; the suspects were mainly asylum seekers and/or illegal immigrants. The number of complaints in Cologne increased steadily between 4 and 21 January, from 30 on Monday 4 January to 1,088 by 17 February, involving more than 1,049 victims. The silence of the police and the media, the police laxity, the statements by the Mayor of Cologne blaming German women, and the delay in reporting the facts by the media, especially the public-service broadcasters (ARD, ZDF and others), were strongly criticised in the days that followed. Then, six weeks after the facts, the German police gave an update on the investigation: in Cologne, of the 1,088 complaints filed, 470 concerned sexual assaults and 618 robberies, assaults or injuries. According to the alleged victims and Cologne police chief Wolfgang Albers, who was forced to retire on 8 January 2016, the men responsible for the attacks were "Arab or North African in appearance", aged between 15 and 35, and did not speak German. The police report on the investigation of North African offenders states: "Since 2011, offenders from North African countries, particularly Morocco, Algeria and Tunisia, have accounted for a significant proportion of pickpocketing in Cologne. This group is prone to violence and frequently uses weapons, such as knives or tear gas."
As of the evening of 21 January 2016, the 30 suspects identified were all North African. As the investigation progressed, the German Federal Police identified 73 suspects, 18 of whom had asylum-seeker status, the others being in an illegal situation. This group included 30 Moroccans, 27 Algerians, 3 Tunisians, 1 Libyan, 1 Iranian, 4 Iraqis, 3 Syrians and 3 Germans. Only 12 of these suspects were suspected of sexual assault. On 5 April 2016, according to a report published by the local authorities, of the 153 people suspected of having committed assaults, particularly sexual assaults, during the New Year 2016, 103 were of Moroccan or Algerian nationality; 68 of them had asylum-seeker status and 18 were in an illegal situation in Germany. [See: Agressions sexuelles du Nouvel An 2016 en Allemagne]

So, we can ask ourselves whether the clueless politicians who represent non-Muslims will encounter horrific surprises when they choose to fully welcome thousands of Muslim refugees, mostly men, whom many have suggested are a Muslim "army" of migrants looking for opportunities in Western social security (free money, free housing, free education and free healthcare) and to carry out their Islamic duty, since they know that they will find a place in the Islamic communities already established across the major cities of Europe; for the most part these are not refugees facing a serious humanitarian crisis, since the number of males is significantly higher than the number of women.

Pavillon Etat Islamique à Paris.jpg

Muslim walking with the Islamic State flag in broad daylight, Paris, France.

Two of the terrorists involved in the Paris attacks entered France as "Syrian refugees", while an Islamic State (ISIS) commander was arrested in Germany while posing as a Syrian refugee. Letters from jihadists also revealed plans to hide terrorists among refugees, and in recent times ISIS has threatened to release 500,000 migrants who have sworn allegiance to Islamic State to cause chaos in Europe. It is also important to consider that the refugee crisis was ignored by neighbouring countries in the Islamic world: Qatar, the United Arab Emirates, Saudi Arabia, Kuwait and Bahrain have not offered any resettlement places to Syrian refugees, even though Saudi Arabia has about 100,000 air-conditioned tents, capable of housing 3 million people, standing empty; the Saudi Arabian King Salman instead offered to build 200 mosques in Germany. As we know, hijra and jihad work together, and there are other forms of jihad besides the jihad of violence: the jihad of speech [e.g. "Islam means peace"], the jihad of writing [e.g. accusations of Islamophobia], and the jihad of money [e.g. a Saudi Arabian prince donated millions to major educational institutions such as Harvard, Yale and other core US institutions for the purpose of cultural jihad, i.e. so that they never criticise Islam and indirectly support the progression of Sharia]. The cultural jihad, composed of the jihads of speech, writing and money, is much more powerful than the jihad of violence, since it is what brings a civilisation closer to Sharia; and Sharia annihilates a civilisation.

Law of Islamic Saturation.png

This graph shows how, over approximately 700 years, Islam grew and drowned the initial Christian population of Turkey. Note that this graph is drawn from the facts of Islamic history and is an example of one of the many societies and peoples that Islam erased.

Nowadays, Muslims do not remain in Islamic territory but migrate to Kaffir lands and involve themselves in various forms of militant political action to bring Sharia to Kaffir culture. In Islam, migration is not what we Westerners take it to be: for us, migration may simply mean an individual gain, a better job for instance; but for Islam, migration [known as "Hijra"] was the beginning of Muhammad's success, since it is through hijra [migration] that he conquered so much land and spread Islam. Our calendars are marked with B.C. and A.D., but Muslim calendars are marked with HJ (in the year of Hijra). The Muslim calendar does not begin with Muhammad's birth or death, but with Muhammad's hijra (migration) from Mecca to Medina [this shows the importance of migration in Islam for fighting the Jihad war on Kaffir (non-Muslim) civilisation]. Hijra [migration] is so important in Islam that the Muslim calendar starts with it, because it was hijra that led to the creation of Jihad in Medina, and it was Jihad that made Islam triumphant. If it were not for hijra, there would be no Islam today; hijra made Islam the fastest-growing religion in the world.

Muhammad preached Islam for 13 years and converted 150 Arabs to Islam. After he migrated to Medina, he became a politician and a great jihadist (warlord), which led every Arab in Arabia to convert to Islam and hence become Muslims. As we said, the process of Islamic conquest does not happen overnight. Islam crushed Anatolia, now known as Turkey, in 1453, but it took centuries for Sharia law to dominate Turkey fully and turn it completely Islamic; so it is a slow process, but it is a process that has always worked. For example, the Middle East used to be Christian; then it was conquered by Islam, Sharia law was implemented, and the Christians became "Dhimmis" and were eliminated over a couple of centuries. Syria, Lebanon and all the nations of North Africa (incl. Egypt) were Christian nations before Christianity was replaced with Islam. Afghanistan was Buddhist, Iran was Zoroastrian, and Pakistan was Hindu before their civilisations and cultures were consumed by Islam as a result of jihad by hijra (migration).

Hijra, Islamic Migration

Those who call themselves "moderate" Muslims may seem normal to Westerners, but it is important to understand that it takes only a few to be leaders. This does not mean that every single Muslim we encounter is unfriendly or is all about Sharia law; many may not even know what it means. However, their Imams and their leaders in the Muslim Brotherhood know, and they are the people who influence the mass: the point people who drive the dialogue in the media and influence politics towards migration and Islamic expansion to create "Eurabia". Hence, although a Muslim may be friendly to non-Muslims, all Muslims accept and abide by the Sharia laws, otherwise they would not be Muslims, because Sharia is the codification of the Koran and is the path (Sunna) of their prophet Muhammad; if a Muslim rejects Sharia, then he is rejecting the "Sunna" of Muhammad and the Koran.

Sheikh Muhammad Ayed ordered Muslims fleeing Iraq, Syria and northern Africa to show the world what a fertile culture looks like. "They have lost their fertility, so they look for fertility in their midst. We will give them fertility!" the imam said during a sermon at Jerusalem's Al-Aqsa mosque. "We will breed children with them, because we shall conquer their countries – whether you like it or not, oh Germans, oh Americans, oh French, oh Italians, and all those like you. Take the refugees!" "We shall soon collect them in the name of the coming caliphate. We will say to you: These are our sons. Send them, or we will send our armies to you", Ayed said. So, it does not seem unlikely for terrorists to exploit a refugee crisis, because it is a chance that may never be repeated. This was translated by the Middle East Media Research Institute [MEMRI], a non-profit organisation started in 1998 to monitor Arab media. Migration [hijra] is a tactical part of the Jihad war that Muhammad preached to Muslims, and hence a sacred path (Sunna) to be followed by Muslims in the Islamic conquest, i.e. the process of "hijra" [which simply means migration]. Therefore, we see that Jihad exists not only in a violent form but also in the form of migration [and mass breeding, and other political and financial ways of easing Islamisation], which also annihilates a civilisation gradually as it outnumbers the initial resident population. Once Muslims are the majority, it becomes easier to impose their rules and dominate the society through various means. This can be a very slow process, starting from a small area where Islam imposes itself [e.g. mosques and other Islamic cultural centres], but Islam has never lost its territorial gains; the growth is never-ending and eventually it drowns the native population, as it has done through 1400 years of migration, conquest, conversion and eventually complete takeover.

Islamisation of the West.jpg

Marwan Muhammad, spokesperson for the Collectif Contre l'Islamophobie en France (CCIF), said: "Qui a le droit de dire que la France dans 30 ou 40 ans ne sera pas un pays musulman? Qui a le droit? Personne dans ce pays n'a le droit de nous enlever ça. Personne n'a le droit de nous nier cet espoir là. De nous nier le droit d'espérer dans une société globale fidèle à l'Islam. Personne n'a le droit dans ce pays de définir pour nous ce qu'est l'identité Française." [French for: Who has the right to say that France in 30 or 40 years will not be a Muslim country? Who has the right? No one in this country has the right to take that away from us. No one has the right to deny us this hope. To deny us the right to hope in a global society faithful to Islam. No one in this country has the right to define for us what French identity is.] This statement shows complete indifference, even a lack of concern or respect, for the values and identity of the societies that allow Islam on their territory and in their societies; it shows that Islam is a supremacist movement that does not aim to assimilate and cannot assimilate. When a French Muslim feels that he belongs first to his foreign religious origins, he indirectly suggests that the game of "secularism" and "living together" [vivre ensemble] is over; with veils, burkinis, religious laws and sometimes weapons, Islamist groups simply send the message that they remain Muslims first and have decided to pay no attention to the culture and values of the nations that "accepted" them.

We know from Islam's history that when it migrates to another nation, that nation starts to be eaten away by the long and slow process of Sharia, and over time [even centuries] the Kaffir (non-Muslim) nation falls, as we have learned from history, because the society eventually becomes Islamic: Islam is supremacist and does not aim to assimilate but to impose itself and dominate through its Sharia laws. Mohammed Mahdi Akef, the head of the Muslim Brotherhood from 2004 to 2010, said: "I have complete faith that Islam will invade Europe and America, because Islam has logic and a mission. The jihad will lead to smashing Western Civilisation and replacing it with Islam which will dominate the world."

In a study conducted by the Berlin Social Science Centre in 2015, 73% of Muslims in France considered religious Sharia laws to be above those of the State. To reach this conclusion, the people surveyed responded "YES" to three questions: (i) Muslims must return towards the roots of the faith; (ii) There is only one interpretation of the Koran, and every Muslim must abide by it; (iii) Religious rules are more important than the law.

A wise Arab tells Muslims the truth about themselves

An unconventional and smart Arab criticises the Islamic world

Billet Retour à Bagdad : un léger vent d'espoir après 15 ans de violences [French for: Return to Baghdad: a slight wind of hope after 15 years of violence]


The 20th century saw many intellectuals and religious scholars study the Islamic texts deeply, to assess the claims considered divine authority for Muslims and the legitimacy of Muhammad as Allah's [God's] prophet. Many questionable statements and contradictory passages can be found in Islamic doctrines. On the question of man's creation by Allah, it is said that Allah created man from blood (96:1-2), then water (25:54), then clay (15:26), then dust (30:20), and also from nothing (3:47). On Kaffirs: they have lost their own souls, who will not believe (6:12); then (Allah) causes to stray whom He wills (16:93) [this seems to suggest that Allah could guide someone out of the rules of Islam for a higher purpose]. Does Allah command to do evil? The answer is no (7:28) and also yes (17:16). Will intercession be possible at the Day of Judgement? We are told "No" (2:122-123, 254) and also "Yes" (20:109). On whether the slander of chaste women can be forgiven, we are told yes (24:4-5) and also no (24:23). It is also said that Earth was created before heaven (2:29), then we are told the opposite, i.e. heaven was created before Earth (79:27-30). In Koran 3:20, we are told that if unbelievers reject the message we should leave them be, for our duty is only to "convey the message"; yet we are also told that if unbelievers reject the message we must fight them until all religion is "for Allah" (8:38-39). On the act of creation, we are told that it was an act of "bringing together" (41:11), but also that creation was an act of "splitting apart" (21:30). Regarding the identity of the first Muslim, we are told that it was Muhammad (6:14, 6:163, 39:12), then Moses (7:143), and also some Egyptians (26:51).

Ibn Umar reported Allah's messenger as saying that a non-Muslim eats in seven intestines while a Muslim eats in one intestine (Sahih Muslim vol. III, no. 5113, Chapter DCCCLXII). Abu Huraira reported Allah's apostle as saying that people should avoid lifting their eyes towards the sky while supplicating in prayer, otherwise their eyes would be snatched away (Sahih Muslim vol. I, no. 863, Chapter CLXXIII). Abu Huraira: "Allah's apostle said, if a fly falls in the vessel of any of you, let him dip all of it into the vessel and then throw it away, for in one of its wings there is disease and in the other wing there is healing" (Sahih Al-Bukhari vol. VII, no. 673). The prophet ordered them to go to the herd of camels and drink their milk and urine (Sahih Al-Bukhari vol. I, no. 234). On the topic of alcohol we can also find contradictory comments. Most non-Muslims are aware that Muslims are not supposed to drink alcohol, and from the Koran the case seems open and shut. In Koran 5:90, it is said: "O you who believe! Strong drink and games of chance and idols and divining arrows are only an infamy of Satan's handiwork. Leave it aside that you may succeed." So we can deduce here that alcohol is an infamy of "Satan's handiwork"; but in Koran 4:43, we see that Islam does not take believers to task for drinking, but only says that they should not come to pray when they are drunk. In Chapter 16 of the Koran, Allah reminds people of all the blessings that he bestows on humanity, among which he lists: "And from the fruit of the date-palm and the vine, ye get out wholesome drink and food: behold, in this also is a sign for those who are wise." (Koran 16:67).
It is important to consider that the "wholesome drink" here is not grape juice: the Arabic word is "sakaran", and a version of the same word, "sukara", is used in Koran 4:43 to describe drunkenness; so it can be translated as "intoxicating drink", which is described as Allah's blessing to humanity but is also "Satan's handiwork", a contradiction. To make things even more complicated, Muslims are told that they will drink wine (Satan's handiwork?) in paradise (Koran 47:15, 83:22).

If the following comments were made by myself or any other Westerner, they would be considered completely unacceptable; we would most likely be accused of "hate speech", be described as Islamophobic imbeciles or racists, and end up in a range of legal troubles in many parts of the so-called "civilised" world, e.g.: (i) Muslims are the worst kind of animals; (ii) Be merciful to one another but hard towards Muslims; (iii) Muslims are perverse; (iv) Strike terror into the hearts of Muslims and strike off their heads and fingertips; (v) Fight the Muslims who are near you; (vi) When Muslims make mischief against you, murder and crucify them. Yet we should ask ourselves whether these same comments, if made against non-Muslims, would be considered "hate speech", because these exact statements can be found in the Koran, directed towards those who reject Allah and his prophet Muhammad: (i) Surely the vilest of animals in Allah's sight are those who disbelieve (8:55); (ii) Muhammad is the messenger of Allah. And those with him are hard (ruthless) against the disbelievers (Kaffirs) and merciful among themselves (48:29) [according to some theologians, this is the second most important teaching of Islam, which means that Muslims are to love what Allah loves, i.e. Islam and Muslims, and hate and despise what Allah hates and despises, i.e. Kaffirs; we have a dual system here, where Muslims are to be treated in one way and non-Muslims in another, hence the separation of civilisations]; (iii) And the Jews say: Ezra is the son of Allah, and the Christians say: The Messiah is the son of Allah… Allah (himself) fights against them. How perverse are they! (9:30); (iv) I will cast terror into the hearts of those who disbelieve. Therefore, strike off their heads and strike off every fingertip of them (8:12); (v) O you who believe! Fight those of the unbelievers who are near to you and let them find in you hardness (9:123); (vi) The punishment of those who wage war against Allah and his messenger and strive to make mischief in the land is only this, that they should be murdered or crucified or their hands and their feet should be cut off on opposite sides (5:33).

As we can see, Islam has one treatment for Muslims and another for non-Muslims. When Muhammad cut off the heads of 800 Jews in Medina, to Muslims this was a great victory for Islam; to Kaffirs [i.e. non-Muslims] it was an evil act of terror. The intellectual Bill Warner argued that Islam wants to win the race to be the supreme people/civilisation while the non-Muslim civilisation just wants to tie, and on the sports field the side that wants to tie is crushed; unless the non-Muslim civilisation decides that it wants to win at all costs and prevail in the future, it will eventually be crushed and its people will become "Dhimmis", since Islam works that way, as can be seen from its 1400-year history of ruthless Islamic conquest.

Muhammad was an incredibly successful and talented speaker, warlord and military tactician who expanded his population and empire while imposing his ideology, and who taught his followers [Muslims] to put Islam before everything, including their own lives, and to deceive if necessary to protect and propagate it.

Victims of Terrorist Attacks in Western Europe.jpg

Victims of Terrorist Attacks in Western Europe since 1970 / Source: Statista

Hence, to be able to counter the Islamisation of the West, founded on Christian heritage and thought, people must know Islam; use fact-based reasoning from reliable sources [e.g. the Islamic religious texts and their history], not subjective opinions that do not affect Islam's foundation; know Islam's history of persecution and slavery; refrain from the vague and questionable concept of "political correctness" [which is simply a set of rules implemented by ignorant bureaucrats]; and discuss rational solutions to defend and prioritise our civilisation and ensure its supremacy and continuity. To counter and discourage the promotion of Islamic ideology in Switzerland, many areas have implemented a ban on the "burqa" [an enveloping outer garment worn by women in some Islamic traditions to cover themselves in public, hiding the body and the face] with fines reaching up to £8,000. The cult of Muhammad, Islam, has claimed 270 million lives in 1400 years; this is 528 people per day and about 22 people every hour, nine times more than Stalin and the German Reich combined. The university professor, Islamologist and historian Marie-Thérèse Urvoy denounced the pathos used to promote a "theology of peace" that denies Islam's violent potential, stating: "Violent ou modéré, le devoir de tout musulman est de faire triompher l'islam." [French for: "Violent or moderate, the duty of every Muslim is to make Islam triumph."] To counter Islamisation and defend our civilisation, it is important to foster debates based on critical thought and not suppress them, because it is only through debating all points of view that we can work out the truth and find a solution. We could also ask ourselves why the history of the persecution of non-Muslims by Islam is not taught in schools on a level similar to the horrors of World War II.
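The per-day and per-hour figures above follow from simple division of the two cited numbers; as a quick sanity check (taking the 270 million and 1400-year figures as given, and assuming 365-day years):

```python
# Sanity check of the per-day and per-hour figures cited above,
# taking the 270 million deaths over 1400 years as given.
total_deaths = 270_000_000
years = 1400
days = years * 365  # ignoring leap years

per_day = total_deaths / days  # ~528 people per day
per_hour = per_day / 24        # ~22 people per hour

print(round(per_day), round(per_hour))
```

This reproduces the rounded figures of roughly 528 per day and 22 per hour quoted in the text.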

***

Hinduism

Hinduism does not trace its origin to a particular founder, has no prophets, no set creed, and no institutional structure; it focuses instead on the 'right way of living' (dharma) rather than a set of doctrines, and embraces a variety of religious beliefs and practices. Variations exist across the different parts of India where it was founded; differences in practice can be found even from village to village in the deities worshipped, the scriptures used, and the festivals observed. Those of the Hindu faith may be either theists or non-theists, may revere one or more gods or goddesses, or none, and may represent the ultimate in personal (e.g. Brahma) or impersonal (e.g. Brahman) terms. There are over 500 million Hindus today.

hinduism

Beliefs

Most forms of Hinduism assume and promote the idea of reincarnation or transmigration. The process of birth and rebirth continuing life after life is termed 'samsara'. The state of rebirth (pleasant or unpleasant) is believed to be the result of karma, the law by which the consequences (good or bad) of actions carry over as life transmigrates from one form to another, influencing its character. Hindus' ultimate spiritual goal is moksha, release from the cycle of samsara.

Literature

Unlike many other religions, Hinduism regards no single text as uniquely authoritative; it rests instead on a rich and varied literature, the earliest of which dates from the Vedic period (c.1500–c.500 BC) and is known collectively as the Veda. Later (c.500 BC–AD 500) the religious law books (dharma sutras and dharma shastras) appeared; they codified the classes of society (varna) and the four stages of life (ashrama), and formed the basis of the Indian caste system. To these were added the great epics, notably the Ramayana and the Mahabharata, the latter of which includes one of the most influential Hindu scriptures, the Bhagavad Gita.

Caste

Hinduism is centrally structured around the caste system, believed to have taken shape when the first Aryans came to India, bringing a three-tiered social structure of priests (brahmanas), warriors (kshatriyas) and commoners (vaishyas), to which they added the serfs (shudras), drawn from the indigenous population of India, which may itself have been hierarchically structured. The Rig Veda (10.90) gives sanction to the class system (varna), describing each class as coming from the body of the sacrificed primal person (purusha). Orthodox Hindus regard the class system as a sacred structure in harmony with natural or cosmic law (dharma). The class system developed into the caste (jati) system which exists today; there are thousands of castes within India, based on inherited profession and on concepts of purity and pollution. The upper castes are generally regarded as ritually and philosophically purer than the lower ones. Although the practice was outlawed in 1951, a number of castes are still considered so ‘polluting’ that their members are known as ‘untouchables’ [too ‘polluting’ to be touched or meddled with]; marriage between castes is forbidden and transgressors have been known to be harshly punished.

Gods

Shiva, Vishnu and Brahma are the chief gods in Hinduism, and together form the triad (the Trimurti). Many lesser deities also exist, such as the goddesses Maya and Lakshmi. Most Hindus go on pilgrimages to local and regional worship sites, following an annual cycle of local, regional and all-Indian festivals.

Shiva: The Almighty

seigneur shiva

Shaivism is the main religious school in Hinduism and is devoted primarily to the worship of the god Shiva, who is thought to be the creator, the preserver, the transformer, the concealer and the revealer [through his blessings]. In the Smriti tradition, he is considered one of the five primordial forms of God. Shiva is often revered in the abstract form of the Shiva-lingam, and is also represented in deep meditation, or dancing the tandava in the form of Nataraja. The theonym Shiva comes from an epithet of Rudra: the adjective shiva, ‘kind, lovable’, used euphemistically for the god, who in the Rig-Veda also carries the epithet ghora, ‘terrible’.

Shiva is the god of destruction, illusion and ignorance. He represents destruction, but destruction whose aim is the creation of a new world: Shiva transforms, and leads the manifestation through the ‘stream of forms’. Shiva’s emblem is the lingam [a phallic representation], a symbol of creation associated with the yoni, a stone slab representing the female organ: the matrix of the world. Through the union of lingam and yoni, the absolute, unfolding in the world, shows that it overcomes the male–female and spiritual–material antagonisms.

shiva-lingam hinduism

The Lingam is often anointed with buffalo milk, cow milk or coconut milk and ghee (clarified butter) or surrounded by fruits, sweets, leaves and flowers as offerings of appeasement to Lord Shiva for all the pain he endured for humanity. The immensely powerful god is known for his unpredictable nature with a short, punitive and devastating temper in the face of evil and wrong, but he can also be incredibly affectionate, kind and generous to his worshippers, especially if they are righteous and devout.

The lingam represents the cosmos, and also the power to know consciousness as the axis of reality. No longer oriented towards the natural ends of life force and incarnation, the phallus erected towards the sky represents the gathering of the yogi’s energies on the sensible plane and their conversion to a subtle level. In Brahmanic Shaivism, the fundamentally phallic character of the lingam is always clearly present, both in the legends explaining the origin of this cult and in the bodily qualities occasionally attributed to the god. When portrayed in deep meditation, Shiva has his eyes half-closed, for he opens them when the world is created and closes them to end the universe and begin a new cycle.

According to legend, Shiva and Vishnu went to a forest to fight 10,000 heretics. Furious, the heretics sent against them a tiger, a snake and a fierce black dwarf armed with a club. Shiva killed the tiger [he is traditionally seen sitting on a tiger’s skin], for he is ‘master of creatures’, ‘master of the herd’ and ‘master of nature’ [Pashupati]; he tamed the snake and placed it around his neck as a collar [a symbol of control of the passions]; and he placed his foot on the black dwarf and performed a dance of such power that the dwarf and the heretics recognised him as their lord. Shiva dancing represents the universal and eternal soul radiating all energy (shakti), in particular through the symbol of destructive and creative fire. This continuous dance generates the succession of days and nights, the cycle of the seasons, and that of birth and death. Eventually, his energy will cause the destruction of the universe, but he will then recreate it. This creative dance of the world symbolises the eternal process.

Shiva and Dionysus

shiva and dionysus

Shiva & Dionysus

According to the French orientalist Alain Daniélou (October 4, 1907 – January 27, 1994), also known as “Shiva Sharan” (the protégé of Lord Shiva), a member of the French Institute of Indology and the French School of the Far East (1963 – 1977) and director of the International Institute of Comparative Music Studies in Berlin and Venice, Shiva and Dionysus derive from a common cult once shared with Europe, and he maintained that we would be swept along by India.

alain daniélou - d'purb - dpurb website

Alain Daniélou (1907 – 1994) / Source: alaindanielou.org

“In India, we can revive and understand sometimes almost completely the rites and beliefs that were those of the Mediterranean world and the Middle East in antiquity.”

– Alain Daniélou, Shiva and Dionysus, Fayard 1979

Daniélou contrasts two types of religion (one agricultural, the other urban), building on the work of Mircea Eliade. On this logic, he argues that the cult of a naturist and phallic god, assimilated to the bull, would be a universal model, but that this belief was marginalised by the expansion of monotheistic urban culture. Again according to Daniélou, not only do the two divinities, Greek and Indian, share many myths in common, but their epithets also have comparable meanings.

“[…] Dionysus is the Protogonos (the Firstborn) as Shiva is Prathamaja (the Firstborn), the ‘oldest of the gods’, also called Bhaskar (the Bright) or Phanes (the Illuminator) in the Orphic tradition. This god who teaches the fundamental unity of things is called Shiva (benevolent) or Meilichios (benevolent). He is Nisah (Bliss), the god of Naxos or Nysa. The very name of Dionysus probably means the ‘god of Nysa’ (the sacred mountain of Shiva), as Zagreus is the god of Mount Zagron. Shiva-Dionysus is also Bhairava (the Terrible) or Bromios (the Noisy), Rudra or Eriboas (the Howler). […]”

– Alain Daniélou, Shiva and Dionysus, Fayard 1979


Like Christianity and the other major religions, Hinduism too gradually spread in influence across the globe. However, 94% of people who practise Hinduism are the native Hindi-speaking population of India.

Inde : Quand les Millionnaires se Font Moines [French for: “India: When Millionaires Become Monks”]

Some Western religious scholars have proposed a possible connection between Christianity and its founding philosophies and the origins of Hindu dharma. Many Christian rites have similarities with Vedic literature, hence the position of some scholars [See: Western Historians believe Christianity might have roots in Hindu dharma]. Others have pointed to a kernel of scientific truth in a number of Hindu rituals, although solid empirical evidence is lacking [See: 20 reasons why Hinduism is a very scientific religion], and to how Hinduism anticipated many recent scientific practices through its mythological stories, such as cloning and embryo transfer [See: What are proven scientific facts that are said in Hindu mythology?].

***

judaism

Judaism

Judaism is the religion of the Jews, founded on the central belief in one God. The primary source of Judaism is the Hebrew Bible, the next most important document being the Talmud, which consists of the Mishnah (the codification of the oral Torah) along with a series of rabbinical commentaries. Jewish practice and thought, however, were also shaped by later documents and commentaries, and by the standard code of Jewish law and ritual (Halakhah) produced in the late Middle Ages.

Communal Life

UglyLeeches

Peinture : Sandrine Arbon / Painting: Sandrine Arbon

Most Jews see themselves as members of a group whose origins lie in the patriarchal period, however varied the Jewish community may be. There is a marked preference for expressing beliefs and attitudes through rituals rather than through abstract doctrine. In Jewish ritual the family is the basic unit, although the synagogue has also come to play an important role as a centre for community study and worship. The Sabbath, the period from sunset on Friday to sunset on Saturday, is a central part of religious observance in Judaism, and each year comprises a cycle of festivals and days of fasting. The first of these is Rosh Hashanah, New Year’s Day; the holiest day in the Jewish year is Yom Kippur, the Day of Atonement; others include Hanukkah and Pesach, the family festival of Passover.

Divisions

Modern Judaism is rooted in Rabbinic Judaism and has had a diverse historical development. Most Jews today are the descendants of either the Ashkenazim or the Sephardim, and many other branches of Judaism also exist. The preservation of ‘traditional’ Judaism is generally linked to the Orthodox Judaism movement of the 19th century. Other branches, such as Reform Judaism, attempt to interpret Judaism in the light of modern scholarship and knowledge, a process pushed further by Liberal Judaism – unlike Conservative Judaism, which emphasises the positive aspects of ancient Jewish tradition while attempting to modify orthodoxy.

Modern Controversies

Waves of anti-Semitic prejudice and persecution during World War II have been regular features of Western media outlets’ [mostly Jewish owned] focus, who throughout history have clashed with the Christian influenced heritage of European civilisations, and this ongoing tension between Semitic traditions/philosophies/beliefs and Western Christian-influenced cultures was to take a turn when the rise of a form of “patriotic socialism” [neither left or right, but all encompassing] nationalism across Europe was marked by the spectacular election of the talented Adolf Hitler, who had been the leader of the National Socialist party [Nationalsozialismus later tarnished as “NAZI” by a jew known as Konrad Heiden from the Social Democratic Party of Germany (Sozialdemokratische Partei Deutschlands)] in Germany, and implemented the core ideologies of National Socialism [a focus on self-sustainability and socio-cultural and economic independence while creating a healthier – psychologically & physically – nation] with Darwinian influence on policies, along with developing the arts and a philosophy centred around science and research.

Exaggeration surrounding the event known as “the holocaust” based on Communist propaganda, Global Zionist interests, along with the credulity of mediocre politicians across the globe, has today been implanted in the minds of the ignorant mass media consumer as being the “dark legacy” of Adolf Hitler when no solid evidence has ever been found of him giving any order to exterminate the jews. This exaggerated picture that the media had already been circulating to the disapproval of some leading world figures such as John Kennedy and Gandhi [Article: Quand Gandhi écrivait à son « cher ami »… Adolf Hitler], is still being reviewed by a wave of daring, talented and modern historians of whom many have questioned and challenged the credibility of the facts used for claims of gas chambers used to exterminate the Jews; revisionist have claimed that gas chambers were not present or inadequate to be used as gas chambers on most of German soil. More testimonies of camp survivors gave notes of swimming pool, orchestras, shower rooms and even a canteen, without ever mentioning gas chambers. Others explained how the media propaganda videos of mass deaths with emaciated bodies were due to the outbreak of Typhus carried by lice which was caused by low hygiene due to the Allied bombing of train tracks which restricted many cities from supplies of food, medicines & sanitation; causing the starvation and death of not only camp detainees but many German men, women and children who were scavenging the streets for food. A large amount of shower rooms in the camps on German soil were also documented as working shower rooms that were vital for hygiene and the delousing process.

English historian David Irving was jailed for his revision of events linked to Adolf Hitler while other ground breaking documentaries such as ‘The greatest story never told’ by Dennis Wise keep spreading lesser known facts that are never part of mainstream media to the new generation of the internet era who seek factual analysis over historical controversies, such as the 150 000 Jews who gave up their heritage and had firmly assimilated German society in Adolf Hitler’s Third Reich and served loyally against Bolshevism & Communism until the very end. One of the most shocking statement comes from the Jewish Rabbi Yosef Tzvi Ben Porat who thought that Hitler was right to hate the jews for what they “do” [i.e. cause instability through their various business ventures on the various systems of the countries they migrated to, e.g. media control to trigger tension and friction in fields that support their monetary and other interests].

JewExpelled.jpg

The 1290 Edict of Expulsion from England, the expulsion from France in 1306 to name a few & the Chart showing all the times throughout human history that the Jews have been expelled from the locations they had migrated to. Many books over some despicable practices regarding human sacrifices have been written by a range of  non-Jewish intellectuals and thinkers who opposed such vile ancient traditions.

Jews have long been accused of violent religious sacrifices to their blood thirsty gods that involve the sacrifice of Christian children, which they have been accused of doing over the centuries throughout history, with many mutilated corpses of young Christian children found across Europe drained of all their blood – this myth is still alive in the 21st century, as a recently published article in the Times of Israel also suggests [See: Accusation antisémite de meurtre rituel]. This is perhaps one of the many reasons why the Jews are the only group who throughout human history has been persecuted and banned from so many countries. Even after the Holocaust, there were pogroms against Jewish survivors in Poland in which the blood libel was regurgitated by the local Catholic population. A particularly notable example of this was the assault on the Jewish survivors in the Polish town of Kielce, where an outbreak of anti-Jewish violence resulted in a pogrom in which thirty-seven Polish Jews were murdered out of about two hundred survivors who had returned home after World War II. As the International Emergency Conference to Combat Antisemitism discovered, that type of incident had “something of a religious character about them.”

Studying the teachings of the Talmud may perhaps offer some hints why the Jews have been persecuted in so many Christian countries and hated by the Pope Innocent III himself. As in our languages Christians take their name from Christ, so in the language of the Talmud Christians are called Notsrim, from Jesus the Nazarene. But Christians are also called by the names used in the Talmud to designate all non-Jews: Abhodah Zarah, Akum, Obhde Elilim, Minim, Nokhrim, Edom, Amme Haarets, Goim, Apikorosim, Kuthrim.

The Talmud is the central book of modern Judaism (that is, the one that was built after the coming of Christ). It is probably the most hateful and racist religious text ever written in the history of humanity. Anything is allowed against goyim (“non-Jewish”, in Hebrew, in the singular form, “goy”) who are lowered to the rank of beasts. Christ is insulted and his name blasphemed in the most despicable ways and the Blessed Virgin described as a prostitute. Going by the ignoble mentality transmitted by such a text, it seems to reveal the reason why Ovadia Yosef, Chief Rabbi of Israel, not long ago said: “The Goïm were born only to serve us. Without it, they have no place in the world. » In the Middle Ages, when Christian societies discovered the contents of this book with horror (thanks in particular to converted Jews, see: A List of Publicly known Jews who converted to Christianity), the text was banned and burned (especially under St. Louis). Edited versions were then published by the rabbis for the “general public”. These are still the ones that can be found behind shop windows but they do not reveal the truth about Judaism as seen from the leaders of their community.

Here is a collection of some controversial extracts from the original version of the Talmud:

Hilkhoth X, 2: Baptized Jews must be put to death.
The jews teach that since Christians follow the teachings of that man [Jesus], whom the Jews regard as a Seducer and an Idolater, and since they worship him as God, it clearly follows that they merit the name of idolaters, in no way different from those among whom the Jews lived before the birth of Christ, and whom they taught should be exterminated by every possible means.
In the same book Sanhedrin (107b) we read:
« Mar said: Jesus seduced, corrupted and destroyed Israel. »
The book Zohar, III, (282), tells us that Jesus died like a beast and was buried in that « dirt heap…where they throw the dead bodies of dogs and asses, and where the sons of Esau [the Christians] and of Ismael [the Turks], also Jesus and Mahommed, uncircumcized and unclean like dead dogs, are buried. »(25)
In Iore Dea (81,7, Hagah) it says: « A child must not be nursed by a Nokhri, if an Israelite can be had; for the milk of the Nokhrith hardens the heart of a child and builds up an evil nature in him. »
In Iore Dea (153,1, Hagah) it says: « A child must not be given to the Akum to learn manners, literature or the arts, for they will lead him to heresy. »
In Zohar (1,25b) it says: « Those who do good to the Akum . . . will not rise from the dead. »
Hilkhoth X, 6: We can help goyim in need, if it saves us trouble later on.
In this way they explain the words of Deuteronomy (VII,2) . . . and thou shalt show no mercy unto them [Goim], as cited in the Gemarah. Rabbi S. Iarchi explains this Bible passage as follows: « Do not pay them any compliments; for it is forbidden to say: how good that Goi is. »
Rabbi Bechai, explaining the text of Deuteronomy about hating idolatry, says: « The Scripture teaches us to hate idols and to call them by ignominious names. Thus, if the name of a church is Bethgalia— »house of magnificence, » it should be called Bethkaria—an insignificant house, a pigs’ house, a latrine. For this word, karia, denotes a low-down, slum place. »
JESUS is ignominiously called Jeschu—which means, May his name and memory be blotted out. His proper name in Hebrew is Jeschua, which means Salvation.
MARY, THE MOTHER OF JESUS, is called Charia—dung, excrement (German Dreck). In Hebrew her proper name is Miriam.
CHRISTIAN SAINTS, the word for which in Hebrew is Kedoschim, are called Kededchim (cinaedos)—feminine men (Fairies). Women saints are called Kedeschoth, whores.
A CHRISTIAN GIRL who works for Jews on their sabbath is called Schaw-wesschicksel, Sabbath Dirt.
Eben Haezar 44, 8: Marriages between goyim and Jews are void.
Since the Goim minister to Jews like beasts of burden, they belong to a Jew together with his life and all his faculties: « The life of a Goi and all his physical powers belong to a Jew. » (A. Rohl. Die Polem. p.20)
It is an axiom of the Rabbis that a Jew may take anything that belongs to Christians for any reason whatsoever, even by fraud; nor can such be called robbery since it is merely taking what belongs to him.
In Babha Bathra (54b) it says: « All things pertaining to the Goim are like a desert; the first person to come along and take them can claim them for his own. »
In Babha Kama (113b) it says: « It is permitted to deceive a Goi. »
The Babha Kama (113b) says: « The name of God is not profaned when, for example, a Jew lies to a Goi by saying: ‘I gave something to your father, but he is dead; you must return it to me,’ as long as the Goi does not know that you are lying. »
(4) cf. supra, p.30, A similar text is found in Schabbuoth Hagahoth of Rabbi Ascher (6d): « If the magistrate of a city compels Jews to swear that they will not escape from the city nor take anything out of it, they may swear falsely by saying to themselves that they will not escape today, nor take anything out of the city today only. »
In Zohar (I, 160a) it says: « Rabbi Jehuda said to him [Rabbi Chezkia]: ‘He is to be praised who is able to free himself from the enemies of Israel, and the just are much to be praised who get free from them and fight against them.’ Rabbi Chezkia asked, ‘How must we fight against them?’ Rabbi Jehuda said, ‘By wise counsel thou shalt war against them’ (Proverbs, ch. 24, 6). By what kind of war? The kind of war that every son of man must fight against his enemies, which Jacob used against Esau—by deceit and trickery whenever possible. They must be fought against without ceasing, until proper order be restored. Thus it is with satisfaction that I say we should free ourselves from them and rule over them. »
In Choschen Ham. (425,5) it says: « If you see a heretic, who does not believe in the Torah, fall into a well in which there is a ladder, hurry at once and take it away and say to him ‘I have to go and take my son down from a roof; I will bring the ladder back to you at once’ or something else. The Kuthaei, however, who are not our enemies, who take care of the sheep of the Israelites, are not to be killed directly, but they must not be saved from death. »
And in Iore Dea (158,1) it says: « The Akum who are not enemies of ours must not be killed directly, nevertheless they must not be saved from danger of death. For example, if you see one of them fall into the sea, do not pull him out unless he promises to give you money. »
Lastly, the Talmud commands that Christians are to be killed without mercy. In the Abhodah Zarah (26b) it says: « Heretics, traitors and apostates are to be thrown into a well and not rescued. »
And in Choschen Hamm. again (388,15) it says: « If it can be proved that someone has betrayed Israel three times, or has given the money of Israelites to the Akum, a way must be found after prudent consideration to wipe him off the face of the earth. »
Even a Christian who is found studying the Law of Israel merits death. In Sanhedrin (59a) it says: « Rabbi Jochanan says: A Goi who pries into the Law is guilty to death. »
In Hilkhoth Akum (X, 2) it says: « These things [supra] are intended for idolaters. But Israelites [Jews] also, who lapse from their religion and become epicureans [Christians], are to be killed, and we must persecute them to the end. For they afflict Israel and turn the people from God. »
In Choschen Hamm. (425,5) it says: « Jews who become epicureans [Christians], who take to the worship of stars and planets and sin maliciously; also those who eat the flesh of wounded animals, or who dress in vain clothes, deserve the name of epicureans; likewise those who deny the Torah and the Prophets of Israel—the law is that all those should be killed; and those who have the power of life and death should have them killed; and if this cannot be done, they should be led to their death by deceptive methods. »
Rabbi David Kimchi writes as follows in Obadiam: « What the Prophets foretold about the destruction of Edom in the last days was intended for Rome, as Isaiah explains (ch. 34,1): Come near, ye nations, to hear . . . For when Rome is destroyed, Israel shall be redeemed. »
A JEW WHO KILLS A CHRISTIAN COMMITS NO SIN, BUT OFFERS AN ACCEPTABLE SACRIFICE TO GOD / In Sepher Or Israel (177b) it says: « Take the life of the Kliphoth and kill them, and you will please God the same as one who offers incense to Him. »
And in Ialkut Simoni (245c. n. 772) it says: « Everyone who sheds the blood of the impious is as acceptable to God as he who offers a sacrifice to God. »
AFTER THE DESTRUCTION OF THE TEMPLE AT JERUSALEM, THE ONLY SACRIFICE NECESSARY IS THE EXTERMINATION OF CHRISTIANS
In Zohar (III,227b) the Good Pastor says: « The only sacrifice required is that we remove the unclean from amongst us. »
Abhodah Zarah 22a: Do not associate with the goyim; they shed blood.
Rashi Erod.22 30: A goy is like a dog. The Scriptures teach us that a dog deserves more respect than a goy.
Kerithuth 6b p. 78: Jews are humans, not goyim, they are animals.
In Kallah (1b, p.18) it says: « She (the mother of the mamzer) said to him, ‘Swear to me.’ And Rabbi Akibha swore with his lips, but in his heart he invalidated his oath. »(4)
Every Jew is therefore bound to do all he can to destroy that impious kingdom of the Edomites (Rome) which rules the whole world. Since, however, it is not always and everywhere possible to effect this extermination of Christians, the Talmud orders that they should be attacked at least indirectly, namely: by injuring them in every possible way, and by thus lessening their power, help towards their ultimate destruction. Wherever it is possible a Jew should kill Christians, and do so without mercy. Jews must spare no means in fighting the tyrants who hold them in this Fourth Captivity in order to set themselves free. They must fight Christians with astuteness and do nothing to prevent evil from happening to them: their sick must not be cared for, Christian women in childbirth must not be helped, nor must they be saved when in danger of death.
Zohar I, 28b: The goyim are the children of the Genesis serpent.
Yebamoth 98a: All children of goyim are animals
Abhodah Zarah 35b: All daughters of unbelievers are niddah (dirty, impure) since birth.
Sanhedrin 52b: Adultery is not forbidden with the wife of a goy, because Moses only forbade adultery with “the wife of your similar”, and a goy is not a Hebrew’s similar.
Abhodah Zarah 4b: You can kill a goy with your own hands.
Hilkhoth goy X, 1: Do not make any agreement with a goy, never show mercy to a goy. You must not have pity on the goyim because it says: “You shall not look at them with pity”.
Hilkkkoth X, 1: do not save the goyim in danger of death.
Orach Chaiim 57, 6a: No more compassion should be shown for goyim than for pigs, when they are sick of the intestines.
Jalkut Rubeni Gadol 12b: The souls of the goyim come from impure spirits called pigs.
Babha Kama 113a: Jews can lie and perjure themselves, if it is to deceive or convict a goy.
Choschen Ham 26, 1: A Jew should not be prosecuted before a goy court, by a goy judge, or by non-Jewish laws.
Babha Kama 113a: Unbelievers do not benefit from the law and God has made their money available to Israel.
Pesachim 49b: It is permissible to behead goyim on the day of atonement for sins, even if it also falls on a Sabbath day.
Rabbi Eliezer: “It is lawful to cut off the head of an idiot, a member of the people of the Earth (Pranaitis), that is, a carnal animal, a Christian, on the day of atonement for sins and even if that day falls on a Sabbath day”. His disciples replied, “Rabbi! You should rather say “sacrifice” a goy. “But he replied: “In no way! For when a sacrifice is made, it is necessary to pray to ask God to accept it, whereas it is not necessary to pray when you behead someone.”
Sanhedrin 58b: If a goy hits a Jew, he must be killed, because it is like hitting God.
Chagigah 15b: A Jew is always considered good, despite the sins he may commit. It is always his shell that gets dirty, never his own bottom.
Zohar I, 131a: Goyim defile the world. The Jew is a superior being
Chullin 91b: Jews possess the dignity that even an angel does not have.
Iore Dea 151, 11: It is forbidden to give a gift to a goy, it encourages friendship.
Orach Chaiim 20, 2: Goyim dress up to kill Jews.
Shabbath 116a (p. 569): Jews must destroy the goyim books (New Testament).
Sanhedrin 90a: Those who read the New Testament (Christians) will have no place in the world to come.
THOSE WHO KILL CHRISTIANS SHALL HAVE A HIGH PLACE IN HEAVEN
In Zohar (I,38b, and 39a) it says: « In the palaces of the fourth heaven are those who lamented over Sion and Jerusalem, and all those who destroyed idolatrous nations … and those who killed off people who worship idols are clothed in purple garments so that they may be recognized and honored. »
JEWS MUST NEVER CEASE TO EXTERMINATE THE GOIM; THEY MUST NEVER LEAVE THEM IN PEACE AND NEVER SUBMIT TO THEM
In Hilkhoth Akum (X, 1) it says: « Do not eat with idolaters, nor permit them to worship their idols; for it is written: Make no covenant with them, nor show mercy unto them (Deuter. ch. 7, 2). Either turn away from their idols or kill them. »
Ibidem (X,7): « In places where Jews are strong, no idolater must be allowed to remain… »

Now, we can ask ourselves a few simple questions here, which is “Could all the people who have banned the Jews be without any reason to do so?” and “Could people simply walk around and suddenly without any reason decide to hate Jews?” and also “If this has happened to them for so many years, is it not likely that the problem is in fact with the Jews themselves?” I believe it is best to leave the audience to answer these questions and reflect on them alone. Quite surprisingly, there were strong ancient Aryan religious & mythological warrior values and motives embedded in the mind of Heinrich Himmler (the Reichsführer of the SS), the person believed to have taken the decision to exterminate the jews, i.e. the engineer of the “Holocaust” (remember the term itself is related and applicable to the human sacrifices of Jews to their god, Baal). Heinrich Himmler told his personal masseur & physician Felix Kersten that he always carried a copy of the ancient Aryan scripture, the Bhagavad Gita [See Aryan Race & Race Aryenne] with him because it relieved him of the guilt about what he was doing – he declared that he felt like the sacred warrior Arjuna, who was simply doing his duty for his people and their future without attachment to his actions [See the Documentary released in 2014: Himmler: The Decent One, which is made from a collection of letters, notes and journal entries that challenge viewers to see from the perspective of the mind of Himmler and his motivation]. We can also have a range of perspectives from the excellent documentary, Dans la tête des SS [Click here to view Part I and  Click here to view Part II] which came out in 2017 and gave a voice to SS veterans to try to “understand the incomprehensible”.

Hitler’s Shadow: In The Service Of The Führer

However, nowadays, the mainstream mindset about World War II remains stuck on the ‘extermination of the Jews by Hitler’ for most, while no evidence has ever been found of Hitler ordering the extermination of the Jews. Global urgency is given to the Zionist movement, established by the World Zionist Organisation for the creation of a Jewish homeland, which is still pivotal in most relations between Jews and non-Jews to this day, with over 14 million Jews scattered around the world.

Ultra-orthodoxes : ces Juifs français devenus religieux

History of the Jews – summary from 750 BC to Israel-Palestine conflict

Israel-Palestine conflict – summary from 1917 to present

As Michel Onfray, the post-modern and perceptive French philosopher, noted, many nowadays seem to divide every topic of civilisational discussion into a matter of “right” or “left”, which comes across as outdated: if we mention the term “Islam”, people will suggest that it is a question of the right and look at us suspiciously; if I shift my focus to the “Jewish question” [Oh la la!], then this once again will be a question of the right [e.g. if we were to ask whether the value of French secularism that bans the display of religious signs in public institutions, such as the law on the Islamic veil, should also apply to them].

La Question Juive et la Kippa

Des juifs en Europe et en France portant la kippa / Jews in Europe and France wearing the yarmulke

Along the same line of thought, the French philosopher Michel Onfray explains that this sort of stigmatisation, which forbids the freedom to think and to formulate questions, is problematic when it is a frame of mind embodied by the mass mainstream media, considered the “dominant” media and the State’s news outlet [being partially funded and/or owned by it], not for the quality of their writers, their journalism or their literary or intellectual value, but simply because they are designed to appeal to the majority of average reading brains. Fortunately, the internet is also evolving as an outlet, and with smart, active readers out there, those boring media groups and their sympathisers will not stop us from questioning, or from questioning their answers, whoever they may come from.

La France sous l'occupation allemande

La France sous l’occupation allemande / France under German occupation

The Hitler regime was not the first regime to ban and persecute the Jews, the Jews have even been banned from England in 1290 by Edward I, and also in 1306 from France by Philippe IV and these are only 2 examples. The Jews have been banned throughout a wide range of societies they moved to due to their insolence, their disrespect to the nation and the values of their heritage that encouraged the systematic destruction and enslavement of all non-Jewish civilisations, their habit of monopolising press business to distort perception and they have also been widely accused  across centuries for occult and violent rituals involving the killing of young Christian children to offer their blood to their violent pagan god. Jews have been banned in a wide range of countries since 1200 B.C until 2014 where they have recently been banned from Guatemala, which leads to about 3213 years of constant persecution and bans from countries they migrated to. In fact, they have been banned from Carthage, Rome, Egypt, Spain, Italy, Switzerland, Hungary, Belgium, Austria, Netherlands, Poland, Czech Republic, Lithuania, the Baltic States, and Russia to name a few. If people want to know the full list, they can use the internet and search “Countries where Jews were banned/expulsed” [also: Resolutions aganst Israel].

A lot of disgust and resentment towards the Jews came from Christian nations. The translation and readings of the Talmud, played a huge part in revealing why the Jews have been persecuted in so many Christian countries and hated by the  Pope Innocent III himself. In the accusations that had been multiplying in Christianity, many unexplained disappearances of children and infanticides were explained as Jewish ritual murder. A theological explanation was even put forward by Thomas de Cantimpré (around 1260), stating that the blood of Christians, particularly that of children, was coveted by the Jews for its ‘curative properties’. According to him, “it is quite certain that the Jews of each province draw lots annually to determine which community or town will send Christian blood to the other communities”.

The Martyrdom of St. Simon of Trento - Giovanni Gasparro

Image: Martyre de Saint Simon de Trente par meurtre rituel juif (The Martyrdom of St. Simon of Trento for Jewish ritual murder) par Giovanni Gasparro (2020)

Thomas added that he frequently spoke with a ‘very learned Jew, who had since converted to the Christian faith’ (perhaps Nicolas Donin of La Rochelle, who in 1240 initiated a dispute on the Talmud with Yehiel of Paris, which led to the cremation in 1242 of a large number of Talmudic manuscripts in Paris). This convert suggested to him that “one of their own, enjoying the reputation of a prophet, towards the end of his life” had predicted to them that haemorrhages (from which the Jews were supposed to suffer since the time when they called out to Pontius Pilate, “His blood be upon us, and upon our children“, a passage of the gospel attributed to Matthew called the “libellus of blood”), could only be relieved by “Christian blood” (“solo sanguine christiano“). According to Thomas of Cantimpré, the Jews, “always blind and impious”, took the words of their “prophet” literally and instituted the custom of sprinkling “Christian blood” in every province every year in order to cure their illness. However, Thomas adds, they misunderstood his words: by “solo sanguine Christiano“, the “prophet” did not mean the blood of every Christian, but that of Jesus the Christ; the only true remedy for all the physical and spiritual sufferings of the Jews would therefore be conversion to Christianity.

***

christianity

Christianity

“Mais moi, je vous dis: Aimez vos ennemis, bénissez ceux qui vous maudissent, faites du bien à ceux qui vous haïssent, et priez pour ceux qui vous maltraitent et qui vous persécutent…”

Matthieu 5:44

Traduction(EN): “But I say to you: Love your enemies and bless those who curse you, do good to those who hate you, and pray for those who mistreat you and persecute you…”

[Matthew 5:44]

“…afin que vous soyez fils de votre Père qui est dans les cieux; car il fait lever son soleil sur les méchants et sur les bons, et il fait pleuvoir sur les justes et sur les injustes.…”

Matthieu 5:45

Traduction(EN): “…that you may be sons of your Father in heaven; for he makes his sun rise on the wicked and on the good, and he makes it rain on the just and on the unjust….”

[Matthew 5:45]

Le Monde Chrétien.jpg

Christianity is a religion that developed out of Judaism, centred on the life of Jesus of Nazareth in Israel. Jesus is believed to be the Messiah or Christ promised by the prophets in the Old Testament, and to stand in a unique relation to God, whose Son or ‘Word’ (Logos) he was proclaimed to be. During his life he selected 12 men as his disciples, who after his death by crucifixion and his resurrection formed the very nucleus of the Church as a society of believers. Christians gathered together to worship God through the risen Jesus Christ, in the belief that he would return to earth to establish the ‘kingdom of God’.

Despite sporadic persecution, the Christian faith progressed quickly and spread throughout the Greek and Roman world through the witness of the 12 earliest leaders (the Apostles) and their successors. In 313 the Edict of Milan, issued under Emperor Constantine, granted Christianity toleration in the Roman Empire, and in 380 Theodosius I declared it the Empire’s official religion. The religion survived the Empire’s split and the ‘Dark Ages’ through the witness of groups of monks in monasteries, and formed the basis of civilisation in Europe in the Middle Ages.

The Bible

Christian scriptures are divided into two testaments:

  • The Old Testament (or Hebrew Bible) is a collection of writings originally composed in Hebrew, except for sections of Daniel and Ezra which are in Aramaic. The contents depict Israelite religion from its roots to about the 2c BC.
  • The New Testament, composed in Greek, is so called in Christian circles because it is believed to represent a new ‘testament’ or ‘covenant’ in the long history of God’s interactions with his people, focusing on Jesus’s ministry and the early development of the apostolic churches.

Denominations

Differences in doctrine and practice, however, have led to major divisions in the Christian Church: the Eastern or Orthodox Churches; the Roman Catholic Church, which recognises the Bishop of Rome (the pope) as its head; and the Protestant Churches stemming from the break with the Roman Catholic Church in the Reformation. The desire to convert the non-Christian world and spread Christianity through missionary movements led to the establishment of numerically strong Churches in Asia, Africa and South America.

Passion_Of_The_Christ

Image: Jim Caviezel as “the Lord Jesus Christ” in Mel Gibson’s “Passion of the Christ (2004)” [An extract from the incredible depiction of Jesus Christ’s journey can be viewed here]

Ne vous conformez pas au monde actuel, soyez transformés par l'intelligence - Romains 12-2 d'purb dpurb site web

Romains 12:2 : Ne vous conformez pas au monde actuel, mais soyez transformés par le renouvellement de l’intelligence afin de discerner quelle est la volonté de Dieu, ce qui est bon, agréable et parfait. // Traduction(EN): Romans 12:2 : Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is – his good, pleasing and perfect will.

_______________________________________________________

Part III: Science

Science.jpg

‘Science’ derives from the Latin scientia, ‘knowledge’, from the verb scire, ‘to know’. For many centuries ‘science’ meant knowledge in general, and what is now termed science was known as ‘natural philosophy’, as in Newton’s work of 1687, Philosophiae Naturalis Principia Mathematica (‘The Mathematical Principles of Natural Philosophy’). It can be argued that the word ‘science’ itself was not widely used in its general modern meaning until the 19c, and that this usage came with the prestige that the scientific method of observation, experimentation and development had by then acquired.

Early Civilisations

The first exact science to emerge from the ancient civilisations was astronomy. The heavens were studied so that the ‘will of the gods’ might be foreknown, and in order to make a calendar [which would predict events], which had both practical and religious uses. The ancient Egyptians, for example, although not known as excellent mathematicians, devised a calendar in order to predict the annual flooding of the Nile. Chinese records and observations provide valuable references in modern times for eclipses, comets and the positions of stars. In India, and even more so in Mesopotamia, mathematics was applied to create a more descriptive form of astronomy. The ancient Mesopotamian number system was based on 60, and from it the system of degrees, minutes and seconds was developed.
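The base-60 inheritance described above is still how angles and time are written today. As an illustrative aside (not from the source), a few lines of Python show the degrees-minutes-seconds split at work:

```python
def to_dms(angle):
    """Split a non-negative decimal angle into its base-60 parts:
    degrees, minutes and seconds."""
    degrees = int(angle)
    rest = (angle - degrees) * 60              # 1 degree = 60 minutes
    minutes = int(rest)
    seconds = round((rest - minutes) * 60, 6)  # 1 minute = 60 seconds
    return degrees, minutes, seconds

print(to_dms(12.5075))   # → (12, 30, 27.0)
```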

camel

The Ancient Greeks

It is to be noted that in all these civilisations the emphasis had been on observation and description, as the tendency was to explain phenomena as ‘the nature of things’ or the ‘will of the gods’. The Greeks, looking for more immediate explanations, instead examined phenomena relentlessly and treated the theories propounded by earlier thinkers critically. Thales of Miletus initiated the study of geometry in the 6c BC.

thales

Thales of Miletus (c.620-c.555BC)

Around the same period, Pythagoras discovered the mathematical relationships of the chief musical intervals, crucially relating number relationships to physically observed phenomena. The early Greek natural philosophers (today known as ‘scientists’) passed on two major concepts to their successors: the universe was an ordered structure, and its ordering was organic, not mechanical; all things had a purpose and were imbued with the propensity to develop in accordance with the purpose they were fated to serve.
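Pythagoras's discovery can be made concrete with the whole-number ratios traditionally attributed to him: 2:1 for the octave, 3:2 for the fifth, 4:3 for the fourth. A small illustrative Python sketch (the A440 reference pitch is a modern convention, not from the source):

```python
from fractions import Fraction

# Pythagoras tied the chief musical intervals to whole-number ratios.
octave = Fraction(2, 1)   # halving a string's length raises it an octave
fifth  = Fraction(3, 2)   # the perfect fifth
fourth = Fraction(4, 3)   # the perfect fourth

# A fourth stacked on a fifth spans exactly an octave: 3/2 * 4/3 = 2/1
print(fifth * fourth == octave)   # → True

# Applied to pitch: a perfect fifth above the modern A440 is 660 Hz
print(440 * fifth)                # → 660
```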

The main voice for such ideas to later ages was Aristotle (384-322BC), who provided a cosmology with the earth at its centre in which everything above the moon was subject to circular motion, and everything beneath it [on earth] was composed of one of the four elements: earth, air, fire or water. The whole system was believed to be set in motion by a ‘prime mover’, usually identified with God.

aristotle

This concept was later given a mathematical basis by Ptolemy (c.90-168AD), an astronomer and geographer working in Alexandria, whose main work [a model of the solar system with the earth at its centre], the Almagest, was revered until the 17c. Aristotle also taught that living creatures were divided into species organised hierarchically throughout creation, reproducing unchangingly after their own kind – an idea that remained unchallenged until the great debate on evolution in the 19c. For Aristotle, scientific investigation was a matter of observation; experimentation, by altering natural conditions, falsified the ‘truth of things’.

Archimedes (c.287-212BC) was ancient Greece’s most famous and influential mathematician: he founded the science of hydrostatics, discovered formulae for the areas and volumes of spheres, cylinders and other plane and solid figures, anticipated calculus, and defined the principle of the lever. His principal contribution to scientific advancement perhaps lies in demonstrating how physical properties can be rendered in terms of mathematics, and how the formulae thus produced can be subjected to mathematical manipulation and the results translated back into physical terms.
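As a worked illustration of the results mentioned above, the sphere formula and the law of the lever, here is a short Python sketch (the specific loads and distances are chosen for illustration, not from the source):

```python
import math

def sphere_volume(r):
    """Archimedes' formula for the volume of a sphere: V = (4/3)*pi*r^3."""
    return (4 / 3) * math.pi * r ** 3

def lever_balances(f1, d1, f2, d2, tol=1e-9):
    """Law of the lever: loads balance when f1*d1 equals f2*d2."""
    return abs(f1 * d1 - f2 * d2) < tol

# A sphere inscribed in a cylinder fills exactly 2/3 of it, the result
# Archimedes reportedly wanted engraved on his tombstone.
r = 1.0
cylinder = math.pi * r ** 2 * (2 * r)   # cylinder of height 2r
print(sphere_volume(r) / cylinder)      # roughly 0.667, i.e. two-thirds

# A 2 kg load 3 m from the fulcrum balances a 6 kg load 1 m away.
print(lever_balances(2, 3, 6, 1))       # → True
```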

Archimedes.jpg

Archimedes Thoughtful by Domenico Fetti (1620)

The Middle Ages

The pursuit of mathematical theory and pure science was not of great importance to the Romans, who preferred practical knowledge and concentrated on technology. After the fall of the Roman Empire, ancient Greek texts were preserved in monasteries and, above all, in the Islamic world, where a number system derived from ancient Hindu sources gave mathematics more flexibility than was possible with Roman numerals, combined with a keen interest in astronomy, astrology and medicine.

Aristotelian thought re-emerged in the Christian West in large measure through the work of St Thomas Aquinas in the 13c. Christianity assimilated what it could from Aristotle, as Islam had done some centuries before. Scientific knowledge was still regarded as part of a total system embracing philosophy and theology: a manifestation of God’s power, which could be observed and marvelled at, but not altered. Eventually, Aristotle was proclaimed the ultimate authority and last word in natural philosophy. His enormous prestige, combined with the conservatism of academics and of the Church, placed a brake on the progress of science for several centuries. In the later medieval era and the Renaissance, however, ancient Greek scientific thought was refined, and advances were made both in the Christian Mediterranean and in the Islamic Ottoman Empire. The European voyages of exploration and discovery stimulated much precise astronomical work, done with the intention of assisting navigation. Jewish scholars who could move between the Christian and Muslim worlds were often prominent in this work.

The Scientific Revolution

The Scientific Revolution of the 16c and 17c remains to this day the most defining era in science. It followed close on the Renaissance, and through it the conduct of scientific enquiry in the West underwent a profound change. Nicolaus Copernicus (1473-1543) refuted many aspects of the established Ptolemaic model of the solar system, which placed the earth at the centre of everything, and redefined the system with the sun at the centre instead.

copernican-model-of-the-solar-system

The German mathematician Johannes Kepler (1571-1630), influenced by Copernicus’s work, concluded that the planets’ orbits around the sun are elliptical rather than circular. Galileo Galilei, now championed by many intellectuals as the father of modern science, was an Italian philosopher, mathematician and scientist who improved on the telescope that had been invented in Holland, and used it to make observations that included the Milky Way and Jupiter’s satellites. His further research convinced him of the truth of the new Copernican system [with the sun at the centre], but under threat from the Inquisition he recanted.

In England, William Gilbert (1544-1603) established the magnetic nature of the earth and was the first to describe electricity; William Harvey (1578-1657) explained the circulation of blood; and Robert Boyle (1627-91) studied the behaviour of gases under pressure – all in the course of the 17c.

newton

Isaac Newton (1642-1727)

Isaac Newton (1642-1727), who was to replace Aristotle as the leading authority in natural philosophy for the next two centuries, also came from England. He established the universal law of gravitation as the key to the secrets of the universe. In 1687 he published his groundbreaking work, the Principia, which stated his three laws of motion. Independently of Gottfried Leibniz (1646-1716) he invented calculus, and he also did enormously influential work on optics and the nature of light.

Cooperation and discourse among scientists and intellectuals were fostered by the creation of societies where they could meet and discuss their work: for example, the Royal Society in London, which received its royal charter in 1662, and the Académie des Sciences in Paris, founded in 1666. Discoveries made by one scientist were used by others to advance faster to new theories, and science thereby gained status and prestige as a driving force in society.

The 18-19c

The 18c Enlightenment saw its writers play a major part in bringing the scientific advances of the previous century to the wider public and further enhancing the prestige of science as a reliable driving force of civilisation. The scientific method – observation, research, even experimentation and the use of reason, unfettered by preconceptions or dogma to analyse the findings – was applied to almost all aspects of human life.

Chemistry saw significant advances in the latter part of the century – notably the discovery of oxygen by Lavoisier in France, Priestley in Britain and Scheele in Sweden. The Industrial Revolution was a substantial demonstration of scientific knowledge’s impact on society, drawing on minds from various fields with various intentions. The discovery of the aniline dyes led to a ‘revolution’ in the textile industry – an example of science’s usefulness in the ‘eyes of the public’, which gradually led to more public support and hence government funding. The École Polytechnique was founded in France in 1794 to propagate the benefits of scientific discovery throughout society. Elsewhere, technical institutions followed that were funded for scientific work – the new era of the professional scientist had begun.

Throughout the 18c botany also advanced, as Linnaeus invented his system of binomial nomenclature (1735), while ever-growing interest was aroused by the great variety of new species of plants and animals being discovered by explorers, particularly Captain Cook.

lamarck

The work of the French naturalist Jean-Baptiste Lamarck (1749-1829) foreshadowed Charles Darwin’s theories of evolution and made the first break with the notion of immutable species proposed by Aristotle. The same period saw geology develop into a science: William Smith (1769-1839), ‘the father of English geology’, was drawn to investigate strata while working as an engineer on the Somerset coal canal, and eventually became the first to identify strata by the different fossils found in them. The epoch-making conclusions of Darwin’s (1809-1882) theory of evolution, published as On the Origin of Species in 1859, were accepted by almost all biologists, though they clashed with the ideologies promoted by the church. The laws of heredity worked out by Gregor Mendel (1822-84) were unfortunately not appreciated in his lifetime, only later becoming the founding stone of genetic research. The germ theory of disease was also shaped by the contributions of the iconic French chemist Louis Pasteur (1822-1895), who moved into biology. The germ theory states that each infectious disease is caused by a microbe [or germ] specific to that disease, and that one must be able to isolate this microbe from the diseased patient in order to cure the disease.

Louis Pasteur d'purb dpurb site web

Image: Louis Pasteur (1822-1895), the French chemist considered one of the giants of modern medicine for his research and discoveries on vaccination, and from whom this famous quote comes: « Science has no homeland, because knowledge is the heritage of humanity, the torch that lights up the world. »

Physics also saw tremendous advances in the 19c. The Italian Alessandro Volta (1745-1827) developed the theory of current electricity and invented the electric battery [announced in a letter, written in French, to the Royal Society], which made electrolysis possible. Michael Faraday (1791-1867) carried out experiments with magnetism and electricity which enabled the building of generators and motors. James Clerk Maxwell (1831-79) proposed the field theory of electromagnetism, which mathematically related the phenomena of electricity, magnetism and light. He also predicted the existence of radio waves, which was eventually demonstrated by Heinrich Hertz (1857-1894).
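Maxwell's mathematical linking of electricity, magnetism and light can be illustrated numerically: his theory implies that the speed of electromagnetic waves is fixed by two measurable electrical constants, c = 1/√(μ₀ε₀). A brief Python check (the constant values are modern reference figures, not from the source):

```python
import math

# Maxwell's theory fixes the speed of electromagnetic waves from two
# electrical constants alone: c = 1 / sqrt(mu0 * eps0).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability in H/m (classical value)
eps0 = 8.8541878128e-12    # vacuum permittivity in F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"{c:.0f} m/s")      # close to 299,792,458 m/s, the speed of light
```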

Although science itself had not been of major importance in the very early stages of the Industrial Revolution in 18c Britain, by the end of the 19c technology – influenced by the work of scientists – had led to the development of most of the machines and tools that were to transform life for most of humankind in the developed world in the following century. Germany, as a newly unified nation, excelled and innovated between 1870 and 1914, a period in which scientific education and applied science became major parts of the educational system all the way up to the tertiary level. A research culture with the ability to generate change became instilled and institutionalised as part of German education, culture and philosophy.

800px-Reichsadler_der_Deutsches_Reich_(1933–1945)

The Reichsadler or Emblem of the Deutsches Reich (1933–1945) with the Swastika symbol

Atomic physics and relativity

The theory that all matter is made up of minute and indivisible particles known as atoms was proposed by the ancient Greeks. Newton, and various early 19c scientists such as John Dalton (1766-1844), Amedeo Avogadro (1776-1856) and William Prout (1785-1850), made significant contributions to refining the concepts of the atom and the molecule, and in 1869 Dmitri Mendeleyev (1834-1907) conceived the periodic table, classifying the known elements according to their chemical properties and atomic weights.

theatom

An Atom

Albert Einstein’s (1879-1955) theoretical work helped give rise to the development of quantum theory in the early 20c. Einstein’s theory of relativity incorporated Maxwell’s electromagnetic theory and Newton’s mechanics, while also predicting departures from the classical behaviour of materials at velocities approaching the speed of light. Einstein also provided the century’s most famous formula – E = mc² – defining the mass equivalence of energy. The existence of subatomic particles, the building blocks of atoms and their nuclei, was postulated after a series of experiments with ionising radiations. The energy release created by the splitting of the atomic nucleus, predicted by Einstein, was demonstrated by Ernest Rutherford (1871-1937) in 1919. Force fields and their subatomic particles were studied further in the second half of the 20c through the use of large particle accelerators [up to 27km/17mi in length], with a view to forming a unified theory that would describe all forces, including gravity.
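The scale of the mass-energy equivalence E = mc² is easy to appreciate with a line of arithmetic. As an illustrative aside, converting just one gram of matter:

```python
# E = m * c**2: the energy locked in one gram of matter.
c = 299_792_458      # speed of light in m/s
m = 0.001            # mass in kg (one gram)

E = m * c ** 2       # energy in joules
print(f"{E:.3e} J")  # ≈ 8.988e13 J, comparable to a large power
                     # station's output over a day
```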

unifiedtheory

What the laboratory could not provide was gained through astronomical observation, which yielded complementary information for understanding the universe on both the microscopic and the cosmic scale.

The understanding of the atom in terms of a heavy nucleus surrounded by light electrons has led to a deeper knowledge of the chemical and electronic properties of materials and ways of modelling them. Near the end of the 20c, such advancement enabled the ‘tailor-making’ of materials, substances and devices exploited in chemical, pharmaceutical and electronic products.

Genetics and beyond

The study of the basic building blocks of organic life was largely influenced by the 20c study of the atom. Research into the nature of the chemical bond and molecular structure, applied in biology, led to the work on DNA. Investigation by Francis Crick (1916-2004), James Watson (1928- ) and Maurice Wilkins (1916-2004) in the early 1950s revealed the famous helical structure, composed of four types of nucleotide bases, whose ordered sequence pointed to the existence of a genetic code.
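The four bases pair specifically across the two strands of the helix (adenine with thymine, guanine with cytosine), which is what makes the molecule copyable. A minimal illustrative Python sketch of that pairing rule:

```python
# Each base binds only its partner across the double helix:
# adenine-thymine and guanine-cytosine.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the partner strand implied by the base-pairing rule."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGGCA"))   # → TACCGT
```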

dna

The latter half of the century saw a surge in genetic science, suddenly unlocking the possibility of cloning and, even more controversially, of ‘tailor-making’ or ‘engineering’ living beings.

The pace of scientific development has kept accelerating since the Renaissance and the Scientific Revolution of the 16c and 17c. In the 20c the growth was exponential, and new information gained from research and experiment is still being applied in science and technology in the search for newer and more efficient sources of power and tools, and to meet the ever-increasing demand for useful, smarter and environmentally friendly materials – satisfying the demands of civilisation while maintaining the fragile balance of our ecosystem, strained by excessive exploitation and fossil fuel use. The public perception of science is unfortunately based only on its practical applications in everyday life and not on the more life-changing matters such as atomic physics or genetics – which remain as remote from the average citizen as they have ever been.

Like religion, science arose out of the desire to explain the world around us. The clashes between the two institutions have been fierce and hard fought, although by the 20c science was crowned as the dominant orthodoxy guiding civilisation. Yet, with the recognition of uncertainty and the development of chaos theory, science may now be less dogmatic than at any time since the Renaissance.

The Scientific Revolution of the 16c & 17c: where science was established as a driving force

scientific-revolution

The Scientific Revolution could be described by many scientists, intellectuals and historians as an era born of a thirst for development and knowledge: it started just after the Renaissance, near the end of the 15c, and gave birth to science as it is known today. Perhaps its lasting appeal is that it helped refine intellectual thought and establish the founding methods of investigation still used by all fields of science today. In fact, the Scientific Revolution is the name given to a change in the very nature of intellectual inquiry – the way in which civilisation thought about and investigated the natural world. This wave began near the end of the 15c in Europe, and until it was accomplished, or at least under way, it could be argued that none of the thinkers, intellectuals and scholars of Christian Europe could properly be called ‘scientists’.

The medieval mind set

Although the Middle Ages lacked the sophistication of today’s society, original thinkers did exist. It is fair to say, however, that scholasticism – the term given to the theological and philosophical thought of the period – operated within a tightly structured and closed system: the universe was God’s creation, and the primary truths revealing its nature and workings were found only in the Bible. As knowledge, the Bible was supported by the writings of selected authors of immemorial and unimpeachable authority, namely Galen, Aristotle and the Church Fathers. If one wanted to establish the truth in any matter, one would first seek support from such an authority, and if support was found, the case was closed. The desire to challenge critically and push boundaries was clearly absent, contrary to what many may have believed; most attempted rather to move closer to the supposedly ‘true meanings’ of what had already been authoritatively established or formulated. When Bishop James Ussher, as late as the 1650s, tried to investigate the age of the world, his attention went no further than Holy Scripture: by voraciously studying Biblical chronology he arrived at a precise date for the Creation – 4004BC.

TheCreationOfAdam.jpg

The Creation of Adam by Michelangelo (part of the Sistine Chapel painted in 1508-1512)

Moreover, it was also axiomatic for the times, and for the credibility of so powerful a voice as the church, that there be no loose ends in God’s original ‘perfect Creation’. Although the Fall of Man had introduced uncertainty into the cosmos, evidence of the intended order was still discernible – there was an underlying order, pattern and correspondence everywhere. Things could, in most cases, best be understood or described by analogy with one another: as God governs the universe, so the sun was the most powerful of all the ‘planets’ circling the earth, so the king was chief ruler among men, so reason should rule over the inner life of humankind, and so the lion was the king of beasts. Nowadays it would not reveal much about the lion to claim that its position in the animal kingdom is equivalent to that of a king among men or the sun among the planets; in medieval times the conversation would be closed there, with no room for questioning or clarification.

The Renaissance and the Reformation

The process of modernising and opening up this closed system began with the Renaissance, the Reformation, and the voyages of exploration and discovery. Those living during the Renaissance possessed new knowledge, or new access to old sources. Many thinkers and intellectuals of the time believed themselves part of a movement making a significant break with the past, paving the way for a new era of modern knowledge. A process of secularising knowledge began, prising it away from its basis in theology and making the study of subjects such as science and mathematics a thing of value in its own right. In northern Europe the Reformers rejected the authority of the Church and instilled in believers the confidence to study the Word of God – and, by extension, His works – for themselves. The voyages of discovery made known the existence of new worlds entirely unsuspected by the ancients, leading to the questioning not only of the geographical authorities but of other authorities as well.

Copernicus, Kepler and Galileo

The Polish astronomer Nicolaus Copernicus (1473-1543) completed his work De Revolutionibus Orbium Coelestium (‘On the Revolutions of the Heavenly Spheres’). It represented the mature expression of an idea he had sketched earlier in a brief commentary: namely, that the sun was the centre of the universe, and that the earth and the other planets revolved around it. The work was published as a book in Nuremberg by a Lutheran printer in 1543, the year of Copernicus’s death.

Copernicus’s theory, if accepted, not only destroyed the old earth-centred system devised by Ptolemy, but also made obsolete all the analogies based on that cosmology. The new model, however, was accepted by few – not even by the renowned Tycho Brahe (1546-1601), who himself contributed hugely to astronomy during the 16c through his observations of the stars and their movements. De Revolutionibus was placed on the Roman Catholic Church’s Index of Forbidden Books in 1616 and remained there until 1835.

The Copernican theory was, however, accepted by Johannes Kepler (1571-1630), a German mathematician and astronomer who was Tycho Brahe’s assistant and, on his death, succeeded him as imperial mathematician and court astronomer in Prague. Kepler’s intensive work on planetary orbits developed the theory further and gave it a mathematical foundation. His laws of planetary motion, published in Astronomia Nova (‘New Astronomy’) in 1609 and Harmonice Mundi (‘The Harmony of the World’) in 1619, formed an essential foundation for the later discoveries of Isaac Newton (1642-1727). Kepler also made significant discoveries in optics, general physics and geometry. It may be noted, as a mark of the still fragile and transitional status of science in the 17c, that he was appointed astrologer to Albrecht Wallenstein, the Catholic general who commanded the imperial armies in the Thirty Years’ War; Newton, too, was a student of alchemy.
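Kepler’s laws, mentioned above only by name, can be stated compactly in modern notation (a sketch for orientation, not Kepler’s own formulation):

```latex
% Kepler's laws of planetary motion (modern statement)
% 1. Each planet moves on an ellipse with the Sun at one focus.
% 2. The line from the Sun to the planet sweeps out equal areas
%    in equal times.
% 3. The square of the orbital period T is proportional to the cube
%    of the semi-major axis a of the orbit:
\[
  T^{2} \propto a^{3}
\]
```

It was the third law in particular that later supplied Newton with a quantitative target for his theory of gravitation.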

galileogalilei

Galileo Galilei (1564-1642)

The Copernican theory was also accepted by Kepler’s older Italian contemporary, Galileo, who first took issue with Aristotle while studying in Pisa. When he was made Professor of Mathematics there in 1589, he disproved Aristotle’s assumption that the speed of an object’s descent is proportional to its weight – a demonstration he is said to have made for his students by releasing objects of varying weight from the Leaning Tower of Pisa. After his Aristotelian colleagues pressured him into giving up his chair, Galileo left Pisa; by this time he had also recognised the value of the pendulum for the exact measurement of time, created a hydrostatic balance, and written a treatise on specific gravity. From 1592 to 1610, as Professor of Mathematics in Padua, Galileo modified and perfected the refracting telescope after learning of its invention in Holland in 1608, and used it – a powerful tool denied to Copernicus and Tycho Brahe – to make remarkable discoveries, notably the four moons of Jupiter and the sunspots. These further confirmed his acceptance of the Copernican system, which held that the earth moves around the sun – a view he had first formed around 1595.

solarsystem

However, Galileo’s daring conclusions led to conflicts not only with traditionalist academics but, more seriously, with the Church, on account of his writings while employed as court mathematician in Florence from 1613. A warning from Cardinal Bellarmine in 1616 instructed him to drop his support of the Copernican system, as belief in a moving earth contradicted the Bible. After several years of excruciating silence, in 1632 he published Dialogo sopra i due massimi sistemi del mondo (‘Dialogue on the Two Chief World Systems’) in which, in the context of a discussion of the cycles of the tides, he came down in favour of the Copernican system. The savage religious laws of the times saw Galileo compelled to abjure his position and sentenced to indefinite imprisonment – a sentence commuted immediately to house arrest. After abjuring he is believed to have murmured ‘eppur si muove’ (‘and yet it moves’).


More Progress

The 16c saw major strides in all branches of science. The Belgian Andreas Vesalius (1514-1564) became one of the first scientists to dissect human cadavers and, based on his observations, published De Humani Corporis Fabrica (1543, ‘On the Structure of the Human Body’) – the very same year that Copernicus’s De Revolutionibus appeared. The work repudiated the anatomical principles of Galen and paved the way for William Harvey’s discovery of the circulation of the blood, explained in a book of 1628. The work of Galileo, meanwhile, had an impact not only on knowledge itself but on many other intellectuals, such as Evangelista Torricelli (1608-1647), the inventor of the barometer [a vital piece of equipment for experimentation], and the Dutch physicist Christiaan Huygens (1629-1693), the inventor of the pendulum clock, the discoverer of the polarisation of light, and the first to put forward the idea of its wave nature.

humani-corporis-fabrica

De Humani Corporis Fabrica by Andreas Vesalius (1543)

Around the same period, the Irish experimental philosopher and chemist Robert Boyle (1627-1691), formulator of ‘Boyle’s Law’, was studying the characteristics of air and vacuum by means of an air pump created in partnership with his assistant Robert Hooke (1635-1703). Boyle played an active part in the anti-scholastic ‘invisible college’ meetings of Oxford intellectuals, a precursor of the Royal Society, and his air pump became a powerful symbol of the ‘experimental philosophy’ promoted by the Royal Society after its founding in 1660. In 1662, Robert Hooke became the Royal Society’s first curator of experiments.
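Boyle’s Law, which the text mentions only by name, states that for a fixed amount of gas at constant temperature the product of pressure and volume is constant (p₁V₁ = p₂V₂). A minimal sketch in Python, with hypothetical numbers, illustrates the relationship:

```python
def boyle_final_pressure(p1: float, v1: float, v2: float) -> float:
    """Boyle's Law: at constant temperature, p1 * v1 == p2 * v2.

    Returns the final pressure p2 after a fixed amount of gas changes
    volume from v1 to v2. Units are arbitrary but must be consistent.
    """
    if v2 <= 0:
        raise ValueError("volume must be positive")
    return p1 * v1 / v2

# Hypothetical example: compressing a gas at 100 kPa to half its
# volume doubles the pressure.
print(boyle_final_pressure(100.0, 2.0, 1.0))  # prints 200.0
```

The inverse relationship between pressure and volume was exactly what Boyle’s air-pump experiments made measurable.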

The Royal Society gradually provided a forum and focus for scientific discussion and a means of disseminating scientific knowledge – its Philosophical Transactions became the first professional scientific journal. Together with comparable institutions in other countries, such as the Académie des Sciences of Paris, founded in 1666, it promoted the systematisation of the scientific method and of the way in which experiments and discoveries were reported; plain language and the detailed, systematic description of experiments were emphasised for the sake of reproducibility. The creation of prominent scientific associations also marked a milestone in the socio-cultural acceptance of science.

Newton

The culmination of the Scientific Revolution is generally held to lie in the work of Isaac Newton, whose early mathematical studies led to the invention – simultaneously with Gottfried Leibniz (1646-1716) – of differential calculus. While studying the behaviour of light through prisms, he created the first reflecting telescope, a pivotal tool for the astronomers who followed. In 1684 Newton presented his theory of gravitation, and in 1687 his famous Philosophiae Naturalis Principia Mathematica (‘Mathematical Principles of Natural Philosophy’), which stated his three laws of motion, became the founding stone of modern physics – unchallenged until the arrival of Einstein in the early 20c.

Most importantly, Newton’s universal law of gravitation not only explained the movements of the planets within the Copernican system but accounted for even such humble events as the fall of an apple from a tree. More surprisingly still, it never excluded God from the universe: all of Newton’s work was undertaken within the framework of a devout Christian faith, though his private beliefs were complex and heterodox.
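Newton’s law, described above, can be written in modern notation (G, the gravitational constant, was only measured later, by Cavendish in 1798):

```latex
% Universal gravitation: the attractive force between two masses
% m_1 and m_2 separated by a distance r.
\[
  F = G \, \frac{m_1 m_2}{r^{2}}
\]
% Combined with Newton's second law, F = ma, the same expression
% governs both the falling apple and the orbiting planet.
```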

By the time of his death in 1727, the scientific method was firmly established, and the thinkers, intellectuals and writers of the Enlightenment acknowledged that an era had dawned in which observation, experiment and the free application of human reason were the foundation of knowledge. In fusing science with culture, and in spreading knowledge of the discoveries of previous centuries through a variety of themes and outlets, the writers of the Enlightenment helped to establish the prestige that science and its practitioners have inherited and enjoyed down to the present day.

_______________________________________________________

 

Part IV: Medicine

medicine

From the earliest times of human civilisation, all societies seem to have had some knowledge of herbal remedies and to have practised some folk medicine. Since the cause of illness was believed to be rooted in supernatural causes, most patients in the earliest days were treated with the objective of regaining the favour of the gods or of ‘releasing’ the evil from the body. In early civilisations such as Egypt and Mesopotamia, for example, salves were used as part of a medical practice that included divination to obtain a prognosis and incantation to help the sufferer. In the East, doctors in India documented many commonly occurring diseases, used some drugs still exploited by modern medicine, and performed surgery that included skin grafts. In some parts of the world, societies banned the cutting of dead bodies on religious grounds fused with the law; unsurprisingly, knowledge of physical anatomy there remained incredibly basic. Early Chinese society also banned the desecration of the dead, with the result that Chinese concepts of physiology were not based on observational analysis. A developed medical tradition nevertheless flourished in China from the earliest times to the present day, with special focus placed on the pulse as a means of diagnosis.

yinyang

In Chinese medical philosophy, the objective is to balance the yin (the negative, dark, feminine, cold, passive element) and the yang (the positive, light, masculine, warm, active element), and the pharmacopoeia for achieving this draws on vegetable, animal and mineral substances. Similarly important is the practice of acupuncture, in which needles are used to alter the flow of ch’i (energy), believed to travel along invisible channels in the body (meridians). Anaesthesia, acupuncture’s most widespread use, is what puts its efficacy to the test.

The sophistication of modernity in the West began to set a new course for medicine when it was partially rationalised by the Greek philosophers; before this it had been mainly an aspect of religion.

asclepius

Asclepius, the Greek god of medicine

In ancient Greece, for example, people suffering from illness would go to the temple of the god Asclepius for incubation – a sleep during which the god would visit in a dream, which the priests would then interpret to reveal the diagnosis or advice for the cure. Empedocles later proposed that four elements exist – fire, air, earth and water – corresponding in the human body to blood, phlegm, yellow bile and black bile, humours that must be maintained in harmonious balance. The concept was further reinforced when it was adopted by Aristotle (384-322BC), and it remained a founding pillar of Western medicine until the new discoveries of the 18c. Aristotle also observed the world from the viewpoint of a biologist, performing dissections of animals and learning more of anatomy and embryology.

hippocrates

After Aristotle’s death, the main learning centre of the Greek world became Alexandria, where the principles expounded by Hippocrates (c.460-c.377BC) were upheld. Hippocrates had rejected obsolete ideas such as illnesses being caused by the gods, raising instead a new school of thought in which diagnosis and prognosis were made after careful observation and consideration. Today, Hippocrates is regarded as the ‘father of medicine’, and sections of the oath attributed to him are still used in medical schools to this day.

Galen (c.130-c.201), a Greek doctor who studied at Alexandria and later went to Rome, was the next major and defining influence on Western medicine. Galen gathered up all the existing writings of Greek doctors and emphasised the importance of anatomy to medicine. He used apes to find out how the body worked, since dissection of human bodies was then illegal. Although his daring efforts served medicine well, his reports contained many anatomical mistakes, including on the movement of blood around the body, which he described as ‘ebbing and flowing’ rather than circulating.

apetimes

The point worth noting is that, although these were early times in human history, Rome had already developed an excellent culture with high regard for public health; more strikingly still, the Romans had clean drinking water, hospitals and sewage disposal – amenities not matched on such a scale by later civilisations until modern times.

After the Roman Empire fell, the practice of medicine resided in the infirmaries of the monasteries. In the 12c, as medicine was developing into an important necessity across society, the first medical school was established at Salerno. Many other medical schools in Europe followed, namely Bologna, Padua, Montpellier and Paris. Mondino dei Liucci (c.1270-1326) published the very first manual of anatomy after carrying out his own dissections in Bologna. The greatest advance, however, came from the Belgian Andreas Vesalius (1514-1564), who contributed incredibly detailed sketches, descriptions and drawings, published in 1543, correcting the errors of Galen. Vesalius is said to have been sentenced to death by the Inquisition for performing human dissections [once again an occasion where religious traditions came in the way of reason and research]; however, a new wave of inquisitive intellectuals had already surfaced abroad who could not be stopped.

A better and more precise knowledge of anatomy led to an improvement in surgical techniques, and surgeons, long considered inferior practitioners by physicians, began to be recognised as a major part of medicine and its procedures. The huge increase in the armies of Europe in the 16c and 17c created greater demand for effective military surgery. Ambroise Paré (1510-1590) reformed surgical practice in France, replacing the cauterising of wounds with the ligature of blood vessels, while in the United Kingdom collectives of medical practitioners were formed which later became the College of Surgeons.

LavoisierAndHisWifeMarieAnne.jpg

French nobleman and chemist Antoine Lavoisier (1743-1794) and his chemist wife Marie-Anne (1758-1826)

In 1628, William Harvey formulated the theory of the circulation of the blood from his experiments, a theory later reinforced by Marcello Malpighi’s work. However, it took more than a hundred years for medicine to fully understand the purpose of circulating blood, until the French chemist Antoine Lavoisier (1743-1794) established the role of oxygen, which the blood transports to the various parts of the human body. A new approach to obstetrics was also developed at that time, along with the growth of microscopical studies, and by the end of the 18c Europe had been introduced to vaccines, which helped to eradicate previously deadly diseases such as smallpox in the 20c.

Marcello_Malpighi.jpg

Biologist and physician, Marcello Malpighi (1628-1694)

In the 19c, scientific research generated new knowledge about physiology, and medicine saw refinements to aid diagnosis, such as the introduction of the stethoscope and of chest percussion. The field of bacteriology was born out of the work of Louis Pasteur (1822-1895), who established the germ theory of disease transmission. This had a major impact and transformed safety for all patients – for example in obstetrics, where women had regularly been dying of puerperal fever before investigation revealed that doctors themselves were transmitting bacteria from diseased patients to healthy ones. The first use of ether as an anaesthetic in the USA in 1846, and of chloroform in Scotland in 1847, made way for another major advance in surgery: anaesthetic gases opened the door to more minute, longer and more complicated operations.

The wave of cutting-edge and precise research continued through the 19c, with many conditions recognised and described in detail for medical education for the first time. Precautions were taken to halt the propagation of malaria and yellow fever after it was revealed that insect bites could transmit them.

Around the end of the 19c, the birth of psychology as the study of the ‘mind’ was taking place with Sigmund Freud’s work [See: Psychoanalysis: History, Foundations, Legacy, Impact & Evolution], while Röntgen’s discovery of X-rays, along with Pierre and Marie Curie’s radium, provided new diagnostic tools for medicine.

The 20c continued to flourish with progress when haphazard discoveries of bacteria-killing organisms were made, most famously by Alexander Fleming, the Scottish bacteriologist and Nobel prize winner who served in the Army Medical Corps during the First World War. After qualifying with distinction in 1906, Fleming went straight into research at the University of London. There, in 1928, he would make one of the most important discoveries in medicine from a simple observation: a mould that had accidentally developed on a set of culture dishes used to grow staphylococci had created a bacteria-free circle around itself. After careful observation and research, the bacteria-repelling substance produced by the mould was named Penicillin. The drug would later be developed further by two other scientists, the Australian Howard Florey and Ernst Chain, a refugee from Nazi Germany [all three shared the Nobel Prize in Medicine]. Although the first supplies of Penicillin were limited, by the 1940s the pharmaceutical industry had made it a top priority and it was mass-produced by the American drugs industry.

The era also saw the growth of advanced technology and the further development of various forms of drug treatment: the sulfonamides on their discovery, followed by streptomycin, the first effective antibiotic against tuberculosis, which until then had been fatal. Diabetes, similarly once deadly, was explored and treated with the discovery of insulin, turning it into a controllable condition – and a new breed of surgeons now claim to have found surgical methods to completely reverse the Type-2 diabetes that affects most sufferers.

Typhoid, tetanus, diphtheria, tuberculosis, measles, whooping cough and polio were mostly eradicated in the West as the 20c was marked by improved public health services, living conditions and nutrition, along with well-devised, scientifically backed immunisation campaigns for children. The West was also freed of diseases such as rickets and scurvy as new discoveries were made on the role and importance of vitamins – discoveries which also led to the mitigation of beriberi in Africa and Asia early in the century.

Malaria, yellow fever and leprosy were also found to be curable, and now that, with all the advancement in medicine, most people live longer in developed economies [with the exception of some that have mediocre policies due to their mediocre management systems, e.g. politics], the chief causes of death nowadays are cancer and heart disease.

 

lifeexpectancy

Life Expectancy in the United Kingdom / Source: OurWorldinData.org

 

lifeexpectancyglobal

Life Expectancy Global / Source: OurWorldinData.org

Unleashing the power of genetics against cancer

Source: Cambridge University

In the field of cancer research, new therapies involving various techniques are now available and continuously being developed, the most recent and promising being CRISPR-based cell therapy, which uses a patient’s own immune system to fight cancer through a particular type of immune cell known as the T cell. The logic behind it exploits the usual purpose of T cells in the human body: surveying the body to seek out and destroy abnormal cells that have the potential to turn cancerous – detected by T cells through the presence of strange proteins on their surface [signs that the T cell reads as ‘dangerous’]. Cancers, however, have evolved a cat-and-mouse game to evade T cells, developing the ability to ‘switch off’ any T cell that gets in their way, effectively blocking its healing attack. The most effective cancer therapies try to counteract this response of abnormal and cancerous cells by boosting the immune system.

CRISPR: the promising new cancer treatment

In 2015, a study used an older, less efficient gene-engineering technique known as the ‘zinc-finger’ nuclease to give T cells a better fighting ability against HIV – the therapy was well tolerated in a 12-person test group. A further study used reprogrammed T cells from multiple myeloma patients for the specific recognition of cancer cells; the tumours shrank initially, but the T cells gradually withered and lost their ability to regenerate themselves – a common issue that new trials hope to solve in the near future. Perhaps one of the most unfortunate parts of the story is that CRISPR, despite being a promising cell therapy, is often offered to and used on patients whose disease has relapsed.

Other genes can also be ‘tweaked’ with the CRISPR method, in particular the gene for the protein PD-1, to counter the problem of T cells losing their fighting ability. PD-1 sits on the surface of T cells and helps dampen their activity after an immune response; tumours have found ways to hide by flipping the PD-1 switch themselves, so drugs that block PD-1 from this immune suppression have already proven a promising immunotherapy for cancer. Removing PD-1 helps prolong the useful lifespan of the modified T cells while simultaneously enhancing their cancer-fighting ability.

Researchers are currently carrying out intensive research to understand the deeper mechanics of CRISPR by removing T cells from patients whose cancers have stopped responding to normal treatments, delivering the CRISPR machinery into the cells using a harmless virus, and performing three gene edits on them. The first edit inserts a gene for a protein called the NY-ESO-1 receptor, which equips T cells with an enhanced ability to locate, recognise and destroy cancerous cells [those displaying NY-ESO-1]. The T cells carry a native receptor that unfortunately interferes with this added protein, so the second edit removes it, giving the engineered receptor greater efficiency against cancer. The third edit gives the T cell longevity by removing the PD-1 gene, so that cancer cells can no longer use the PD-1 switch to disable it. Added guide RNAs tell CRISPR’s DNA-snipping enzyme, Cas9, exactly where to cut the genome.

However, since CRISPR is not always effective, not all cells receive the genetic modification; the engineered cells end up as a mixture with various combinations of the proposed changes, and only 3-4% may contain all three edits. After the edits, the researchers generally infuse all the edited cells back into patients and monitor closely for issues. One of the main concerns with CRISPR is that it may inadvertently snip other genes, potentially creating new cancer genes or triggering existing ones; these side effects are monitored by a team that measures the growth rate of the engineered T cells and tests for genomic abnormalities. The concluding outlook on CRISPR is nonetheless very bright: in a pilot run using T cells from healthy donors, the researchers checked 148 genes that could be snipped by mistake, and the only faulty cut detected was deemed harmless. Another major concern is the fear of activating the body’s immune system against the engineered T cells, since the enzyme Cas9 originates from bacteria and is essential to the cutting process CRISPR relies on – although ways exist to prevent the immune system from destroying engineered Cas9 T cells, the possibility remains.

Gene therapy trials suffered a major setback with the death of the young patient Jesse Gelsinger during a trial. Further investigation revealed that some of the researchers had failed to disclose side effects observed in animals, and that some of the investigators had a financial incentive for the trial to succeed. Extra precaution is now being taken by UPenn, which pioneered the treatment, to ensure the smooth progression of medical genetics. As the Stanford bioethicist Dr. Mildred Cho said, “Often we have to take a leap of faith.”

Cancer research and treatment on the whole has seen innovations in surgery, chemotherapy, radiation therapy, combinations of these, and the promising new method of gene-editing Cas9-based T cells with the CRISPR technique. Together these have improved the prognosis for many sufferers. In cardiology too, new treatments stunned the world, notably angiograms, open-heart surgery and heart transplants. Organ transplantation has gradually been extended to lungs, livers and kidneys, and artificial joints for the hips and knees have also been improved.

Education on family planning has been available and constantly updated since the 1960s, when methods of contraception were first marketed to the wider public [such as the oral contraceptive pill for women]. Abortion, controversial as it remains, was given scientific legitimacy, made safer and legalised in many economies, while at the other end of the scale couples unable to conceive benefited from fertility drugs, and in vitro fertilisation provided many with the choice of starting a family.

With the growing discoveries and near-godly feats of medicine, public perception of the field also changed, and many soon started to entertain the belief that a cure exists for every ill. Unfortunately this is not true: many complicated diseases such as cancer continue to defy knowledge and scientific research, and new diseases and complications continue to emerge, such as Ebola, HIV and antibiotic resistance. The constant struggle of third-world economies to keep up with medical costs has also led to major culturally destructive waves of migration that have very quickly turned out to be unsustainable for most major Western economies, with the attendant religious and socio-cultural clashes a constant topic of debate in most educated circles and the connected alternative media alike across Europe [to counter some of the extreme liberal & atavistic views promoted by the mainstream media fuelled by ruthless & unscrupulous globalists].

 

The economic grip of pharmaceutical companies on the world’s economy has been a central issue for many concerned scientists and intellectuals of the times, who constantly question where the responsibility lies for funding and providing cutting-edge and hygienic health services for the people. On the other hand, the controversial but vital question of access to organs for transplantation has caused major social debates over future cultural attitudes to the organs of the dead and the provision of a constant supply of fresh organs for Western economies’ major health requirements.

While the Western model of medicine is the most effective, researched, respected and taught on earth, other sub-disciplines of medicine that many medical empiricists consider to be complete lies continue to prosper on a medium scale, sustained by a surprisingly constant demand for folk and herbal medicines. In the urban areas of non-Western societies the trend is on a larger scale, since Western medicine has still not made a significant impact on the adherents of traditional practices. Medically unproven and scientifically void practices such as chiropractic, aromatherapy, auto-suggestion, homoeopathy, osteopathy and hydrotherapy still exist in the West under the classification of ‘complementary medicine’, where many of the practitioners do not require any degree or certificate to ‘practise’ [a documentary with Dr. Richard Dawkins explored this topic in the UK]. Most of these treatments without scientific grounding nonetheless have long histories, and a chosen few, such as acupuncture, have been fused into Western orthodox medical practice in countries such as the UK.

La science n'a pas de patrie, parce que le savoir est le patrimoine de l'humanité, le flambeau qui éclaire le monde d'purb dpurb site web

Traduction(EN): « Science has no homeland, because knowledge is the heritage of humanity, the torch that lights up the world. » – Louis Pasteur

_______________________________________________________

Part V: Secularisation

Secularisation may be defined as the process of change by which authority passes from a religious source to a secular one. This becomes an issue only where religion and the religious have gained considerable power or a dominant position in society and penetrate all aspects of life, including government. In ancient Greece and Rome, for instance, religion does not seem ever to have dominated the state: the main religious offices were held by the same men who held political office [religion may have been seen as simply a part of national culture]. While virtue consisted of piety, and observance of the gods was expected, religion was rarely a primary focus for society. Furthermore, polytheism lent the system flexibility, as new gods and goddesses could be added to the pantheon to accommodate local cults, and an individual had the freedom to choose a deity as his or her special patron. Prudence demanded, however, that other divinities not be neglected, and none of this was of major concern to the state.

Yet with the great rise of the monotheistic, proselytising religions of Christianity and Islam, the relationship between religion and the state began to change. Now, as a matter of righteousness and of justifying itself as a moral authority, the state had to align with the religious beliefs that ruled most of the West; ensuring salvation became a duty of the state and the founding pillar of ‘right religion’. This acceptance and spread of Christianity increased the power and influence of Christian kings, alongside whom emerged a body of clerical men who claimed to exercise on earth the spiritual ministry of the most almighty of beings, God. Large amounts of money, land and property were donated by individuals, organisations and Christian rulers to the Church in the hope of maintaining a good relationship and being protected, further increasing the Church’s influence – though the Church owed so much to the Crown in donations and freedoms that it gradually tended to act as its propagandist and servant. The term and principle of ‘Caesaropapism’ was accepted by the Church in the Eastern Roman (Byzantine) Empire, signalling its acceptance of subordination to an Emperor who was thought of as an ambassador of divine authority on earth. This claim of a supreme imperial being at the top of the religious scale, however, soon led to conflicts with the popes of the West, who were unhappy with such an imposition upon their contribution to the works of God, and conflicts began between sovereigns and the papacy over the limits and jurisdiction of royal and papal power – both, of course, claiming to be guided by divine mandate.

innocentiii

Perhaps one of the most famous of these clashes was between Henry II of England and his Archbishop of Canterbury, Thomas Becket. The Church’s power arguably reached its peak during the pontificate of Innocent III (1198-1216), who claimed that the Holy Roman Emperor was subordinate to him. Innocent III pushed for Emperor Otto IV to be deposed and forced Philip II of France into reinstating his divorced second wife, Ingeborg of Denmark. He also placed England under an interdict and had King John (Lackland) excommunicated in order to secure the office of Archbishop of Canterbury for his candidate, Stephen Langton. These clashes of power and interest diminished, however, when in the following centuries the papacy found itself in dire need of royal help to defeat the Conciliar Movement – a movement within the Roman Catholic Church in Western Europe in the 14c and 15c which held that final authority in spiritual matters resided with the Church as a company of Christians, embodied in a general church council, not solely with the Pope.

In other civilisations in the Middle-East, such as in Islamic territory that obeyed the laws of Islam’s sharia, conflicts between the professional religious classes and the rulers tended to be avoided since Islam has no priesthood. Religion and state were unified in the pursuit of what the Quran and the life of Muhammad qualified as the ‘pursuit of Islamic righteousness’. This however includes violent subjugation of all non-Muslims, oppression of women, obsolete traditions in direct conflict with modern human rights in all modern Western nations in relation to restrictions to women and indoctrination of violent political ideologies that are connected to the political teachings of Muhammad, mostly found in the sharia. Thus, the constant links between extremist groups promoting violence and major governments in the Middle-East with Islam as the main religious faith are a constant topic among cultured circles in the West who are against islamisation. Most Muslims however are similar in many ways, even on the borders of Europe, in Turkey similar to Saudi Arabia, most adhere and believe in the same ideology that Islam and the Sharia promotes and teaches, unsurprisingly many Islamic scholars too have turned out to have very dangerous views on Islam’s war on non-Islamic civilisations and non-Muslims. The Caliph claim was made in Istanbul by the Ottoman Sultan, or supreme head of all Sunni Muslims (Sunnis). The Shia form of Islam (Shiites) was ultimately associated and identified with the Safavid Sultans in Iran.

In Tibet, where Buddhism had flourished, monastic donations and a huge increase in the number of dedicated monks eventually gave monastic leaders who were regarded as incarnations of enlightened beings – such as the Dalai Lama and the Panchen Lama – ruling powers in their country. In China and Japan the situation differed: religious belief tended instead to reinforce loyalty to the ruler. In China, for example, Buddhism and, more particularly, Confucianism taught civic virtues, as did Buddhism and Shinto in Japan.

The Reformation

When Henry VIII of England abolished the payment of annates to Rome, denied the authority of the pope, proclaimed himself supreme head of the Church of England (1534) and went on to suppress the monasteries, the new King was simply carrying to extremes the traditions of his predecessors across Europe. Henry’s Reformation was essentially an assertion of Divine Right Kingship: complete power, and trust in his legitimacy as an extension of God’s ministry. It is worth noting that Henry VIII dealt as harshly with advocates of Lutheranism as with those who supported the pope, for he had no doctrinal differences with Rome; he simply believed in the King as the only vice-regent of God on earth. The Reformation and Counter-Reformation revived the influence and power of religion in the domestic and international socio-cultural debates of the Western world, for a time rendering the concept of a purely ‘secular’ power inconceivable and unthinkable. Yet, as the years went by, the clashes between competing religious factions reached unprecedented heights.

The wars that religion brought to humankind

In the Western Christian world, the wars of religion quickly turned into a common phenomenon or justification to shed blood and die for, and they were all based on the firm religious belief that the opposing religious civilisation had no claim to existence and even more importantly should not have any jurisdiction let alone religious or cultural control over some very specific geographical points, as these were believed to have specific powers that could be manipulated for socio-cultural advantages, for example, the ‘crusade’ against the Albigenses in Southern France was simply justified as the French crown simply trying to extend its power. The movements known to most historians as ‘The Crusades’ were in fact directed against the Islamic Middle East who had been subjugating Western Europe & Christians for hundreds of years through deadly wars where many Christian women were raped, tortured and turned into sexual slaves while many Christian leaders were beheaded others forced into Islam. Religious motives in 16c and 17c even led to violence against fellow Western Christians, and as the years went wars were endless, reaching lethal genocidal levels where whole civilisations were wiped out – the remaining joining, converting to or being enslaved by the dominant [a seemingly ruthless spectacle where the cycle of evolution may have simply been the driving force among societies who were less sophisticated and more primal – or in touch with their aggressive instincts in matters of survival and conquest].

Even with all the death in the name of religion, these events did not persuade the societies of the time to contemplate a possible atheistic lifestyle or system, and this endured even into the late 17c. However, private and secret groups such as ‘The Family of Love’ (many of whose members were close to Philip II of Spain, a leading figure of the Counter-Reformation) had started to sow seeds of doubt over the motive and purpose of identifying state power with dogmatic religious beliefs and traditions.

An Enlightened, educated and revolutionary civilisation

Secularisation began in the 18c, and it was among the Christians of the Western world in Europe that intellectuals first stood with reason without showing preference for any other school of thought, particularly religious ones; at first, the term secularisation could only be meaningfully discussed in European-derived state systems. The practice was started by individuals who came from different schools of thought and sought guidance from a more stable doctrine than religion or tradition. Others, like Holy Roman Emperor Joseph II, were dedicated Christians who disagreed with the state acting as an authority for moral policing or the regulation of conscience [quite a perceptive stance, judging by the questionable reputation and credibility – in terms of morals and ethics – of practitioners of the obsolete discipline that is today still termed ‘politics’]. More curiously still, the reasoning and, for its time, avant-garde clergy of the Church of Scotland agreed, denouncing the barbaric violence of the 17c religious wars as a blasphemous parody of Christianity. Furthermore, the growing movement fuelled and guided by the scientific and intellectual developments of the late 17c and the spirit of the Enlightenment remained sceptical about religion and its revelations; even Voltaire was a deist rather than an atheist.

religiousscalebygdppercapita

Religious Scale by GDP per capita

 

The Cult of Reason was further sponsored as a replacement for Christianity when the Jacobins under Robespierre came to power in France, and it was proposed that the Gregorian calendar be replaced by a revolutionary, republican one in which 1792 became Year I. In the same era, the federal government of the USA became, after 1783, the first ‘secular’ state in the Christian West – a development that may have owed as much to necessity as to principle, since the new society was largely composed of immigrants deeply divided by religion, many of whom had faced persecution and death in the countries they had fled, at a time when no peacekeeping conventions existed to protect them or to sanction a State on the grounds of human rights.

Là où la corruption fait rage dans le monde

Where corruption rages in the world / Source: Statista

Corruption at the top was also very much present as it still is today in politics in most non-Western societies, especially in Islamic territory where many States are strictly combined with the doctrines of Islam and its violent religious law, sharia, leading to many cases of State connections to extremist terrorists operating under the guise of Islam to protect and propagate the Islamic way of life and eventually subjugate all non-Muslims[with techniques used to abuse diplomacy and the dangerous concept of ‘political correctness’ to slowly infiltrate the law and system of other Western economies to prepare and push for Islamic doctrines to be applied on Western soil]. A situation getting worse today, as obsolete politicians lack the knowledge and education to understand and cope with the techniques of Political Islam which has long been the topic of Dr. Bill Warner’s work – to protect and prevent the atavistic and dangerous Islamisation of the West.

 

 

Logically, it seems obvious to most that 3rd world traditions would clash with First World values and individualism and today, many intellectuals and growing movements are beginning to support the complete separation of religious traditions and cultures through geographical relocation and diplomatic arrangement between States of various nations to work on solutions at the source and on location and completely stop the unsustainable and clearly abused systems of refugee relocation as Western societies are at their limits with major socio-cultural clashes and disruptions to First world national communities sparking major concerns over the security of women, children and the vulnerable older people faced with 3rd world migrants with a completely different school of thought, crowding many Western cities and locations where the never-ending clash of values, education, philosophy, language and culture seem to leave authorities contemplating at the only solution that may come with radical policies to preserve the socio-cultural make up and identity of their nations in the face of a destabilizing overgrowth of population from African and the 3rd world Islamic territories and the failure of Western States to adopt appropriate and if necessary tough measures to alleviate and balance the situation while securing their own systems and providing security for their people against socio-economic and cultural degradation.

 

 

 

The 19c

After the Napoleonic era in the early 19c, the conservative climate that followed allowed the Catholic Church to regain much of the credibility it had lost, and the identification of Church with State came to be seen by many intellectuals and movements of the Enlightenment as a bulwark against freedom and revolution. This resulted in a climate in which bourgeois liberalism rose, with its tendency towards anticlericalism and its strong belief in a new system: a secular state with no sectarian affiliations.

Throughout the 19c, France, like most major Christian Western nations, saw growing clashes between Church and State over education. In Britain, the Test Act of 1673 was repealed in 1828, so that holders of public office [including military officers and elected representatives in Parliament] no longer had to be active members of the Church of England. Reason eventually also won in France, where education became ‘compulsory, free and secular’ under the Third Republic through a series of acts passed between 1878 and 1886, with Jules Ferry as the main force spearheading the change. Elsewhere, countries such as Mexico, with an established and influential colonial Church, saw post-independence liberal opinion tend to demand the secularisation of the State.

As the 19c drew to a close, secularism and anticlericalism grew in strength and support, and many nations of the modern world witnessed a rise of various “Socialist”-influenced movements.

George-Lincoln-Rockwell-800x445

For example, the late American George L. Rockwell initiated a National Socialist movement in the US, and even gave some brave speeches about Jews and Negroes at Brown University & embraced the derogatory term “NAZI” for its shock value. Although the American agitator clearly drifted far from the refined version of Adolf Hitler’s National Socialism, which initially emphasised strong moral/ethical philosophies, shared communal values at every level of society & synchronised psychosocial unity, Rockwell’s version of National Socialism seemed more appropriately adjusted to the industrialised society of America, focusing on the identity of the average hardworking American citizen and his/her relationship to the unscrupulous economic model that is at the foundation of the “Wild West”, i.e. the USA.

AmericanWorkers

Photo: American Workers

Rockwell remains one of the only US public figures to have proposed a straightforward, practical & ethical direction in finding a harmonious solution to the Negro population problems affecting the US (which is now along with other foreign populations growing faster than the original white US population). George Lincoln Rockwell‘s vision matched that of the prominent visionary & avant-garde Black nationalist, Marcus M. Garvey, who founded the Pan-Africanism movement, the Universal Negro Improvement Association (UNIA) and the African Communities League (ACL).

MarcusGarvey

Marcus M. Garvey, Jr. (1887 – 1940)

Garvey also founded the Black Star Line, a shipping and passenger line which promoted the return of the African people to their ancestral lands. “Garveyism” wanted all people of Black African ancestry to “redeem” the nations of Africa and for the European colonial powers to leave the African continent. Marcus Garvey’s essential ideas about Africa were stated in an editorial in “Negro World” entitled “African Fundamentalism“, where he wrote: “Our [negroes] union must know no clime, boundary, or nationality…

Bloomsbury 162

Unknown Painting of a Negro man

Darwinism and National Socialism  gave society an explanation of human rights and human history, and a model for progress where religion was not vital [but optional] and thus not a major concern. In France, the Dreyfus Affair united all the radical progressive elements and the leftist movements in French society against the then major section of the Right: the Catholic Right. The separation of Church and State finally happened in 1905.

The 20c

In the USSR, right after the Russian Revolution, the development of socialist-inspired secularism could be seen in its secular state; however, a lack of vision, philosophy and fine management eventually led to its downfall.

One of the most innovative and stunning secular changes in the Muslim world came from Turkey’s founder who believed in secular western systematisation, Mustafa Kemal Ataturk, who in a revolutionary wave abolished the Sultanate and in 1924 abolished the office of the Caliph, the former spiritual head of the Ottoman Empire. Ataturk continued this avant-garde wave of secular changes by closing down all religious schools in Istanbul, and removed the Minister for Religion from the cabinet. Even more confidently, among the changes the modern and westernising founder made was the repealing of the provision in Turkish constitution that made Islam the state religion. From then, deputies would cease to take oaths in the name of Allah, but instead made a secular affirmation. However today with Turkish national representatives such as Recep Erdogan, the forward-thinking, productive and modernising changes of Ataturk have all been reversed and ruined by Erdogan’s atavistic policies that are oriented towards the Islamisation of the whole system and has even been linked and found to be unresponsive towards major anti-Western Islamic Jihadists who spread terror and violence across Western societies without any disregard for children.

The ignorant Chancellor of Germany, Angela Merkel has also played a major part in the Islamisation of Western Europe by successfully being manipulated by Islamic territories’ humanitarian departments to take in excessive numbers of Muslim refugees [by the millions] for resettlement which have mainly been healthy Muslim males with no other objectives but to find support on the welfare systems of the West while also contributing in the Islamic doctrines that promote migration [hijra] in the name of Allah for the process of Jihad [which is a process that involves multiple techniques to subjugate all non-Muslim societies to gradually allow Islam’s doctrines to take over], in the ongoing war for the Islamisation of the West. This continued clash of values makes the secularisation of Turkey by Ataturk particularly striking since Islam’s ideologies continue to control most indoctrinated minds in the vast Islamic territory that continues to promote 3rd world ideologies and show firm stance against secularisation in Muslim countries and perhaps even more shockingly, in some parts of the West where urban and uncultured low-skilled Muslim communities have amassed – a known recruiting field for many extremist Middle-East groups such as ISIS [Daech, Islamic State] and a known breeding place for rapists who in many cases justify their heinous acts as religiously valid, being the teachings of Muhammad on the treatment of non-Muslims in the Jihad war for islamic supremacy; non-muslims are deemed as spiritually ‘inferior’ beings fairly similarly to the teachings of Judaism where all non-Jews are believed to be inferior, destined to serve the Jewry and are completely disposable, perhaps more shockingly: non-Jews should even be killed.

Islam’s perfect muslim, Mohammad, conquered immense territories with his troops and took many women from a range of European countries as slaves and sexual slaves. There were about 300 000 French Christian slaves in North Africa that many great historians such as Fernand Braudel hardly spoke of, although he is considered as a specialist in the history of the western Mediterranean basin. Novelists, and other false historians, when they speak of the conquest of Algeria and the establishment of protectorates in Tunisia and Morocco, no longer speak of one of its motivations, to put an end to the slavery of Europeans in these countries. [See: Guy de Rambaud’s essay, “Les esclaves français des Maures et des Turcs.”] Slavery dates from prehistoric times, and is recorded in China from the Shang Dynasty, and in Ancient Egypt, Mesopotamia and India, as well as among the Aztecs and Incas in pre-Columbian America. Slaves were obtained from the enslavement of peoples conquered in war. The first people to be enslaved in Europe by Islamic conquerors were “Slavs” of Eastern Europe who formed a large proportion of the slave population in the early Middle Ages [some also as a punishment for crime, through voluntary self-enslavement of individuals or families for debt or by trade], hence the word “slave” is derived from them as it comes from the Latin word “sclavus” designing the enslaved Slavic man, a term that appeared in this particular sense in 937 in a Germanic diploma, then widely used in the Genoese and Venetian notarial acts from the end of the 12th century onwards to finally establish itself in the Romanesques and Germanic languages. The etymology, even more explicit in English, reveals a historical fact that is most often ignored not only by the general public, but by the historical community itself: the slave trade at the expense of the Eastern European Slavic peoples from the 8th to 18th centuries. 
There was usually a constant demand for fresh supplies of slaves from the outside as the slave population became self-reproducing, specially from the Islamic Empire. Slaves were considered as a luxury consumer item, where the possession of one created the demand for more. In Ancient Greece, all but the poorest families owned at least one slave. Alexandre Skirda, an essayist and historian of Russian origin, has devoted a book to this tragic episode of European history, which fills a gap in our documentation, yet which has not aroused much public interest because it is not given the publicity it deserves. How can we be surprised by media censorship? Skirda’s book provides the general public with irrefutable facts to show that millions of whites have been reduced to servitude, and that they have been subjected to an even more severe slave trade than the Atlantic slave trade of African negroes, since it was accompanied by castration [so that they could not impregnate any Arab-Muslim women] which led to countless deaths from this barbaric act, and that they have been sold in most cases to Muslim buyers.

Ancient Egypt

Slaves tended to be employed in two areas: as servants in the house or in large-scale industrial or construction projects [e.g. the building of the pyramids and royal palaces of ancient Egypt]. In Ancient Greece and Rome, slaves also worked as craftsmen, agricultural labourers, oarsmen in galleys, and in some rare instances as tutors for young children. In the Domesday Book, 10 per cent of the population of England are recorded as slaves. Islam approves of slavery; Muhammad and his people indeed practiced slavery and sexual slavery it is even allowed according to the writings in the Koran (Koran 33:50).

Le Marché aux esclaves - Gustave Boulanger - 1882

«Le Marché aux Esclaves» [The Slave Market] by the French Orientalist painter Gustave Boulanger (1882)

These two 3rd world religions, Judaism and Islam have doctrines of behaviour towards other groups that are rooted in hate and violence because they both instill a very strong sense of “US agains THEM [outsiders]”. Hence, the early expulsion of the Jewish communities globally much before the Nazi regime or any of its founders were even born. A practice known as holocaust done in the name of the Jewish god Baal, involved sacrificing young male babies was hated by many non-Jewish intellectuals and societies throughout Western history. However, today the atavistic process that should have been inexistent or even annihilated, is ironically happening to modern societies at the verge of being completely secularised after their independence such as in the West: the process of Islamisation.

Islamisation of the West, which was founded and evolved on Christian values, and famous deist intellectuals such as Voltaire who placed reason before irrational claims of God [although not denying the existence of powers that may be Godly], is happening at an alarming rate, as it is being forced into accepting millions of Muslim refugees known to be part of the process of Islamisation linked to major extremist and pro-Muslim association such as the Muslim Brotherhood [a group heavily linked with Barack Obama] who have links to the extreme left leaning seats in the United Nations. These dangerous extreme-left [not socialist] movements with religious affiliations have been finding ways to loosen the security of the West’s defence to infiltrate the ideologies of Islam through the process of cultural Jihad, which involves using techniques such as diplomacy, huge business ventures, and twisting arms with the unscrupulous use of ‘political correctness’ to further the purpose of Islam, aided by the act of Taqqiya, which is promoted by Islamic ideology to deceive, lie and act in whatever way it may be required to promote Islam and eventually subjugate non-Islamic societies.

One of the most recent example of complete Islamisation is Iran in 1979 where the overthrow of Shad Muhammad Reza Pahlavi ushered in an Islamic republic. This seemingly Islamic ‘success’ in the Iranian Revolution led to Islamic Fundamentalists in other undeveloped economies such as Pakistan, Egypt and Algeria to believe in their possible future, already being part of economies where governments make concessions to religious militants as they both are supporters of the ideology of Islam. In some countries, many Islamic terrorists have justified their acts as populist alternatives to what they perceive as corrupt, dictatorial regimes that lack compassion and righteousness. Others have questioned righteousness from the perspective of Islamic ideologies that involve beheading, mass terror and other inhuman practices on non-Muslims in the name of Allah as the teachings of Muhammad, a controversial prophet who consummated a marriage to a 6-year old when the latter was nine [even the practice and promotion of what most Western minds would perceive as paedophilia has seen a near complete silence from most authorities in the west for fear of repercussions such as accusations of racism, lack of political correctness or xenophobia, all forms of speech suppression that have started to raise more voices among many people who believe that Islamisation is incompatible, dangerous and unsustainable – massive causes of systematic socio-cultural and economic degradation].

 

Lhomme Papillon (1858)Caricature of Jules Didier by Claude Monet (1840 - 1926)

L’homme Papillon (Butterfly Man) / Caricature of Jules Didier by Claude Monet (1840 – 1926)

 

 

 

In order to move towards a system of management that replaces the obsolete concept of politics with government, and to reinstate credibility in decision-making based on reason and science – balanced with the right philosophy for the expectations of a given time – the mainstream mindset will have to accept reason, rather than religion, as the more fitting compass to guide a civilised society.

Although most [mentally sound individuals] should have the freedom to choose where to place their faith [religion, science, philosophy, etc], secularisation would at least ensure that the state bases its decisions on reality, logic and rationality; however the State should never forget to acknowledge the fact that religion is part of a society’s philosophical and cultural roots [e.g. Western Europe was founded on Christianity which inspired its writers and intellectuals; even if some were non religious they are undeniable products of the cultural realm of Christian thought] and is part of a society’s identity, and hence the secular State should consider religion as a matter of its own culture and identity to ensure that the mother religion is given priority over foreign ones [as most countries in the World do, for e.g. Israel and Arab States].

The State may initiate a workable but firm control over the appropriate influx of immigrants by specific religious groups to maintain and not discriminate the national cultural identity of the foundation [religion would simply be a part of culture and not a reigning authority synthesised with most departments of the state] while adapting to changes that socio-cultural economic developments and research lead to [however careful consideration over the purpose and benefits must remain of vital importance and focus].

immigrationsource

 

While the system of democracy still gives a voice to the masses, it is also fair to note that majority votes do not decide or confirm the degree of soundness of a particular thought or decision. In fact, majority votes merely record the general views, on a specific topic, of a particular group of people from a particular geographical location. In fields such as medicine, physics, chemistry and other science-based disciplines, majority votes mean nothing: only reason wins, with conclusions based on logic. A well-designed secular state might therefore include a decision-making structure in which each matter is handled according to the concept that applies to it: matters of professional disciplines approved by the relevant boards of professionals in their fields, socio-cultural decisions informed by public opinion, economic matters supervised by a board of economy, and so on. This may eventually lead to a system that relies only on democratic values and sound management, and hardly any politics [or ‘regional representatives by area’, if a better description is wanted].

The USA’s secular government has so far demonstrated to be far from perfect with major differences in opinion on a range of issues regarding military ethics during World War 2 where Eisenhower sadistically allowed thousands of Germans to die in starvation in his very own ‘death camps’, and other claims of secrecy with Churchill & Stalin in a German boycott along with the ongoing national socio-cultural conflicts with the Islamisation of the USA by the Obama regime – open promoters of Muslims and Islam in the West. The deistic Founding Fathers of the USA’s secular government would definitely be surprised at the influence of orthodox, evangelical Christianity of various kinds in the modern but over-liberal republic. Although it may if appropriate to consider the fact that secular states will somehow forever have religious roots, and while some may not be practising Christians, most of Western literature are full of biblical references. Major festivities such as Christmas have turned into a symbol of celebration and gifts for Western societies more than a religious observance, and it unites and benefits more than only Christians in many major societies of the West – especially economically for most businesses.

santahat

Obésité - la culture des gros ventres

Image: Europe: obese individuals exercising to burn their accumulated excess of calories. Obesity nowadays is generally associated with a culture of big bellies.

SAMSARA food sequence from Baraka & Samsara

ocde_obesity_update_data_2017

Obesity in the World / Source: OECD


Secularisation in everyday life in an increasingly post-Christian Europe

Nowadays most of the so-called “developed” societies of the modern westernised world are entangled in the global economy. A large section of their populations has been conditioned by various influences [e.g. the mainstream media] into seeing life from a different, sometimes mechanical perspective, with alternative ways of marking rites of passage and, more importantly, other doctrines – imposed by politically controlled governments and the media – to be guided by. This has gradually reduced the importance of the spiritual and religious dimensions of public and private life for the masses.

Munich.jpg

Munich by Harry Schiffer

Major changes in Britain included the Marriage Act of 1836, which for the first time allowed marriages in Britain to be solemnised by means other than a religious ceremony. On 1 July 1837, six hundred district register offices opened as the act came into force, along with an ongoing set of accompanying changes. From 1857, divorce was obtainable in the UK by means other than a private Act of Parliament – although not easily, and on terms far more favourable to husbands than to wives. Such changes, along with liberalised legislation on matters such as abortion, have long been opposed by the Church, especially in Catholic countries. Nowadays the number of people relying on religious institutions for guidance continues to fall, notably in developed economies whose education systems have evolved at incredible speed since the spread of Tim Berners-Lee’s World Wide Web in the early 1990s. Knowledge of science and philosophy has thus become more widespread, along with its application to modern culture – leading to a new orthodoxy.

theinternet

The triumphs of technology have also made life for the secular minds fairly comfortable and safe in the developed world; although a lot of work remains to be done at the systematic level regarding economic policies, socio-cultural and philosophical developments, beliefs and directions in some so called “Westernised” societies to counter the now dangerously increasing waves of Islamisation (See: Daniel Secomb – Muslim Immigration and the Islamic Doctrine of Hijrah)

Perhaps a painful reality for those raised in sophisticated, science-oriented philosophical circles – or with a conservative education but a liberal outlook in the West or Western-derived systems, or in the ever more secular societies of France, the UK, Germany and the rest of Western Europe – is that, measured against humanity as a whole, we are so far the ‘minority’ and are seen as the ‘exception’.

That may bring to mind the inundation of migrants from the poorly managed nations of the 3rd-world Middle East and Africa, who also play a major part in the low socio-economic birth-rate explosion and the consequent socio-cultural burden on global humanitarian budgets, expected to cause major economic and socio-cultural unrest for the West in the coming years if the situation does not change. Sad for today’s secular intellectuals is the fact that in most lesser-developed societies of the world, the great religions, the smaller ones, and a series of traditional beliefs [some as illogical and ridiculous to reason or intelligence] continue to give a reason to live, and subsequently meaning, to the lives of many communities who are born into and live in a completely different psycho-social reality fused with the religious beliefs of ancient cultures [especially in the 3rd world and/or Islamic territories].

The progressive & ethical solution to deal with the alarming situation

Since environmental engineering also applies to the human organism, maximising the potential of humans according to their best environmental (socio-cultural) fit would seem the most globally progressive philosophy. However, engineering our planet in terms of human abilities would also mean relocating populations to alleviate the stress caused by incompatibility of culture, language, identity and skills – a process in line with evolutionary logic that also fosters a harmonious human ecosystem with less tension, and thus less stress [mental health & health].

RichardDawkinsEvolution

Cooperation on matters beneficial to both states could be achieved through synchronised work from their respective locations [e.g. nature, environment, climate change, business, etc.]. This would relieve systems that lack stability due to massive population imbalance and socio-cultural conflicts caused largely by uncontrolled geographical shifts and the birth rates that follow – leading to ‘organisms’ [from an objective perspective] that do not ‘identify’ with the system they were born into but see themselves as part of an ‘external system and its school of thought’, and who mostly earn and live to promote the latter system while flooding the current one with further external and incompatible organisms.

This continuous, unregulated and unsustainable process of mass migration and mass low-SES births adds to the ongoing burden of socio-cultural conflict and economic degradation, the sole motivating factor being foreign interest [mostly from 3rd-world and developing economies] in the economic resources of Western systems while remaining ‘foreign’ and indifferent to public/civic expectations socio-culturally [due to a lack of linguistic proficiency and other low-SES complications such as quality education, linguistic acculturation, etc.]. In uncontrollable amounts, such issues reflect in most aspects of a society and have been shown to lead to systemic instability, fragmentation and low social cohesion, mostly linked to differences in belief systems created by heritage or by the indoctrination of beliefs from incompatible systems through exposure.

ksdam8f

Top Minority Languages by Country

mostusefulforeignlanguageforpersonaldevelopment

Foreign Language people consider the most useful for Personal Development

Once more, from an objective perspective and through the humble logic of observation, any system from any part of the world would face degradation with excessive sections of their population not focused in contributing in its protection, promotion, strength and stability – a simple matter of factual reasoning, an e.g. of such a statement would be “If an egg is released from a metre on hard floor, it will fall and break.”.  With geographical engineering, it seems to simply be a matter of re-assessing and replacing  ‘organic units’ with ones that are reliable in terms of stability, compatibility and long term development [experience] – a clear example of progressive innovation. A simple case of synthesising the knowledge gained from science and applying its philosophy with an understanding of human evolution to prevent further catastrophes while correcting the dangerous path of the present.

BritishPopulationOnImmigration.jpg

regionalpopulationchangeforecast


Louis Léopold Boilly - 1825 - Étude de 35 têtes d'expression 1200

Quelle Émotion: «Étude de trente-cinq têtes d’expression» par Louis Léopold Boilly (1825)

Exposition Victor Hugo – an exhibition on Victor Hugo

“History has for its sewer times like ours.” – Victor Hugo

PublicTrustInGovernmentUK

A New Era for Management may be near: UK & France rank low for trust in government


Reflections: The purpose of “History” and making sense of “Heritage”

On the question of history and heritage, many people still seem to misunderstand these terms. “History” is simply a series of events that happened throughout time. Thus, history is continuous and is made and modified every single day. There is nothing wrong with people who love wigs, period costumes of the past and high-definition television series, but it is important to remember that history is not merely a theatrical recreation of the past, because we are not prisoners of the past, nor living in it. History is not a book that ended in the 19th century, as the behaviour and fantasies of some historians seem to suggest.

Translation [EN]: “By insisting on searching for the origins, one becomes a crayfish. The historian sees backwards; he ends up believing backwards.” – Friedrich Nietzsche

History itself has been made by individuals who, throughout time, were living in their present and made the most of what was available to them to have an impact on their century and on the future. Da Vinci, for example, was not living in the past when he decided to start dissecting human corpses to learn about the human body and to discover new ways of turning medicine into a respectable discipline from the barbaric pseudoscience it was in his time. This fundamental observation shows that history is shaped by the continuous, multi-faceted process of evolution, which involves the transmission of cultural knowledge [i.e. skills of various forms]. Da Vinci, along with many other great innovators [e.g. Newton, Darwin, Pasteur, Voltaire, etc.], undeniably promotes the idea that human progress can only be achieved by making the most of the latest knowledge and skills, since the reality that each generation faces is different from that of cave-dwelling prehistoric and past generations [e.g. climate change]. If we have managed to free ourselves from the burdens and horrors of the past, along with atavistic perceptions in so many fields [e.g. science, medicine, philosophy, psychology, the arts, etc.], it is down to the exceptional human ability to think, reflect and find solutions to outdated models and beliefs by relying on present knowledge [e.g. science & philosophy].

The present generation has a wide range of knowledge, skills and opportunities that generations of the past did not have and could not even imagine [e.g. accelerated learning through modern technology, with access to a wide range of resources and a wealth of knowledge]. Our present generation has many problems to solve. It is our collective responsibility to deal with these issues and find solutions that benefit the whole of humanity [e.g. the climate disaster, the alarming epidemic of obesity, the lack of emphasis on philosophical values as a guiding compass at both the individual and the societal level, and the destructive effects of savage, unregulated capitalism, which is transforming individuals into a mass of passive and simple-minded biological organisms of mass consumption and servitude while fostering poverty & murderous financial inequalities] – that was also one of the intellectual fights of Albert Camus.

Where is faith in capitalism being lost? / Source: Statista France

Human beings will benefit greatly from developing an awareness of the petty and programmed existence that the ruthless effects of industrialisation and the organised markets of capitalism impose on them by forcefully fitting them into a range of boxes [categories] to serve the purposes of their mechanical system. Only after grasping a proper understanding of the way this mundane & repetitive mechanical system works will the enlightened individual be able to re-assess personal values, priorities and the true meaning of life [human existence]. The writings and philosophical works of Schopenhauer and Lucretius provide us with powerful elements to reflect upon in the quest to develop our consciousness and reach personal enlightenment [See: Essay // A Philosophical Critique of Schopenhauer’s “World as Will and Idea” & a Modern Lucretian View of “l’Art de Vivre”]. The present can learn and extract meaning from the past [e.g. from the great philosophers, intellectuals, romantics, sculptors, artists, researchers, innovators, legendary leaders, etc.], but if we are to shape civilisation for the better, we must always apply the skills, knowledge and wisdom gained from the past in the present, and always remember that we are not prisoners of the past.

The fact that we live in a society of individuals means that although we are a group, we are also unique from one person to the next. Modern science has also revealed that individuals differ in IQ and reasoning, which means that we are not all equal in intellectual abilities. However, those abilities cannot simply be assessed by any specific academic qualification, because academic courses are designed to assess the very specific abilities that fit their curriculum [e.g. physics, dentistry, finance, accounting, banking, coding, etc.], whereas being responsible for the lives of a whole empire, as Napoleon or Alexander the Great was, requires a far wider range of skills than any single academic course would. Hence, we can conclude from the examples of Alexander the Great and Napoleon that the most reasonable way to assess the depth and grandeur of an individual is through his discourse, i.e. what his mind can conceive and believe. We can also see from the history of legendary leaders that Alexander the Great, who almost conquered the planet, was a great admirer of Diogenes, a philosopher of the Cynic school who had absolutely no belongings and lived on the streets. For an emperor to admire such an individual, he must have based his assessment solely on the content of the philosopher’s mind and character, not on his appearance or material possessions.

Past generations also fought for individual liberties and left behind examples so that those living in the present could have a better existence. Voltaire carried the spirit of the Enlightenment and fought against the inequalities of his time by defying the atavistic social structures of the Ancien Régime, so that society at large could recognise the amazing abilities of singular, talented individuals; he did so not only for himself, but so that all those of the coming generations could acknowledge and benefit from his intellectual fight and believe in the power of the solitary individual with amazing abilities to change the world with his mind and pen. Another example that deserves emphasis is that of Napoléon, who, coming from a modest background in a foreign country, went to France and became one of the most legendary Frenchmen ever to live, going as far as legally becoming a Republican Emperor, supported by the whole nation. A senatus-consult adopted almost unanimously by the French Senate in its session of 18 May 1804 created the Empire. Modifications to the Constitution were approved unanimously by the senators, with the Senate declaring: « Article premier. Le gouvernement de la République est confié à un empereur, qui prend le titre d’Empereur des Français » [French for: “Article one. The government of the Republic is entrusted to an emperor, who takes the title of Emperor of the French”]. The application was immediate, without waiting for the results of the plebiscite on the hereditary government.

History is the never-ending tale of humanity continuously being shaped and transformed in the present; its purpose is not to entrap the present generation in the burdens of the past. Progress in history takes place because of the human desire to provide a better life for individuals of current and future generations – a force generated by the continuous process of evolution that shapes the present and impacts the future. The majority of the planet’s scientific and academic community believes in evolution. Evolution means that there is no eternal essence to anything: absolutely everything and everyone is locked in a constant process of change – evolution is never-ending!

As for “heritage”, it is what has been passed down to the present generation by past generations. Individuals of our present generation can only live a healthier life because they have had the necessary cultural transmission from past generations; e.g. no individual living in the present could re-invent electricity, a sophisticated language such as French, a personal computer or even a simple pressure cooker from scratch – that is one good way to understand the term heritage. Heritage is what we have inherited as a civilisation of collective Homo sapiens from past generations living on planet Earth. It is a fundamental part of the process of evolution, involving the transmission of the knowledge gained by human cultures in a variety of fields [e.g. language, music, literature, arts, science, mathematics, philosophy, values, etc.]. From a psychological and behavioural perspective, heritage is also the psychological structure [a cognitive system with some degree of neuro-genetic influence], the communicative pattern and the philosophical values embedded through environmental exposure [e.g. education and artistic exposure] in the mind of the individual; these manifest themselves in the form of language(s), speech, behavioural patterns and expression of various types [e.g. art in various forms].

Da Vinci – The Last Supper (1495 – 1498) – laptop

History provides human civilization with the knowledge of where we came from, where we are and where we are going; it is a collection of a series of events that started since the dawn of mankind and continues into the present while being guided by the multi-faceted forces of evolution. On the other hand, heritage is the knowledge and wisdom continuously passed down to current generations by past generations; it is embedded in the mind of the individual by direct and indirect environmental exposure of various kinds and is reflected in the individual’s psychology through perception, behaviour, language(s) and artistic tastes; heritage is also constantly changing through evolutionary changes in the wider human environment on our planet [e.g. the wealth of knowledge made accessible through the disruptive and barrier-smashing forces of the 21st century’s technological revolution].

Danny D’Purb | DPURB.com

*****


____________________________________________________


Essay // Psychology: The Concept of Self

Mis à jour le Jeudi, 2 Mars 2023

SelfRope

The concept of the self will be explored in this essay – where it comes from, what it looks like and how it influences thought and behaviour. Since self and identity are cognitive constructs that influence social interaction and perception, and are themselves partially influenced by society, the material of this essay connects to virtually all aspects of psychological science. The self is an enormously popular focus of research (e.g. Leary and Tangney, 2003; Sedikides and Brewer, 2001; Swann and Bosson, 2010). A 1997 review by Ashmore and Jussim reported 31,000 social psychological publications on the self over a two-decade period to the mid-1990s, and there is now even an International Society for Self and Identity and a scholarly journal imaginatively entitled Self and Identity.

Nikon Portrait DSC_0169 Res600

The concept of the “self” is a relatively new idea in psychological science. Roy Baumeister (1987) painted a picture of a medievally organised society in which most people’s realities were fixed and predefined by rigid social relations and legitimised by religious affiliations [family membership, social rank, birth order & place of birth, etc.], whereas the modern perspectives adopted by scholars and innovative psychologists have contradicted such outdated concepts. The idea of a complex & sophisticated individual self lurking underneath would have been difficult, if not impossible, to entertain under such atavistic assumptions about the social structures shaping an individual human organism.

However, all this changed from the 16th century onwards, with momentum gathering ever since from forces such as:

Secularisation – where the idea that fulfilment occurs in the afterlife was replaced by the idea that one should actively pursue personal fulfilment in this life

Industrialisation – where human beings were increasingly seen as individual units of production who moved from place to place with their own “portable” personal identity, no longer locked into static social structures such as the extended family

Enlightenment – where people felt they were solely responsible for choosing, organising and creating better identities for themselves by overthrowing orthodox value systems and oppressive regimes [e.g. the French revolution and the American revolution of the late 18th century]

and

Psychoanalysis – the psychoanalytic theory of the human mind unleashed the creative individual with the notion that the self was unfathomable because it lived in the depths of the unconscious [e.g. the theory of social representations, which invokes psychoanalysis as an example of how a novel idea or analysis can entirely change how people think about their world (e.g. Moscovici, 1961; see Lorenzi-Cioldi and Clémence, 2001)]. [See: Psychoanalysis: History, Foundations, Legacy, Impact & Evolution]

Jacques Lacan d'purb dpurb site web

Jacques Lacan (1901 – 1981)

Together, these and other socio-political and cultural influences led society to think about the self and identity as complex subjects; theories of self and identity propagated and flourished in this fertile soil.

As far as self and identity are concerned, one pervasive finding concerns cultural differences. The so-called “Western” world, comprising Western Europe, North America and Australasia, tends to be individualistic, whereas most other cultures, such as those of Asia, South America and Africa, are collectivist (Triandis, 1989; see also Chiu and Hong, 2007; Heine, 2010, 2012; Oyserman, Coon and Kemmelmeier, 2002). The anthropologist Geertz puts it beautifully:

“The Western conception of the person as a bounded, unique, more or less integrated, motivational and cognitive universe, a dynamic centre of awareness, emotion, judgement, and action organized into a distinctive whole and set contrastively both against other such wholes and against a social and natural background is, however incorrigible it may seem to us, a rather peculiar idea within the context of the world’s cultures.”

Geertz (1975, p.48)

conceptofself d'purb dpurb site web

Markus and Kitayama (1991) describe how those from individualistic cultures tend to have an independent self, whereas people from collectivist cultures have an interdependent self. Although in both cases people seek a clear sense of who they are, the [Western] independent self is grounded in a view of the self as autonomous, separate from other people and revealed through one’s inner thoughts and feelings. The [Eastern] interdependent self, on the other hand, tends to be grounded in one’s connection to and relationships with other people [expressed through one’s roles and relationships]. As Gao explained: ‘Self… is defined by a person’s surrounding relations, which often are derived from kinship networks and supported by cultural values based on subjective definitions of filial piety, loyalty, dignity, and integrity’ (Gao, 1996, p. 83).

From a conceptual review of the cultural context of self-conception, Vignoles, Chryssochoou and Breakwell (2000) conclude that the need for a distinctive and integrated sense of self is “likely” universal. However, the term “self-distinctiveness” carries very different assumptions in individualist and collectivist cultures. In the individualist West, separateness adds meaning and definition to the isolated, bounded self. In collectivist, Eastern cultures, the “self” is relational and gains meaning from its relations with others.

universal

One logic, proposed by analysing historical conceptions of the self alongside an account of the origins of individualist and collectivist cultures and their associated independent and interdependent self-conceptions, relates them to economic policies. The labour market is an example: mobility helped industry by viewing humans as “units” of production, expected to shift their geographical locations from places of low labour demand to those of higher demand, and to organise their lives, relationships and self-concepts around mobility and transient relationships.

New York Construction Workers Lunching on a Crossbeam

Construction workers eat their lunches atop a steel beam 800 feet above ground, at the building site of the RCA Building in Rockefeller Center.

Independence, separateness and uniqueness have become more important than connectedness and long-term maintenance of enduring relationships [values that seem to have become pillars of modern Western Labour Culture – self-conceptions reflect cultural norms that codify economic activity].

However, this logic, applied to any modern human organism, seems to offer more routes to development [personal and professional] and more options to continuously nurture the evolving concepts of self-conception through expansive social experience and cultural exploration, while being a philosophy that places more power of self-defined identity in the hands of the individual [more modern and sophisticated].

TheMan

Now that some basic concepts and origins of the “self”, along with its importance and significance to psychological science, have been covered, we are going to explore two creative ways of learning about ourselves.

Firstly, the concept of self-knowledge involves storing information about ourselves in a complex and varied way in the form of a schema: information about the self is assumed to be stored cognitively as separate, context-specific nodes, such that different contexts activate different nodes and thus different aspects of the self (Breckler, Pratkanis and McCann, 1991; Higgins, van Hook and Dorfman, 1988). The concept of self emerges from widely distributed brain activity across the medial prefrontal and medial precuneus cortex of the brain (e.g. Saxe, Moran, Scholz, and Gabrieli, 2006). According to Hazel Markus, the self-concept is neither “a singular, static, lump-like entity” nor a simple averaged view of the self – it is complex and multi-faceted, with a relatively large number of discrete self-schemas (Markus, 1977; Markus and Wurf, 1987).

masks

Most individuals tend to have clear conceptions of themselves on some dimensions but not others – generally, they are more self-schematic on dimensions that hold more meaning for them. For example, if one thinks of oneself as sophisticated and being sophisticated is of importance to oneself, then one is self-schematic on that dimension [it is part of one’s self-concept]; if not, then one is not. It is widely believed that most people have a complex self-concept with a large number of discrete self-schemas. Patricia Linville (1985, 1987; see below) has suggested that this variety helps to buffer people from life’s negative impacts by ensuring that enough self-schemas are available for the individual to maintain a sense of satisfaction. We can be strategic in the use of our self-schemas – Linville put it colourfully: “don’t put all your eggs in one cognitive basket.” Self-schemas influence information processing and behaviour in much the same way as schemas about others do (Markus and Sentis, 1982): self-schematic information is more readily noticed, is overrepresented in cognition and is associated with longer processing time.

S€lection de Vos Oeufs d'purb

Self-schemas do not only describe how we are; they are also believed to differ in that we have an array of possible selves (Markus and Nurius, 1986) – future-oriented schemas of what we would like to become, or what we fear we might become. For example, a scholar completing a postgraduate degree may contemplate a career as an artist, lecturer, writer, philosopher, politician, actor, singer, producer, entrepreneur, etc. Higgins (1987) proposed self-discrepancy theory, suggesting that we have three major types of self-schema:

  • The actual self – how we are
  • The ideal self – how we would like to be
  • The ‘ought’ self – how we think we should be

Discrepancies between the actual, ideal and/or ought selves can motivate change to reduce the discrepancy – in this way we engage in self-regulation. Furthermore, self-discrepancy and the general notion of self-regulation have been elaborated into regulatory focus theory (Higgins, 1997, 1998). This theory proposes that most individuals have two separate self-regulatory systems, termed promotion and prevention. The promotion system is concerned with the attainment of one’s hopes and aspirations – one’s ideals; those in a promotion focus adopt approach strategic means to attain their goals [e.g. promotion-focused students would seek ways to improve their grades, find new challenges and treat problems as interesting obstacles to overcome]. The prevention system is concerned with the fulfilment of one’s duties and obligations; those in a prevention focus use avoidance strategic means to attain their goals [e.g. prevention-focused students would avoid new situations or new people and concentrate on avoiding failure rather than achieving the highest possible grade].

aimhigh

Whether an individual is more promotion- or prevention-focused is believed to stem from childhood (Higgins and Silberman, 1998). A promotion focus may arise if children are habitually hugged and kissed for behaving in a desired manner and love is withdrawn as a form of discipline. A prevention focus may arise if children are encouraged to be alert to potential dangers and punished when they display undesirable behaviours. Against this background of individual differences, however, regulatory focus has also been observed to be influenced by the immediate context, for example by structuring the situation so that subjects focus on prevention or on promotion (Higgins, Roney, Crowe and Hymes, 1994). Research has also revealed that those who are promotion-focused are more likely to recall information relating to the pursuit of success by others (Higgins and Tykocinski, 1992). Lockwood and her associates found that those who are promotion-focused look for inspiration to positive role models who emphasise strategies for achieving success (Lockwood, Jordan and Kunda, 2002). Such individuals also show elevated motivation and persistence on tasks framed in terms of gains and non-gains (Shah, Higgins and Friedman, 1998). On the other side of the spectrum, individuals who are prevention-focused tend to recall information relating to the avoidance of failure by others, are most inspired by negative role models who highlight strategies for avoiding failure, and exhibit motivation and persistence on tasks framed in terms of losses and non-losses. When studied in intergroup relations, regulatory focus was found to strengthen positive emotion-related bias and behavioural tendencies towards the ingroup in the context of a measured or manipulated promotion focus (Shah, Higgins and Friedman, 1998). A prevention focus strengthens more negative emotion-related bias [haters] and behavioural tendencies against the outgroup (Shah, Brazy and Higgins, 2004).

ADLER PLANETARIUM UNIVERSE

On May 25, 2012, take off on a mind-blowing tour of the Universe in the Adler’s new space adventure Welcome to the Universe! Audiences travel a billion light-years and back in this live, guided tour unlike any other in the world. Visitors explore the breathtaking, seemingly infinite Universe as they fly through space to orbit the Moon, zoom into a canyon on Mars, and soar through the cosmic web where a million galaxies shower down upon them in the most immersive space environment ever created. (C) Adler Planetarium. (PRNewsFoto/Adler Planetarium)

The second way of learning about the concept of self is through understanding our “many selves” and multiple identities. In his book The Concept of Self, Kenneth Gergen (1971) depicts the self-concept as containing a repertoire of relatively discrete and often quite varied identities, each with a distinct body of knowledge. These identities have their origins in a vast array of different types of social relationships that form, or have formed, the anchoring points of our lives – ranging from close personal relationships with other professionals, mentors, trusted friends, etc., and roles defined by skills, fields, divisions and categories, to relationships fully or partially defined by languages, geography, cultures [sub-cultures], group values, philosophy, religion, gender and/or ethnicity. Linville (1985) also noted that individuals differ in self-complexity, in the sense that some individuals have a more diverse and extensive set of selves than others: those with many independent aspects of self have higher self-complexity than those with few, relatively similar, aspects of self. The notion of self-complexity is given a rather different emphasis by Marilynn Brewer and her colleagues (Brewer and Pierce, 2005; Roccas and Brewer, 2002), who focused on the self that is defined in group terms (social identity) and on the relationships among identities rather than the number of identities individuals have.

TheMask

They argued that individuals have a complex social identity if they have discrete social identities that do not share many attributes, but a simple social identity if they have overlapping social identities that share many attributes. Analysis can likewise operate at different levels of granularity: for example, when cognitive psychologists [cognitive psychology explores mental processes] study high-level functions such as problem solving and decision making, they often ask participants to think aloud, and the verbal protocols obtained are then analysed at different levels – e.g. the speed with which participants carry out mental processes, or, at a higher level of analysis, the strategies being used. Grant and Hogg (2012) have recently suggested, and empirically shown, that the effect on group identification and group behaviour of the number of identities one has, and of their overlap, may be better explained in terms of the general property of social identity prominence: how subjectively prominent, overall and in a specific situation, a particular identity is in one's self-concept. Social identity theorists (Tajfel and Turner, 1979) argued that there are two broad classes of identity that define different types of self:

(i) Social identity [which defines self in terms of a “particular” group membership (if any meaningful ones exist for the individual)], and

(ii) Personal identity [which defines self in terms of idiosyncratic traits and close personal relationships with specific individuals/groups (if any), which may be more than physical/social, e.g. mental: the strength of association with specific others on specific tasks/degrees].

The first main question here was asked by Brewer and Gardner (1996): ‘Who is this “we”?’ They distinguished three forms of self:

  • Individual self – based on personal traits that differentiate the self from all others
  • Relational self – based on connections and role relationships with significant/meaningful others
  • Collective self – based on group membership [which can depend on many criteria] that differentiates ‘us’ from ‘them’

More recently it has been proposed that there are four types of identity (Brewer, 2001; Chen, Boucher and Tapias, 2006):

  • Personal-based social identities – emphasising the way that group properties are internalised by individual group members as part of their self-concept
  • Relational social identities – defining the self in relation to specific other people with whom one interacts [may not be physical or social only] in a group context – corresponding to Brewer and Gardner’s (1996) relational identity and to Markus and Kitayama’s (1991) ‘interdependent self’.
  • Group-based social identities – equivalent to social identity as defined above [sense of belonging and emotional salience for a group is subjective]
  • Collective identities – referring to a process whereby  those who consider themselves as “group members” not only share self-defining attributes, but also engage in social action to forge an image of what the group stands for and how it is represented and viewed by others.

China Collective

The relational self [for those who choose to be defined by others, at least] is a concept that can be considered a particular type of collective self. As Masaki Yuki (2003) observed, some groups and cultures (notably East Asian cultures) define groups in terms of networks of relationships. Research has also revealed that women tend to place greater importance than men on their relationships with others in a group (Seeley, Gardner, Pennington and Gabriel, 2003; see also Baumeister and Sommer, 1997; Cross and Madson, 1997).

Evidence for the existence of multiple selves comes from research in which contextual factors were varied, revealing that most individuals describe themselves and behave differently in different contexts. In one experiment, participants were led to describe themselves in very different ways by being asked loaded questions, which prompted them to search their stock of self-knowledge for information that presented the self in a different light (Fazio, Effrein and Falender, 1981). Other researchers have also found, time and time again, that experimental procedures that focus on group membership lead people to act very differently from procedures that focus on individuality and interpersonal relationships. Even in ‘minimal group’ studies, in which participants are either (a) identified as individuals or (b) explicitly categorised, randomly or by some minimal or trivial criterion, as ‘group’ members (Tajfel, 1970; see Diehl, 1990), a consistent finding is that being categorised tends to lead people to discriminate against an outgroup, conform to ingroup norms, express attitudes and feelings that favour the ingroup, and indicate a sense of belonging and loyalty to the ingroup.

ManVsGorilla

Furthermore, these effects of minimal group categorisation are generally very fast and automatic (Otten and Wentura, 1999). The idea that we may have many selves and that contextual factors can bring different selves into play, has a number of ramifications. Social constructionists have suggested that the self is entirely situation-dependent. An extreme form of this position argues that we do not carry self-knowledge around in our heads as cognitive representations at all, but rather that we construct disposable selves through talk (e.g. Potter and Wetherell, 1987). A less extreme version was proposed by Penny Oakes (e.g. Oakes, Haslam and Reynolds, 1999), who does not emphasise the role of talk but still maintains that self-conception is highly context-dependent. It is argued that most people have cognitive representations of the self that they carry in their heads as organising principles for perception, categorisation and action, but that these representations are temporarily or more enduringly modified by situational factors (e.g. Abrams and Hogg, 2001; Turner, Reynolds, Haslam and Veenstra, 2006).

evolution

Although we have a diversity of relatively discrete selves, we also have a quest: to find and maintain a reasonably integrated picture of who we are. Self-conceptual coherence provides us with a continuing theme for our lives – an ‘autobiography’ that weaves our various identities and selves together into a whole person. Individuals who have highly fragmented selves (e.g. some patients suffering from schizophrenia, amnesia or Alzheimer’s disease) find it very difficult to function effectively. People use many strategies to construct a coherent sense of self (Baumeister, 1998). Here is a list of some that we have used ourselves:

Sometimes we restrict our lives to a limited set of contexts. Because different selves come into play as contexts change, restricting the contexts we enter protects us from self-conceptual clashes.

At other times, we continuously revise and integrate our ‘biographies’ to accommodate new identities, disposing of meaningless inconsistencies along the way. In effect, we rewrite our own history to make it work to our advantage (Greenwald, 1980).

We also tend to attribute some changes in the self externally to changing circumstances [e.g. educational achievements, professional circle, industry, etc.] rather than only internally, in order to construct who we are. This is an application of the actor-observer effect (Jones and Nisbett, 1972).

In other cases, we can develop self-schemas that embody a core set of attributes we feel distinguishes us from all other people: what makes us unique (Markus, 1977). We then tend to recognise these attributes disproportionately in all our selves, providing a thematic consistency that delivers a sense of a stable and unitary self (Cantor and Kihlstrom, 1987). To sum up, individuals tend to construct their lives such that their self-conceptions are both steady and coherent. A major element in the conception of self is the ability to master language and its varying degrees of granularity, which holds a major role in social identity [linguistic discourse].

[The remaining part of this essay will focus on the power and importance of language as the essence of the human being]

___________________________________________________

MicOne.jpg

The Essence of the Modern Human Being: Language, Psycholinguistics & Self-Definition

Human communication is completely different from that of other species, as it allows a virtually limitless number of ideas to be expressed by combining finite sets of elements (Hauser, Chomsky, & Fitch, 2005; Wargo, 2008). Other species [e.g. apes] do have communicative methods, but none of them compares with human language. For example, monkeys use unique warning calls for different threats, but never combine these calls to express new ideas. Similarly, birds and whales sing complex songs, but creative recombination of these sounds in the expression of new ideas has not been observed in these animals either.

As a system of symbols, language lies at the heart of social life and all its multitude of aspects in social identity. Language may even be at the essence of existence if explored through the philosopher Descartes’ most famous quote, “Cogito ergo sum”, Latin for “I think, therefore I am”, as thought is believed to be experienced and entertained in language. In expressing his discourse, Descartes based the system of science on the knowing subject in front of the world that it constructs and represents to itself – a system that would later also be the basis for many of the concepts of Jacques Lacan’s psychoanalysis.

cogito ergo sum

The act of thinking often involves an inner personal conversation with oneself, as we tend to perceive and think about the world in terms of linguistic categories. Lev Vygotsky (1962) believed that inner speech was the medium of thought and that it was interdependent with external speech [the medium of social communication]. This interdependence leads to the logical conclusion that cultural differences in language and speech are reflected in cultural differences in thought.

A more extreme version of that logic was proposed in the theory of linguistic relativity devised by the linguists Edward Sapir and Benjamin Whorf. Brown writes:

Linguistic relativity is the reverse of the view that human cognition constrains the form of language. Relativity is the view that the cognitive processes of a human being – perception, memory, inference, deduction – vary with structural characteristics – lexicon, morphology, syntax – of the language [one speaks].

rene-descartes

René Descartes was not only one of the most prominent philosophers of the 17th century but one of the most prominent in the history of Western philosophy. Often referred to as the “father of modern philosophy”, Descartes profoundly influenced intellectuals across Europe with his writings. Best known for his statement “Cogito ergo sum” (I think, therefore I am), the philosopher founded the school of rationalism, which broke with scholastic Aristotelianism. Firstly, Descartes proposed mind-body dualism, arguing that matter (the body) and intelligence (the mind) are two independent substances (metaphysical dualism); secondly, he rejected the scholastic causal model of explaining natural phenomena and replaced it with science based on observation and experiment. The philosopher spent a great part of his life in conflict with the scholastic approach (historically part of the religious order and its adherents), which still dominated thought in the early 17th century.

Les bons plans de René

René Descartes (1596-1650) / Image: Université Paris-Descartes

Communication & Language

The study of communication is therefore an enormous undertaking that draws on a wide range of disciplines, such as psychology, social psychology, sociology, linguistics, socio-linguistics, philosophy and literary criticism. Social psychologists have tended to distinguish between the study of language and the study of non-verbal communication [scholars agree both are vital to the study of communication (Ambady and Weisbuch, 2010; Holtgraves, 2010; Semin, 2007)], with an additional focus on conversation and the nature of discourse. However, the technological revolution has quickly made computer-mediated communication a dominant channel of communication for many (Birchmeier, Dietz-Uhler and Stasser, 2011; Hollingshead, 2001).

Communication in all its varieties is the essence of social interaction: when we interact, we communicate. Information is constantly being transmitted about what we sense, think and feel – even about “who we are” – and some of our “messages” are unintentional [instinctive]. Communication among humans comprises words, facial expressions, signs, gestures and touch, whether face-to-face or by phone, writing, texting, email or video. The social factors of communication are inescapable:

  • It involves our relationship with others
  • It is built upon a shared understanding of meaning
  • It is how people influence each other

Spoken languages are based on the rule-governed structuring of meaningless sounds (phonemes) into basic units of meaning (morphemes), which are further structured by morphological rules into words and by syntactic rules into sentences. The meanings of words, sentences and entire utterances are determined by semantic rules; together these rules represent “grammar”. Language has remained an incredibly and endlessly powerful medium of communication due to the limitless number of meaningful utterances it can generate through shared knowledge of morphological, syntactic and semantic rules. Meaning can be communicated by language at a number of levels, ranging from a simple utterance [a sound made by one person to another] to a locution [words placed in sequence, e.g. ‘It’s cold in this room’], to an illocution [the locution and the context in which it is made: ‘It’s cold in this room’ may be a statement, or a criticism of the institution for not providing adequate heating, or a request to close the window, or a plea to move to another room (Austin, 1962; Hall, 2000)].
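The generative power described above – a finite lexicon plus a handful of combination rules yielding a vast set of well-formed utterances – can be sketched with a toy grammar (the lexicon and the single syntactic rule are invented purely for illustration):

```python
import itertools

# A toy lexicon: five lexical items in two syntactic categories.
lexicon = {
    "N": ["the dog", "the child", "the room"],
    "V": ["sees", "likes"],
}

def sentences():
    """Apply one syntactic rule, S -> N V N, over the finite lexicon."""
    for subj, verb, obj in itertools.product(
        lexicon["N"], lexicon["V"], lexicon["N"]
    ):
        yield f"{subj} {verb} {obj}"

all_sentences = list(sentences())
print(len(all_sentences))  # 18 distinct sentences from only 5 words
print(all_sentences[0])    # the dog sees the dog
```

Adding a single recursive rule (e.g. letting a sentence embed another via ‘that’) makes the set unbounded, which is the sense in which finite means can generate limitless utterances.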

Délice Sonore M100 Master d'purb dpurb site web.jpg

Linguistic mastery therefore involves dexterity at many levels of cultural understanding and is likely to differ from one individual to another depending on personality, IQ, education and cultural proficiency in self-adjustment. It allows one to navigate the appropriate cultural context through language while knowing the appropriate choice of words in terms of “when, where, how and to whom to say it”. Mastering these opens the doors to sociolinguistics (Fishman, 1972; see also Forgas, 1985) and to the study of discourse as the basic unit of analysis (Edwards and Potter, 1992; McKinlay and McVittie, 2008; Potter and Wetherell, 1987). The philosopher John Searle (1979) identified five sorts of meaning that humans can intentionally use language to communicate; we can use language:

  • To say how things are
  • To get someone to do something
  • To express feelings and attitudes
  • To make a commitment
  • To accomplish something directly

Language is a uniquely human form of communication: as observed in the natural world, no other mammal has such an elaborate form of communication in its repertoire of survival skills. Young apes have been taught to combine basic signs in order to communicate meaningfully (Gardner and Gardner, 1971; Patterson, 1978); however, not even the most precocious ape can match the complexity of the hierarchical language structure used by a normal 3-year-old child (Limber, 1977).

BabyBoy

Language has been called a human instinct because it is so readily and universally learned by infants. At 10 months of age, little is said, but by 30 months infants speak in complete sentences and use over 500 words (Golinkoff & Hirsh-Pasek, 2006). Moreover, over this very 20-month period, the plastic infant brain reorganises itself to learn the language of its environment. At 10 months infants can distinguish the sounds of all languages, but by 30 months they can readily discriminate only those sounds to which they have been exposed (Kraus and Banai, 2007). Once the ability to discriminate particular speech sounds is lost, it is very hard for most people to regain, which is one of the reasons why most adults have difficulty learning to speak a new language without an accent.

Neuro_SpeakingAHeardWord

Processes involved in the brain when speaking a heard word. Damage to areas of the primary auditory cortex in the left temporal lobe induces language recognition problems, while damage to the same areas on the right produces deficits in processing more complex and delicate sounds [e.g. music, vocal performances, etc.]. Hence, in neuroscience, although it is not always the case, it can be generalised with a fair amount of confidence that the left hemisphere is concerned with speed [rapid temporal processing] and the right with complex frequency patterns.

Most researchers investigating the evolution of sophisticated human languages turned first to comparative studies of the vocal communications of non-human primates [e.g. apes and monkeys]. For example, vervet monkeys do not use alarm calls unless other similar monkeys are within the vicinity, and the calls are more likely to be made if the surrounding monkeys are relatives (Cheney and Seyfarth, 2005). Furthermore, chimpanzees vary the screams they produce during aggressive encounters depending on the severity of the encounter, their role in it, and which other chimpanzees can hear them (Slocombe and Zuberbuhler, 2005).

A fairly consistent pattern has emerged in the study of non-human vocal communication: there is a substantial difference between vocal production and auditory comprehension. Even the most vocal non-human primates can produce relatively few calls, yet they are capable of interpreting a wide range of other sonic patterns in their environment. This suggests that non-human primates’ ability to produce vocal language is limited not by an inability to interpret sounds, but by an inability to exert ‘fine motor control’ over their voices – only humans have this distinct ability. It also suggests that human language likely evolved from a competence in comprehension already present in our primate ancestors.

theyoungafricanape

The species-specificity of language has led some linguistic theorists to assume that an innate component of language must be unique to humans, notably Noam Chomsky (1957), who argued that the most basic universal rules of grammar are innate [a “Language Acquisition Device”] and are activated through social interaction, which enables the “code of language” to be cracked. However, other theorists believe that the basic rules of language may not be innate, as they can be learnt from prelinguistic parent-child interaction (Lock, 1978, 1980); furthermore, the meanings of utterances are so dependent on social context that they seem unlikely to be innate (Bloom, 1970; Rommetveit, 1974; see Durkin, 1995).

Motor Theory of Speech Perception

The motor theory of speech perception proposes that the perception of speech depends on the words activating the same neural circuits in the motor system that would be activated if the listener said the words (see Scott, McGettigan, and Eisner, 2009). Support for this theory has come from evidence that simply thinking about performing a particular task often activates brain areas similar to those activated by performing the action itself, and also from the discovery of mirror neurons: motor cortex neurons that fire when particular responses are either observed or performed (Fogassi and Ferrari, 2007).

Cerebellum

Broca’s area: Speech production & Language processing // Wernicke’s area: Speech Comprehension

This seems to make sense on the simple observation that Broca’s area [a speech area] is part of the left premotor cortex [a motor skills/movement area]. Since the main thesis of the motor theory of speech perception is that the motor cortex is essential in language comprehension (Andres, Olivier, and Badets, 2008; Hagoort and Levelt, 2009; Sahin et al., 2009), support comes from the fact that many functional brain-imaging studies have revealed activity in the primary or secondary motor cortex during language tests that do not involve language expression at all (i.e., speaking or writing). This may also suggest that fine linguistic skills are linked to fine motor skills. Scott, McGettigan, and Eisner (2009) compiled and evaluated recordings of activity in the motor cortex during speech perception and concluded that the motor cortex is active during conversation.

Gestural Language

Since a high degree of motor control over the vocal apparatus is present only in humans, communication in non-human primates is mainly gestural rather than vocal.

chimps-gestures

Image: Reuters

This hypothesis was tested by Pollick and de Waal (2007), who compared the gestures and vocalisations of chimpanzees. They found a highly nuanced vocabulary of hand gestures used in numerous situations and in a variety of combinations, and concluded that chimpanzees’ gestures were much more comparable to human language than were their vocalisations. Could this suggest that primate gesture was a critical stage in the evolution of human language (Corballis, 2003)?

On this same note, we may return to the already mentioned “theory of linguistic relativity” (Whorf, 1956), which states that our internalised cognitions as human beings, i.e. perception, memory, inference and deduction, vary with the structural characteristics, i.e. lexicon, morphology and syntax, of the language we speak [cultural influence shapes our thoughts].

Thoughts

In support of Sapir and Whorf’s position, Diederik Stapel and Gün Semin (2007) refer poetically to the “magic spell of language” and report research showing how the different categories in the language we speak guide our observations in particular ways: we tend to use the categories of our language to attend to different aspects of reality. The strong version of the Sapir-Whorf hypothesis is that language entirely determines thought, so that those who speak different languages actually perceive the world in entirely different ways and effectively live in entirely different cognitive-perceptual universes. However extreme this suggestion may seem, a good argument against it is to consider whether the fact that English distinguishes between living and non-living things means that the Hopi of North America, whose language does not, cannot distinguish between a bee and an aeroplane. Japanese personal pronouns differentiate between interpersonal relationships more subtly than English personal pronouns do; does this mean that English speakers cannot tell the difference between relationships? [What about Chong, Khan, Balaraggoo, Tyrone, Vodkadinov, Jacob, Obatemba M’benge and Boringski – where would you attribute their skills in the former question?]

The strong form of the Sapir-Whorf hypothesis is generally regarded as too extreme to be tenable, so a weak form seems to better accord with the facts (Hoffman, Lau and Johnson, 1986): language does not determine thought but allows for the communication of aspects of the physical or social environment deemed important for the community. Therefore, in a situation where expertise in snow is essential, one would likely develop a rich vocabulary around the subject. Similarly, should one feel the need to have a connoisseur’s discussion about fine wines, the language of the wine masters would be a vital requisite for interacting with flawless granularity in the expression of finer tasting experiences.

EQ2

Although language may not determine thought, its limitations within a culture may entrap those ‘cultured’ to it through a limited range of available words. Logically, if there are no words to express a particular thought or experience, we are unlikely to be able to think about it. Perhaps for this reason, and in the spirit of freedom of expression and human emancipation, a huge borrowing of words across languages has been noted over the years: for example, English has borrowed Zeitgeist from German, raison d’être from French, aficionado from Spanish and verandah from Hindi. The concept is powerfully illustrated in George Orwell’s novel 1984, in which a totalitarian regime based on Stalin’s Soviet Union imposes its own highly restricted language, “Newspeak”, designed specifically to prevent people from even thinking non-orthodox or heretical thoughts, because the relevant words do not exist.

Further evidence of the impact of language on thought-restriction comes from research led by Andrea Carnaghi and her colleagues (Carnaghi, Maass, Gresta, Bianchi, Cadinu and Arcuri, 2008). In German, Italian and some other Indo-European languages [such as English], nouns and adjectives can have different effects on how we perceive people. Compare ‘Mark is gay’ [using an adjective] with ‘Mark is a gay’ [using a noun]. When describing an individual, the use of an adjective suggests an attribute of that individual, whereas a noun implies a social group and membership of a ‘gay’ group. The latter description, with a noun, is more likely to invoke further stereotypic/prejudicial inferences and an associated process of essentialism (e.g. Haslam, Rothschild and Ernst, 1998) that maps attributes onto invariant, often bio-genetic properties of the particular social category/group.

Paralanguage and speech style

The impact of language on communication depends not only on what is said but also on how it is said. Paralanguage refers to all the non-linguistic accompaniments of speech – volume, stress, pitch, speed, tone of voice, pauses, throat clearing, grunts and sighs (Knapp, 1978; Tra