Essay // Clinical Psychology: Controversies that surround modern day mental health practice


Modern-day mental health practice can be defined as the application of the four main schools of thought that dominate the field of psychology [biological, psychodynamic, cognitive-behavioural and systemic] in the clinical setting, guided by strict diagnostic criteria that name and categorise packaged sets of behaviours according to their characteristic behaviour, aetiology and epidemiology. While these four schools of thought have driven the development and ongoing evolution of psychology, each also has limitations when applied to different types of psychological cases: some are effective in treating particular disorders, while others are barely effective or of questionable value. Applying and integrating these four schools of thought with new, evidence-based theories of psychological constructs and disorders is producing major innovations in psychology; however, given each approach's limitations, controversies over the validity of their interpretations and the efficacy of their applied doctrines remain a constant topic of debate among scholars and clinicians.

One of the main controversies surrounding modern-day mental health practice is the medicalisation of psychological disorders, a tradition imported from medicine that contradicts a founding philosophy of psychology, which was originally established to study the "mind" rather than the physical characteristics of the brain as an organ. Evidence also suggests that psychological problems are not caused exclusively by organic factors: in anxiety, depression and schizophrenia, people with a genetic vulnerability to these disorders tend to develop them only when exposed to particular stresses in their environment (Hankin & Abele, 2005). On the other side of the argument, however, evidence has also linked genetic and neurobiological abnormalities to psychological difficulties and disorders, and integrated approaches are therefore now used in a variety of assessments when treating patients affected by psychological disorders.

On the theme of medicalisation, the debate over eating disorders has produced one of the field's major controversies, between advocates of the biomedical conceptualisation of eating disorders and the feminist position (Maine & Bunnell, 2010). The former sees an individual woman as a patient with a debilitating disease, in need of a cure; the feminist position views eating disorders as gender-specific conditions in which the woman is a victim of socio-cultural pressures generated by a male-dominated society and a hedonistic economic reality focused on the pursuit of the thin ideal. An important distinction should be made here for the benefit of patients, since the feminist view may underestimate the fact that, in obesity and in the emaciation associated with eating disorders, patients are at severe risk of medical complications such as growth retardation, osteoporosis, gastrointestinal bleeding, dehydration, electrolyte abnormalities and, in chronic cases, cardiac arrest. The social-constructivist feminist perspective risks reducing eating disorders to an image debate of "fat" versus "thin", which may normalise obesity and destructive eating habits and in turn lead to further medical complications requiring surgical intervention. It may be more ethical to acknowledge that the obesity and emaciation associated with eating disorders are major health issues that precede complications such as diabetes, cancer and high blood pressure; they should not be confused with social stigma about body image, but seen as signs of poor health and lifestyle that call for giving patients the medical and psychological help they need to adjust to healthier patterns of life, supported by dietary and nutritional education.

Secondly, the medicalisation of anxiety disorders as distinct medical and psychological conditions casts the biological model mentioned above in a less favourable light. A mass market of pharmacological treatments has been favoured for being more convenient and less time-consuming, yet patients who are already in a state of distress, shock, disbelief or confusion may feel disempowered and hopeless when treated as victims of an uncontrollable illness requiring pharmacological treatment. Diazepam (Valium) and other highly addictive benzodiazepines have been prescribed for years to treat anxiety disorders, with their long-term side effects trivialised in what Breggin (1991) calls the arrogant act of medicalising fear and courage. Critics of the medicalisation of experience argue that if patients are helped to understand that panic attacks develop from the misinterpretation of bodily sensations and from hyperventilation, this knowledge, together with their own courage, may strengthen them to take control of their fear. Research has also shown that patients educated in cognitive-behavioural techniques learn to use problem-solving and to develop other skills they lack (e.g. social skills that help them build meaningful, lasting relationships while letting go of psychosocial burdens) in order to reappraise situations that formerly brought distress.



The tragic death of one of the most talented vocalists on the planet, Chris Cornell, sent a shock through the arts world. Reports revealed that the gifted artist was on Lorazepam [a benzodiazepine sold under the name Ativan and used in the treatment of anxiety disorders], a substance known to heighten the risk of suicide in those suffering from depression; a recent investigation (Bushnell et al., 2017) also found no meaningful clinical benefit from adding benzodiazepines at treatment initiation. To help prevent such tragedies, more emphasis could be placed on "the mind", with clear guidance on the "thinking styles" (cognitive scripts) to adopt in protecting one's own psyche. Simple foundations based on psychological logic could be propagated through education to help people understand their uniqueness as organisms while protecting their psyche from the influence of external environmental factors beyond their control [e.g. biased negativity, or the uninformed, prejudicial comments of meaningless acquaintances]. This would mean acknowledging that, as long as individuals remain within the boundaries of the law, they are free to live the life of their choice; that external factors affect one's psyche only if attention is given to them; and that selectively ignoring parts of the environment is an acquired skill vital to maintaining sanity, along with the ability to select experiences that are positive and progressive [while discarding negative ones] within the context and theme of one's chosen lifestyle.


An artist many might consider the Frédéric Chopin and the Édouard Manet of rock, composing with his heart and painting with his voice, the enigmatic vocalist Chris Cornell, known for timeless titles such as "What You Are", "Like A Stone", "The Last Remaining Light", "Exploder", "Be Yourself", "Getaway Car" and "Dandelion", left a hole in the hearts of millions touched by his work. His tragic death is a reminder that further research is needed into the thought structure of artistic individuals, whose subjective psychological reality is likely deeper and more complex than the average person's. An approach focusing on the "mind" rather than the "behaviour or brain", in the tradition of Sigmund Freud, might reveal the granularity of their psyche: whether their suicidal decisions are made in full awareness, motivated by a reality they consider inadequate for their state of consciousness and intellect, and whether interventions that restructure their psychosocial patterns and exposure [to prevent the burden of stress] might be more individualised and appropriate.

This would also shift the focus to the individual's mind, courage and ability to handle the world while maintaining a stable sense of self and resilience, rather than reducing them to biological organisms having their neurochemistry savagely altered by powerful chemical substances known to affect individuals differently, with dangerous and sometimes fatal outcomes.


The same would apply to sufferers of post-traumatic stress disorder, who would benefit from a non-pharmacological, empowering intervention to manage and take control of recurrent intrusive and distressing memories. It may be useful to study fear, distress and courage as normal psychological processes varying along a continuum from one individual to another, with those at the extreme ends of the scale considered for psychological intervention.

Similarly, antidepressant medication used to treat depression remains controversial for its questionable efficacy and its side effects. The high effectiveness of SSRIs reported in academic journals was largely an artefact of publication bias: trials with positive results were published, while those finding antidepressants no more effective than placebo were rejected. Meta-analyses have also found the effects of TCAs and SSRIs to be negligible in mild to moderate depression, though effective in severe depression (Fournier et al., 2010). The side effects of antidepressants can be risky and dangerous: loss of sexual desire and impotence, weight gain, nausea, sedation or activation, and dizziness are among the more disturbing ones, varying with the type of antidepressant; for depressed pregnant women, the health risks may extend to their offspring. More dangerous antidepressants such as MAOIs are prescribed only to patients who can follow a strict diet excluding foods containing tyramine (e.g. cheese), to prevent high blood pressure and hypertensive crises. Although meta-analyses suggest the benefits may outweigh the risks, an increased risk of suicide has also been noted among patients under 25 (Bridge et al., 2007).


Edouard Manet (1832 – 1883), "Le Suicidé"

Electroconvulsive therapy (ECT) has also sparked major controversy as a primitive, dangerous and unscientific practice, given the brevity of its effects and its negative side effects on memory (Read & Bentall, 2010). A thorough review of studies of ECT's effectiveness and side effects [retrograde and anterograde amnesia] found it effective only briefly in treating severe depression [in cases unresponsive to psychological treatment], with support coming mainly from psychiatrists with a vested interest in proving ECT's effectiveness. ECT has also been associated with a slight but significant risk of death, and a qualitative study of patients' negative experiences concluded that for some, ECT leads to fear, shame and humiliation, and reinforces the experiences of worthlessness and helplessness associated with depression.


Medicalisation has also led to controversy over the diagnosis of schizophrenia, a condition classified as a disease by the World Health Organization and ranked second only to cardiovascular disease in overall international disease burden (Murray & Lopez, 1996). Diagnosis is held to be part of best practice in the patient's "best" interest; however, Thomas Szasz (2010) forcefully characterised diagnosis as an act of oppression, since it may pave the way for involuntary hospitalisation, in which a deviant, maladjusted or poorly educated person is subjected to "control" processes of which they are not fully aware. This has been proposed as a possible explanation for the greater rates of schizophrenia diagnosis among ethnic minorities (particularly African Americans and low-SES groups in the US). Many support this view, arguing that schizophrenia as a distinct category may not be a fully valid diagnosis but a constructed fabrication that stigmatises disadvantaged or poorly educated people; while this construct may help shape "unacceptable behaviour" and protect citizens and society, some people with only moderate symptoms may also be forcibly hospitalised. Accordingly, schizophrenia is no longer treated as a single, definite disorder: it has been revised into a spectrum of related conditions, the schizophrenia spectrum. In the treatment of schizophrenia, medicalisation has also led to psychotherapy being evaluated as a possibly ineffective treatment (Lehman & Steinwachs, 1998). Freud and others in his discipline acknowledged that treating psychosis with psychotherapy is problematic, since psychotic individuals, unlike neurotic patients, tend not to develop transference [the redirection onto the analyst of hidden feelings, defences and anxieties].
For personality disorders, addictions and other severe mental health problems, medicalisation has prompted the development of alternative methods of treatment which, unlike traditional authoritarian and hierarchically organised inpatient mental health settings, are run on more democratic lines, with service users encouraged to take an active role in their rehabilitation rather than remain passive recipients of treatment.


Therapeutic communities have proved effective in the long-term treatment of difficult patients with severe personality disorders, with outcomes improving as treatment lengthens; they are believed to lead to improvements in mental health and interpersonal functioning. For drug misuse, the clinical assumption that users' attempts to quit follow conscious guidance and coherent plans should be revised: no evidence supports it, and more evidence suggests that unconscious processes, classical and operant conditioning, erratic impulses, and highly specific environmental cues shape the development and cessation of drug use (West, 2006). According to West, interventions should not encourage adolescents to think about what 'stage' they are in, nor be matched to a stage; instead, the maximum tolerable pressure should be put on the young person to cease drug use. This contradicts the stages of change model (DiClemente, 2003; Prochaska et al., 1992), in which 30 days are allocated to each stage [pre-contemplation, contemplation, action and maintenance] on the basis of no evidence. Harm-reduction programmes such as needle exchange, safe injection sites and the provision of free quality-testing of MDMA sold at raves remain controversial: some believe they prevent mortality and morbidity (Marlatt & Witkiewitz, 2010), while others argue they send the message that hard drug use [such as heroin] may be acceptable.

The second major controversy in modern-day mental health practice is the "person or context" debate. Many in the field still question the validity of focusing on context, since it shifts attention away from the individual characteristics of the patient, and ask whether the focus should shift depending on the disorder and the patient's age. In the treatment of childhood disorders, for example, if difficulties are assumed to be individual 'psychiatric' illnesses, the risk is that the focus falls solely on the child rather than the broader social environment, leading to medical treatment and individual therapy that neglect important risk factors for children, who are strongly influenced by their social environment (teacher, school and wider social context). This may not hold for some adults, who value a sense of autonomy more than wider social contexts to which they feel no connection, interest or affinity. In contrast to the autonomous adult, other childhood behaviour disorders such as oppositional defiant disorder and conduct disorder may be particularly problematic to treat, since the major risk factors to be addressed are social, requiring interventions such as parent training, family therapy, multisystemic therapy and treatment foster care. For ADHD, the heavy emphasis on medication is dangerous: the benefits appear limited to about three years (Swanson & Volkow, 2009), while growth and cardiovascular functioning may be affected, leading to somatic complaints such as loss of appetite, headaches, insomnia and tics, present in 5-12% of cases (Breggin, 2001; Paykina et al., 2007; Rapport & Moffitt, 2002).

Another interesting argument came from the Scottish psychiatrist and psychoanalyst R. D. Laing (2009), who in the 1960s and 1970s opposed the view that schizophrenia is a genetically based medical condition requiring treatment with antipsychotic medication. His dimensional approach led him to view schizophrenia as a 'sane reaction to an insane situation', with the contents of psychotic symptoms seen simply as psychological responses to complex, confusing, conflicting and powerful parental injunctions that left no scope for more rational and adaptive modes of expression. Laing therefore proposed that treatment should create a context that facilitated insight into the complex family process surrounding patients with schizophrenia [e.g. poor housing, low SES, deviant parents with drug or alcohol problems, over-involved family members who maintain the patient's stress, sexual deviance, incest, financial instability, poor educational motivation, poor emotional education, lack of problem-solving skills, poor nutrition, etc.] and into the psychotic response to it. Context seems at least partially important where a patient's delusions and hallucinations are linked, their interpretation being the client's response to conflicting parental injunctions. For Laing, the experience of psychosis and recovery was a process from which the individual could emerge stronger, with new and valuable insights into the solutions to their problems. This, however, has not been supported by evidence or subsequent research. In contrast, strong scientific evidence points to the importance of a more individual, client-centred approach, with inherited neurobiological defects as a major focus for the role they play in schizophrenia, and antipsychotic medication reducing symptoms in two-thirds of the psychotic patients affected (Ritsner & Gottesman, 2011; Tandon et al., 2010).
Research has nonetheless supported the hypothesis that the family affects the psychotic process and that psychotherapy has a place in the management of psychosis: personal trauma, including child abuse, increases the risk of psychosis; stressful life events, including those within the family, can precipitate an episode; and high levels of family criticism, hostility and emotional over-involvement increase the risk of relapse (Bebbington & Kuipers, 2008; Hooley, 2007; Shevlin et al., 2008). For those with a strong sense of family and heavily involved peers, family therapy delays relapse in troubled families characterised by "extreme" levels of expressed emotion; and cognitive behaviour therapy, which stresses that psychotic symptoms are understandable and on a continuum with normal experience, can help patients control those symptoms (Tandon et al., 2010) and rebuild their lives and identity, managing their social circle intelligently by differentiating between types of relationship and expectation.


The third and last controversy to be addressed is the ongoing debate in clinical psychology over the categorisation of psychological disorders, where many argue for a dimensional outlook that offers greater precision in diagnosis and a more scientific approach. For childhood behaviour disorders, there is ongoing debate over whether they should be viewed and classified in categorical or dimensional terms. While the DSM is based on rigid categories, most empirical studies support a dimensional outlook. Factor-analytic studies consistently show that common childhood difficulties fall along two dimensions, internalising and externalising behaviour, which are normally distributed within the population (Achenbach, 2009). Young children diagnosed with oppositional defiant disorder (ODD), conduct disorder or ADHD form a subgroup of cases with extreme externalising behaviour problems, while those with anxiety or depressive disorders show extreme internalising behaviour problems (Carr, 2006a). By the same dimensional logic, children diagnosed with intellectual disability fall at the lower end of the continuum of intelligence, a trait also normally distributed within the population (Carr et al., 2007). The dimensional approach is not only more scientific but also less stigmatising and more rational about human uniqueness. It has also strengthened the movement critical of qualifying psychological difficulties such as ADHD, conduct disorder and other DSM diagnoses as 'real psychiatric illnesses', raising the question of whether they are invalid fabrications or spurious social constructions (Kutchins & Kirk, 1999).
Those who trust the evidence for the dimensionality of childhood disorders argue that these may simply be traits distributed normally in the population, with some cases falling at the extreme ends; those who point to the financial motives of the pharmaceutical industry argue that they are spurious social constructions. The latter seems unethical, but it is part of the decadent and immoral economic reality we have allowed to exist. For parents and for health and educational professionals, it is clear that the pharmaceutical industry and governments may all gain from conceptualising children's psychological difficulties as 'real psychiatric illnesses'. Some schools, or uncaring parents, may prefer children to receive a diagnosis of ADHD with stimulant therapy because they have difficulty meeting those children's needs for intellectual stimulation, nurturance and clear limit-setting, and the children in their care consequently become more aggressive and disruptive.

In the case of schizophrenia, the dimensional approach has also produced the schizotypy construct as an alternative to the prevailing categorical conceptualisation (Lenzenweger, 2010). In contrast to the categorical view, based on Kraepelin's (1899) work and used in the DSM, which treats schizophrenia as a discrete diagnostic category, the schizotypy construct proposes that the anomalous sensory experiences, odd beliefs and disorganised thinking that take extreme form in schizophrenia as hallucinations, delusions and thought disorder are simply on a continuum with normal experience [i.e. present in all 'normal' people but peaking in those diagnosed], a position originally advocated by Bleuler (1911). Research measures support the dimensional construct of schizotypy (Lenzenweger, 2010), with the continuum possibly composed of sub-dimensions ranging from normal to psychotic experience. Schizotypy is heritable, and people with high schizotypy scores who are not psychotic show the attentional, eye-movement and other neuropsychological abnormalities associated with schizophrenia. The dimensional approach has also sharpened the distinction between schizophrenia and 'split personality': some 40% of people in the UK have been found to equate split or multiple personality with schizophrenia, as popular culture often does, yet schizophrenia clearly does not refer to such characteristics.


The closest equivalent to split personality is dissociative identity disorder (DID), whose central feature is the apparent existence of two or more distinct personalities within the same individual, only one of which is evident at a time. Each personality (or alter) is distinct, with its own memories, behaviour and interpersonal style; in most cases the host personality is unaware of the existence of the alters, which vary in their knowledge of each other. Evidence suggests that the capacity to dissociate is normally distributed within the population and is an attribute many people use to manage their own lives and networks. Those with a high degree of this trait may cope with trauma in early childhood (such as child abuse or exposure to extreme graphic violence) by dissociating their consciousness from the experience, entering a trance-like state. This dissociative habit is negatively reinforced (strengthened) as an effective distress-reducing coping strategy over repeated traumas, since it brings relief during trauma exposure. Eventually enough experiences become dissociated to constitute a separate personality, which may be activated in later life at times of stress or trauma, or through suggestion in hypnotic psychotherapeutic situations. Treatment typically involves helping clients integrate the multiple personalities into a single personality and develop non-dissociative strategies for dealing with stress [e.g. arguments with colleagues, a new manager, divorce, adolescents leaving home for their studies, a partner with alcohol problems, over-involved family members], enabling them to face difficult situations with problem-solving skills, reach firm resolutions and have their views understood.
Unlike schizophrenia, the core symptoms of DID are not treated with psychotropic medication; treatment instead involves psychological education in which patients learn the skill of mentalising [understanding their own state of mind and that of others].


Finally, with personality disorders, the dimensional approach has led to trait theory, which conceptualises important aspects of behaviour and experience in terms of a limited number of dimensions. Any given trait is believed to be normally distributed in the population: on introversion-extraversion, for example, most people show a moderate level of the trait, while those at the extremes would show the sorts of difficulties described in the DSM. Normal people thus differ from the abnormal only in the degree to which they show particular traits. In recent years trait theory has come to be dominated by the five-factor theory (McCrae & Costa, 2008), whose dimensions are neuroticism, extraversion, openness to experience, agreeableness and conscientiousness. There is evidence for the heritability of all the factors in the five-factor model except agreeableness, which seems to be determined predominantly by one's environment (Costa & Widiger, 1994). Thomas Widiger has proposed the five-factor model as an alternative system for describing personality disorders (Widiger & Mullins-Sweatt, 2010), arguing that trait theory offers a more scientifically useful approach to assessment, since its questionnaires have good psychometric properties (De Raad & Perugini, 2002): they are reliable and valid, and have population norms. Compared with categorical classification systems, trait models offer a more parsimonious way of describing patients with rigid, dysfunctional behaviour patterns, and in turn a more parsimonious way of conceptualising the development of effective treatments.
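The dimensional logic running through this section, in which most people sit near the middle of a normally distributed trait and clinical difficulty appears only at the extremes, can be sketched as a toy calculation. The scale mean, standard deviation, scores and the ±1.96 SD cut-off below are all hypothetical, chosen purely to contrast a dimensional reading of a trait score with a categorical yes/no diagnosis:

```python
from statistics import NormalDist

# Hypothetical population parameters for a trait scale (e.g. extraversion),
# assumed, for illustration only, to be normally distributed.
POP_MEAN, POP_SD = 50.0, 10.0

def dimensional_view(score, mean=POP_MEAN, sd=POP_SD):
    """Return the z-score and population percentile for a raw scale score."""
    z = (score - mean) / sd
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

def categorical_view(score, cutoff_z=1.96, mean=POP_MEAN, sd=POP_SD):
    """A DSM-style decision: 'disordered' only beyond a fixed cut-off."""
    return (score - mean) / sd > cutoff_z

# Two people one scale point apart straddle the categorical boundary,
# yet the dimensional view shows them as almost identical.
for score in (69, 71):
    z, pct = dimensional_view(score)
    print(score, round(z, 2), round(pct, 1), categorical_view(score))
```

The two scores differ trivially on the continuum (z = 1.9 versus 2.1) but fall on opposite sides of the categorical cut-off, which is exactly the objection the dimensional camp raises against rigid diagnostic categories.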


Photo: The Promise of Dawn (J.Hawkes)

The major controversies in modern-day mental health practice revolve around the precision and validity of psychological constructs as illnesses, and around the stigma they may attach to those who suffer from them. The constant search for better, more modern interpretations and explanations of their characteristics and treatment seems bound to transform the field as it takes a more dimensional turn; with a new generation of psychologists applying the rules with an open mind and a creative outlook on new perspectives and methods, psychology looks set on a positively progressive course.


“A great aggregation of men, sane in mind and warm in heart, creates a moral conscience that is known as a nation” – Ernest Renan / Source: Université Paris 1 Panthéon-Sorbonne


Arthur Hughes (1832 – 1915), "A Music Party"



  1. Achenbach, T. M. (2009). ASEBA: Development, findings, theory, and applications. Burlington, VT: University of Vermont Research Centre for Children, Youth and Families.
  2. Bleuler, E. (1911). Dementia praecox or the group of schizophrenias. New York: International University Press.
  3. Breggin (1991). Toxic psychiatry. London: Harper Collins.
  4. Breggin, P. (2001). Talking back to Ritalin: What doctors aren’t telling you about stimulants and ADHD. New York: Da Capo Press.
  5. Bridge, J. A., Iyengar, S., & Salary, C. B. (2007). Clinical response and risk for reported suicidal ideation and suicide attempts in paediatric antidepressant treatment: A meta-analysis of randomized controlled trials. Journal of the American Medical Association, 297, 1683-1696.
  6. Bushnell, G., Stürmer, T., Gaynes, B., Pate, V., & Miller, M. (2017). Simultaneous antidepressant and benzodiazepine new use and subsequent long-term benzodiazepine use in adults with depression, United States, 2001-2014. JAMA Psychiatry, 74(7), 747.
  7. Carr, A. (2006a). Handbook of child and adolescent clinical psychology: A contextual approach (second edition). London: Routledge.
  8. Carr, A. (2012). Clinical psychology. 1st ed. New York: Routledge.
  9. Carr, A., O’Reilly, G., Walsh, P., & McEvoy, J. (2007). Handbook of intellectual disability and clinical psychology practice. London: Brunner-Routledge.
  10. Costa, P. & Widiger, T. (1994). Personality disorders and the five factor model of personality. Washington, DC: APA.
  11. De Raad, B., & Perugini, M. (2002). Big five assessment. Bern, Switzerland: Hogrefe & Huber.
  12. DiClemente, C. (2003). Addiction and change. New York: Guilford.
  13. Fournier, J., DeRubeis, R., Hollon, S., Dimidjian, S., Amsterdam, J., & Shelton, R. (2010). Antidepressant drug effects and depression severity. Journal of the American Medical Association, 303, 47-53.
  14. Hankin, B., & Abele, J. (2005). Developmental psychopathology: A vulnerability-stress perspective. Thousand Oaks, CA: Sage.
  15. Kraepelin, E. (1899). Psychiatrie (sixth edition). Leipzig, Germany: Barth.
  16. Kutchins, H. & Kirk, S. (1999). Making us crazy: DSM – The psychiatric bible and the creation of mental disorders. New York: Constable.
  17. Laing, R. D. (2009). Selected works of R. D. Laing, Volumes 1-7. (Vol. 1. The divided self. Vol 2. Self and others. Vol. 3. Reason and violence. Vol. 4. Sanity and madness in the family. Vol. 5. The politics of the family. Vol. 6. Interpersonal Perception. Vol. 7. Knots.) London: Routledge.
  18. Lehman, A., & Steinwachs, D. (1998). At issue: Translating research into practice: The Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations. Schizophrenia Bulletin, 2, 1-10.
  19. Lenzenweger, M. (2010). Schizotypy and schizophrenia. New York: Guilford.
  20. Maine, M. & Bunnell, D. (2010). A perfect biopsychosocial storm: Gender, culture, and eating disorders. In M. Maine, B. McGilley, & D. Bunnell (Eds.), Treatment of eating disorders: Bridging the research-practice gap (pp. 3-16). San Diego, CA: Elsevier.
  21. Marlatt, G. A., & Witkiewitz, K. (2010). Update on harm-reduction policy and intervention research. Annual Review of Clinical Psychology, 6, 591-606.
  22. McCrae, R. R., & Costa, P. T., Jr. (2008). The five-factor theory of personality. In O. P. John, R. W. Robins, & L. A. Pervin (Eds.), Handbook of personality: Theory and research (third edition, pp. 159-181). New York: Guilford Press.
  23. Murray, C., & Lopez, A. (1996). The global burden of disease: A comprehensive assessment of mortality and disability from diseases, injuries and risk factors in 1990 and projected to 2020. Cambridge, MA: Harvard University Press.
  24. Paykina, N., Greenhill, L., & Gorman, J. (2007). Pharmacological treatments for attention-deficit hyperactivity disorder. In P. Nathan & J. Gorman (Eds.), A guide to treatments that work (Third Edition, pp.29-70). New York: Oxford University Press.
  25. Prochaska, J., DiClemente, C., & Norcross, J. (1992). In search of how people change: Applications to addictive behaviours. American Psychologist, 47, 1102-1114.
  26. Rapport, M. & Moffitt, C. (2002). Attention deficit/hyperactivity disorder and methylphenidate: A review of height/weight, cardiovascular, and somatic complaint side effects. Clinical Psychology Review, 22, 1107-1131.
  27. Read, J., & Bentall, R. (2010). The effectiveness of electroconvulsive therapy: A literature review. Epidemiologia e Psichiatria Sociale, 19, 333-347.
  28. Ritsner, M., & Gottesman, I. (2011). The schizophrenia construct after 100 years of challenges. In M. Ritsner (Ed.), Handbook of schizophrenia spectrum disorders, Volume I: Conceptual issues and neurobiological advances (pp. 1-44). New York: Springer.
  29. Swanson, J. M., & Volkow, N. D. (2009). Psychopharmacology: Concepts and opinions about the use of stimulant medications. Journal of Child Psychology and Psychiatry, 50 (1-2), 180-193.
  30. Szasz, T. (2010). Psychiatry, anti-psychiatry, critical psychiatry: What do these terms mean? Philosophy, Psychiatry, & Psychology, 17, 229-232.
  31. Tandon, R., Nasrallah, H. A., & Keshavan, M. S. (2010). Schizophrenia, “just the facts” 5. Treatment and prevention past, present and future. Schizophrenia Research, 122, 1-23.
  32. West, R. (2006). Theory of Addiction. Oxford: Blackwell.
  33. Widiger, T.A., & Mullins-Sweatt, S. N. (2010). Clinical utility of a dimensional model of personality disorder. Professional Psychology: Research and Practice, 41, 488-494.

Updated 8th of August 2017 | Danny J. D’Purb |



Essay // History on Western Philosophy, Religious cultures, Science, Medicine & Secularisation


Part I: Western Philosophy

The fact that philosophy’s focus has never remained static over time makes its history very complex, with the added possibility that many of the early writers may have been philosophers before they were historians. The world’s main philosophical trends and traditions can nevertheless be traced with a decent amount of precision, provided one considers that the ruling philosophy of any period is determined by the socio-cultural climate and economic context in which it was written and published.

The first Western philosophers, starting with Thales of Miletus (c.620-c.555BC), were cosmologists who inquired into the nature and origin of all things; what defined them as a new type of thinker was that their speculations, unlike those before them, were purely naturalistic and not based on or guided by myth or legend. The tradition of Western philosophy originated around the Aegean Sea and southern Italy in the 6c BC in the Greek-speaking world, and blossomed with Plato (c.428-c.348BC) and Aristotle (384-322BC), who have remained highly influential in Western thought and who probed virtually all areas of knowledge; no distinction then separated theology, philosophy and science.

Over the following centuries, Christianity grew into a major religious and socio-cultural force in Europe (2-5c), and apologists such as Augustine of Hippo (354-430) began to synthesise the Christian world-view with ancient philosophy, a tradition continued by St Thomas Aquinas (1225-1274) and throughout the Middle Ages.

The 16c and 17c saw the Scientific Revolution, during which the physical sciences began to assert their authority as a field of their own and to grow separate from theology and philosophy. A new age of Rationalist philosophers, notably Descartes (1596-1650), based their work on minute analysis and interpretation of the philosophical implications of the ground-breaking scientific discoveries of the time. The 18c produced the empiricist school of John Locke (1632-1704) and David Hume (1711-1776) in the search for the foundations of knowledge, and the century closed with Immanuel Kant (1724-1804), who developed a strong synthesis of rationalism and empiricism. The 19c saw the development of positivist philosophy, inspired by and based on the scientific method, and of American pragmatism [alongside the competing philosophies of Utilitarianism and Marxism]. Later came the philosophy of existentialism, centred on the individual and based on the works of Søren Kierkegaard (1813-1855), and in the 20c the discipline of psychology firmly established itself as a field separate from philosophy [spawning many branches such as neuroscience, psychiatry, psycho-analysis, etc].


The 20c and Western Philosophy’s influence across civilisation

Perhaps due to its wide use in maintaining reason among intellectuals and society, philosophy had fragmented into precise and specific branches by the 20c [philosophy of mind, philosophy of science, philosophy of religion, philosophy of medicine…]. At its core, however, the emphasis remained on analytic and linguistic philosophy, owing to the huge influence of Ludwig Wittgenstein (1889-1951).

Indian philosophy, for example, shares similarities with some aspects of Western philosophy in its foundations in the development of logic from the Nyaya school, founded by Gautama (fl. 1c). The tenets of most schools were codified into short aphorisms (sutras) commented upon by later philosophers across southern Asia and India. More specifically, the emphasis on linguistic expression and the nature of language is believed to be as important as in the West, though different in theme, since Indian thought was greatly enhanced by the early development of linguistics in Sanskrit grammar and by inquiry into the nature of knowledge and its acquisition. In modern times, Indian philosophy has seen increasing Western influence, especially from the social philosophies of the utilitarian schools, which inspired a number of religious and socio-cultural movements such as the Brahmo Samaj. In the 20c, Anglo-American linguistic philosophy formed the basis of research, with added influence from European phenomenology in the works of scholars such as KC Bhattacharya. The trend of Western philosophy as inspiration continued to be disseminated by intellectuals in the East: Chinese philosophy, which first appeared during the Zhou Dynasty (1122-256BC), also experienced Western influence in the 20c, most notably through the introduction of Marxism, which became China’s official political philosophy. Around the same period, a New Confucian movement arose, attempting to synthesise the traditions of West and East [traditional Confucian values with Western democracy and science].

As for the African continent, starting from the Middle-East and North-Africa, it may be unsurprising that Western values or philosophy had no major influence in the Islamic territories and Muslim world who had been subjugating non-Muslin civilisations with violent wars [jihad] in the name of their God. The major European incursions and hence influence in the Arab world comes from the time of Napoleon I’s invasion of Egypt (1798) which led to the promotion of Western philosophy in the area for a short time before a backlash from Islamic circles called for a religious and politically-oriented philosophy to counter foreign domination.

Regarding African philosophy, it remains a subject of intense debate among intellectuals and cultured circles whether such a thing exists, and what the term ‘African philosophy’ may include: many scholars, for example, associate the term with the communal values, beliefs and world-views of traditional Black African oral cultures, highlighting a rich, long and sometimes violent tradition of indigenous African philosophy [stretching back in time] with tales of supernaturalism and communally derived ethics. What seems certain is that African philosophy differs from the Western, Indian, Chinese and Arabic traditions in that there is very little in the way of African philosophical tradition before the modern period. However, a logical question remains: if African philosophy comprises works created within the geographical area that constitutes Africa, then perhaps all the writings of the ancient Egyptians qualify as African, as do Christian apologists of the 4-5c such as St Augustine of Hippo. Indeed, to push the argument of logic further, the whole world’s cultures and societies could be qualified as African, since science indicates that all modern humans originated in Africa before migrating across the globe.



Part II: Religious Cultures


Image: The Atlantic


Sigmund Freud, the main driving force behind the psychological movement focused on the “human mind”, was an atheist, unlike Isaac Newton, who was a devout Christian with complex and heterodox private beliefs

The world’s cultures are generally classified into the five major religious traditions:

  • Buddhism
  • Islam
  • Hinduism
  • Judaism
  • Christianity



The tradition of Buddhism, comprising both thought and practice, originated in India around 2,500 years ago, inspired by the teaching of the Buddha (Siddhartha Gautama). The heart of the Buddha’s teaching is contained in the ‘Four Noble Truths’, which conclude with the claim of a path leading to deliverance from the universal human experience of suffering. One of its main tenets is the law of karma, which states that good and evil deeds result in the appropriate reward or punishment in this life or in a succession of rebirths.


Dharma day commemorates the day when Buddha made his first sermon or religious teaching after his enlightenment


From its earliest history, Buddhism has been divided into two main traditions.

  • Theravada Buddhism adheres strictly and narrowly to the early Buddhist writings; salvation is possible only for the few who accept the severe discipline and effort necessary to achieve it.
  • Mahayana Buddhism is the more ‘liberal’ form and makes concessions to popular piety by seemingly diluting the degree of discipline required for salvation, claiming that it is achievable by everyone instead. It introduces the doctrine of the bodhisattva (or personal saviour). The spread of Buddhism led other schools to develop, namely Chan or Zen, Tendai, Nichiren, Pure Land and Soka Gakkai.


Theravada Buddhism in South and South-East Asia

While nearly eradicated in its original birthplace, Theravada Buddhism has become a significant religious force in Burma, Cambodia, Laos, Sri Lanka and Thailand. Traditionally, it is believed that Buddhism was introduced to the area in the 3c BC by missions sent by the Indian emperor Ashoka. While the evidence is not conclusive, it is generally assumed that many different variations of Hindu and Buddhist traditions were scattered across South-East Asia up to the 10c. Theravada Buddhism acquired more influence from the 11c to the 15c through growing contact with Sri Lanka, where the movement was outward-looking. Buddhist states arose in Burma (now Myanmar) and soon elsewhere, namely Cambodia, Laos, Java and Thailand, including the Angkor state in Cambodia and the Pagan state in Burma. During the modern period [with the exception of Thailand, which was never colonised], imperial occupation, Christian missionaries and the Western world-view challenged Theravada Buddhism [the strict version of Buddhist philosophy] in South-East Asia.


Mahayana Buddhism in North and Central Asia

The Mahayana, the form of Buddhism commonly practised in China, Tibet, Mongolia, Nepal, Korea and Japan, dates from about the 1c, when it arose as a more liberal movement within Buddhism in northern India, focussing on various forms of popular devotion.

Tibetan Buddhism

Orthodox Mahayana Buddhism and Vajrayana Buddhism (a Tantric form of Mahayana Buddhism) were transmitted to Tibet during the 8c by missionaries invited from India. Today’s popular Tibetan Buddhism places an emphasis on the appeasement of malevolent deities, pilgrimages and the accumulation of merit. Since the Chinese takeover in 1959 and the Dalai Lama’s exile to India, however, Buddhism has been drastically repressed.

Chinese Buddhism

Buddhism was introduced to China from India in the 1c AD via the central Asian oases along the Silk Route, and had established a substantial presence there by the end of the Han Dynasty (AD 220). Buddhism had become so successful by the 9c that the Tang Dynasty saw it as ‘an empire within the empire’ and persecuted it in 845, after which only the Chan and Pure Land schools remained strong, drawing closer to and finding harmony with each other. Buddhism and other religions were nearly extinguished under the Marxist government of Mao Zedong (1949 onwards), when the lands of China were nationalised and Buddhist monks were forced into secular employment. Since 1978, Buddhism and other religions have seen a revival in China.




Islam is simply Arabic for ‘submission to the will of God (Allah)’ and is the name of the religion founded in Arabia during the 7c through the prophet Muhammad. Islam relies on prophets to establish its doctrines, which it believes have existed since the beginning of time: messengers sent by God, like Moses and Jesus, to provide the guidance necessary for the achievement of eternal reward; the culmination of this succession is, for Muslims, the revelation to Muhammad of the Quran, the ‘perfect Word of God’.

Beliefs and traditions

There are five religious duties that make up the founding pillar of Islam:

  • The shahadah (profession of faith) is the honest recitation of the two-fold creed: ‘There is no god but God’ and ‘Muhammad is the Messenger of God’.
  • The salat (formal prayer) must be said at fixed hours five times a day while facing towards the city of Mecca.
  • The payment of zakat (‘purification’) [a form of religious tax by the Muslim community] which is regarded as an act of worship and considered as the duty of sharing one’s wealth out of gratitude for God’s favour, according to the uses laid down in the Quran [such as subjugation of all non-Muslims, the imposition of violent and controversial Sharia law (a section of Islam as a political ideology which dictates all aspects of Muslim life with severe repercussions if transgressed), learning to adapt behaviour to protect Islam at all cost even if it means deceiving (‘Taqqiya’), etc]
  • There is an imposition regarding fasting (saum) which has to be done during the month of Ramadan.
  • The pilgrimage to Mecca, known as the Hajj, must be performed at least once during one’s lifetime. The sacred law of Islam applies to all aspects of Muslim life, not simply religious practice, and prescribes the way for a Muslim to fulfil the commands of God and reach heaven. The cycle of festivals, such as Hijra (Hegira), the start of the Islamic year, and Ramadan, the month during which Muslims fast in daytime, are among the best-known practices still misunderstood by mainstream media.


Although all Muslims believe in Islam and its teachings from Muhammad, two basic and distinct groups exist within the religion. The Sunnis are the majority and acknowledge the first four caliphs as Muhammad’s legitimate successors. The Shiites make up the largest minority movement in the Muslim world and view the imam as the principal religious authority. A number of subsects and derivatives also exist, such as the Ismailis (one group, the Nizaris, regard the Aga Khan as their imam), while the Wahhabis, a movement focused on reforming Islam, began in the 18c.


Islam remains a priority in matters of identity for most Muslims, unlike Westerners who tend to have more nationalistic feelings

Today Islam remains one of the fastest growing religions – probably due to the high birth rate of third world North Africa where it originates along with the strong impositions it inculcates in its adepts such as the subjugation of all non-Muslims into slaves, sexual slavery, forced conversation, childhood indoctrination, honour killings and jihad (a war in the name of Islam that guarantees salvation along with mass migration to promote Islam) – and today about 700 million Muslims exist throughout the World. The constant clash with enlightened movements of the Christian West, with intellectuals such as Dr Bill Warner who initiated the movement for the study of political Islam to help break down and propagate important facts about the ideology of Islam’s political techniques in subjugating global non-Muslim societies, have started to gain major attention from the intellectual crowd [who are active on media platforms such as Twitter, a controversial platform that uses its administrative rights dictatorially, known to restrict freedom of speech, research & factual information that oppose liberal opinions, and many researchers from accessing their archived ‘tweets’ and ‘retweets’, affecting their work and research – a direct breach of Human Rights as specified by Article 10 of the Human Rights Act 1998 – and many have questioned the practice over possibilities of World War III being caused by the USA’s unethical technological monopoly over other Western nations data. Saddam Hussein was assaulted militarily by the UN after breaching human rights]


Status of Women in the Hadith [purely based on the life, habits & actions of Muhammad]

Islam remains a controversial religions tradition while also being the only religion with a “manual to run a civilisation” as Dr. Bill Warner phrased it, in the Sharia [an Islamic set of doctrines in managing a civilisation – politics, culture, philosophy and economy] which at its deeper core includes the war on other civilisations through jihad, the subjugation of all non-Muslims, the destruction of all non-Islamic historical heritage, forced circumcision of both sexes and a whole set of violent and radical forms of Islamic lifestyle requirements that include violent and sometimes fatal repercussions [for ‘transgressing‘]. Repeatedly France has profoundly rejected Islam as a dangerous religious practice and culture that is incompatible with the values of French society & culture; however the obsolete system of management that is politics remains an atavistic barrier to banning Islam due to the concept of ‘political correctness’ – an invalid ideology created by the most corrupt & untrustworthy adepts of the obsolete practice of ‘politics’ [for reasons that are now being scrutinised in the name of change]. The late Christopher Hitchens was also a prominent speaker on secularisation and particularly focused on countering the atavistic Islamisation of the West which threatens personal liberty, freedom of expression, education, innovation, development, cohesion and socio-cultural creativity due to its rigid doctrines.




Hinduism does not trace its origin to a particular founder and has no prophets, no set creed and no institutional structure; it focuses on the ‘right way of living’ (dharma) rather than a set of doctrines, and embraces a variety of religious beliefs and practices. Variations exist across different parts of India, where it was founded; differences in practice can be found even from village to village in the deities worshipped, the scriptures used and the festivals observed. Those of the Hindu faith may be either theists or non-theists, and may revere one or more gods or goddesses, or none, representing the ultimate in personal (e.g. Brahma) or impersonal (e.g. Brahman) terms. Over 500 million Hindus exist today.



Most forms of Hinduism assume and promote the idea of reincarnation or transmigration. The process of birth and rebirth continuing life after life is termed ‘samsara’. The state of rebirth (pleasant or unpleasant) is believed to be the result of karma, the law by which the consequences (good or bad) of actions in one life carry over as life transmigrates from one form to another, influencing its character. Hindus’ ultimate spiritual goal is moksha – release from the cycle of samsara.



Unlike most other religions, Hinduism regards no single text as uniquely authoritative; it is based instead on a rich and varied literature, the earliest dating from the Vedic period (c.1500-c.500BC) and known collectively as the Veda. Later (c.500BC-AD500) came the religious law books (dharma sutras and dharma shastras); these codified the classes of society (varna) and the four stages of life (ashrama), and formed the basis of the Indian caste system. To these were added the great epics, notably the Ramayana and the Mahabharata, the latter of which includes one of the most influential Hindu scriptures, the Bhagavad Gita.


Hinduism is founded centrally on the class system, believed to have been structured when the first Aryans came to India, bringing a three-tiered social structure of priests (brahmanas), warriors (kshatriyas) and commoners (vaishyas), to which they added the serfs (shudras), the indigenous population of India, which may itself have been hierarchically structured. The Rig Veda (10.90) gives sanction to the class system (varna), describing each class as coming from the body of the sacrificed primal person (purusha). Orthodox Hindus regard the class system as a sacred structure in harmony with natural or cosmic law (dharma). The class system developed into the caste (jati) system which exists today; there are thousands of castes within India, based on inherited profession and concepts of purity and pollution. The upper castes are generally regarded as ritually and philosophically purer than the lower ones. While untouchability was outlawed in 1951, a number of castes are still considered so ‘polluting’ that their members are known as ‘untouchables’ [too ‘polluting’ to be touched or meddled with]; marriage between castes is traditionally forbidden, and transgressors have been known to be harshly punished.


Shiva, Vishnu and Brahma are the main chief gods in Hinduism, and together form the triad (the Trimurti). Many lesser deities also exist, such as the goddesses Maya and Lakshmi. It is common to most Hindus to go on pilgrimages to local and regional worship sites with an annual cycle of local, regional and all-Indian festivals.


Like Christianity and the other major religions, Hinduism gradually spread in influence across the globe. However, around 94% of the world’s Hindus live in India




Judaism is the religion of the Jews, founded on the central belief in one God. The primary source of Judaism is the Hebrew Bible, the next most important document being the Talmud, which consists of the Mishnah (the codification of the oral Torah) along with a series of rabbinical commentaries. Jewish practice and thought, however, would be shaped by later documents and commentaries, and by the standard code of Jewish law and ritual (Halakhah) produced in the late Middle Ages.

Communal Life

Most Jews see themselves as members of a group whose origins lie in the patriarchal period, however varied the Jewish community may be. There is a marked preference for expressing beliefs and attitudes through ritual rather than through abstract doctrine. In Jewish ritual the family is the basic unit, although the synagogue has also come to play an important role as a centre for community study and worship. The Sabbath, the period from sunset on Friday to sunset on Saturday, is a central part of religious observance in Judaism, alongside an annual cycle of festivals and days of fasting. The first of these is Rosh Hashanah, New Year’s Day; the holiest day in the Jewish year is Yom Kippur, the Day of Atonement; others include Hanukkah and Pesach, the family festival of Passover.



Rabbinic Judaism is the root of modern Judaism, which has had a diverse historical development. Most Jews today are descendants of either the Ashkenazim or the Sephardim, while many other branches of Judaism also exist. The preservation of ‘traditional’ Judaism is generally linked to the Orthodox Judaism movement of the 19c. Other branches, such as Reform Judaism, attempt to interpret Judaism in the light of modern scholarship and knowledge, a process pushed further by Liberal Judaism; Conservative Judaism, by contrast, attempts to emphasise the positive aspects of ancient Jewish tradition while modifying orthodoxy.

Modern Controversies

Waves of anti-Semitic prejudice and persecution during World War II have been regular features of Western media outlets’ [mostly Jewish owned] focus, who throughout history have clashed with the Christian influenced heritage of European civilisations, and this ongoing tension between Semitic traditions/philosophies/beliefs and Western Christian-influenced cultures was to take a turn when the rise of a form of “patriotic socialism” [neither left or right, but all encompassing] nationalism across Europe was marked by the spectacular election of the talented Adolf Hitler, who had been the leader of the National Socialist party [Nationalsozialismus later tarnished as “NAZI” by a jew known as Konrad Heiden from the Social Democratic Party of Germany (Sozialdemokratische Partei Deutschlands)] in Germany, and implemented the core ideologies of National Socialism [a focus on self-sustainability and socio-cultural and economic independence while creating a healthier – psychologically & physically – nation] with Darwinian influence on health policies, and the arts and a scientific culture of research as an integral part of its core.

The lies surrounding the event known as “the holocaust” based on Communist propaganda, Global Zionist interests, along with the credulity of mediocre politicians across the globe, has today been unfairly implanted in the minds of the ignorant mass media consumer as being the “dark legacy” of a talents such as Adolf Hitler. This exaggerated picture that the media had already been circulating to the disapproval of some leading world figures such as John Kennedy and Gandhi, is still being reviewed by a wave of daring, talented and modern historians of whom many have denied accusations of gas chambers that were not present or inadequate to be used as gas chambers on all of German soil. More testimonies of camp survivors gave notes of swimming pool, orchestras, shower rooms and even a canteen, without ever mentioning gas chambers. Others explained how the media propaganda videos of mass deaths with emaciated bodies were due to the outbreak of Typhus carried by lice which was caused by low hygiene due to the Allied bombing of train tracks which restricted many cities from supplies of food, medicines & sanitation; causing the starvation and death of not only camp detainees but many German men, women and children who were scavenging the streets for food. The shower rooms in the camps on German soil were also documented as working shower rooms that were vital for hygiene and the delousing process.

English historian David Irving was jailed for his revision of events linked to Adolf Hitler while other ground breaking documentaries such as ‘The greatest story never told’ by Dennis Wise keep spreading the real facts that are never part of mainstream media to the new generation of the internet era who seek factual analysis over historical controversies, such as the 150 000 Jews who gave up their heritage and had firmly assimilated German society in Adolf Hitler’s Third Reich and served loyally against Bolshevism, Communism & Capitalism until the very end.


The 1290 Edict of Expulsion from England, the expulsion from France in 1306 to name a few & the Chart showing all the times throughout human history that the Jews have been expelled from the locations they had migrated to. Many books over some despicable practices regarding human sacrifices have been written by a range of  non-Jewish intellectuals and thinkers who opposed such vile ancient traditions.

However, the mainstream mind set remains stuck on the theory of the ‘gassing of the Jews by the Germans’ for most, with urgency being given to the Zionist movement, established by the World Zionist Organisation for the creation of a Jewish homeland, which is still pivotal in most relations between Jews and non-Jews to this day, with over 14 million Jews scattered around the world.




Christianity is a religion that developed out of Judaism, centred on the life of Jesus of Nazareth in Israel. Jesus is believed to be the Messiah or Christ promised by the prophets in the Old Testament, standing in a unique relation to God, whose Son or ‘Word’ (Logos) he was proclaimed to be. During his life he selected 12 men as his disciples, who, after his death by crucifixion and his resurrection, formed the nucleus of the Church as a society of believers. Christians gathered together to worship God through the risen Jesus Christ, in the belief that he would return to earth to establish the ‘kingdom of God’. Despite sporadic persecution, the Christian faith spread quickly through the Greek and Roman world via the witness of the 12 earliest leaders (Apostles) and their successors. In 313 the Edict of Milan under Emperor Constantine granted Christianity toleration throughout the Roman Empire, and it became the Empire’s official religion under Theodosius I in 380. The religion survived the Empire’s split and the ‘Dark Ages’ through the witness of groups of monks in monasteries, and formed the basis of civilisation in Europe in the Middle Ages.

The Bible

Christian scriptures are divided into two testaments:

  • The Old Testament (or Hebrew Bible) is a collection of writings originally composed in Hebrew, except for sections of Daniel and Ezra, which are in Aramaic. The contents depict Israelite religion from its roots to about the 2c BC.
  • The New Testament, composed in Greek, is so called in Christian circles because it is believed to represent a new ‘testament’ or ‘covenant’ in the long history of God’s interactions with his people, focussing on Jesus’s ministry and the early development of the apostolic churches.


Differences in doctrine and practice, however, have led to major divisions in the Christian Church: the Eastern or Orthodox Churches; the Roman Catholic Church, which recognises the Bishop of Rome (the pope) as its head; and the Protestant Churches stemming from the break with the Roman Catholic Church in the Reformation. The desire to convert the non-Christian world and spread Christianity through missionary movements led to the establishment of numerically strong Churches in Asia, Africa and South America.



Part III: Science


‘Science’ derives from the Latin scientia, ‘knowledge’, from the verb scire, ‘to know’. For many centuries ‘science’ meant knowledge in general, and what is now termed science was formerly known as ‘natural philosophy’, as in Newton’s work of 1687, Philosophiae Naturalis Principia Mathematica (‘The Mathematical Principles of Natural Philosophy’). It can be argued that the word ‘science’ itself was not widely used in its general modern meaning until the 19c, and that this usage came with the prestige that scientific method, observation, experimentation and development had by then acquired.

Early Civilisations

The first exact science to emerge from the ancient civilisations was astronomy. The heavens were studied for two guiding purposes: so that the ‘will of the gods’ might be foreknown, and in order to make a calendar that could predict events, which had both practical and religious uses. The seven-day week, for example, derives from the ancient Egyptians, who, although not noted as excellent mathematicians, needed a calendar to predict the annual flooding of the Nile. Chinese records and observations provide valuable references in modern times for eclipses, comets and the positions of stars. In India, and even more so in Mesopotamia, mathematics was applied to create a more descriptive form of astronomy. The ancient Mesopotamian number system was based on 60, and from it developed the system of degrees, minutes and seconds.
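The base-60 inheritance can be made concrete with a short sketch; the conversion routine below is purely illustrative (the function name and sample angles are my own, not from the source):

```python
def to_dms(angle):
    """Split a decimal angle into (degrees, minutes, seconds) --
    the base-60 subdivision inherited from Mesopotamian astronomy."""
    degrees = int(angle)
    remainder = (angle - degrees) * 60   # whole minutes plus a fraction
    minutes = int(remainder)
    seconds = round((remainder - minutes) * 60, 6)
    return degrees, minutes, seconds

# Half a degree is 30 minutes of arc.
print(to_dms(0.5))    # (0, 30, 0.0)
print(to_dms(90.25))  # (90, 15, 0.0)
```

Each step of the conversion simply peels off the next base-60 digit, exactly as the Mesopotamian notation did.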


The Ancient Greeks

It is to be noted that in all these civilisations the emphasis had been on observation and description, and the tendency was to explain phenomena as ‘the nature of things’ or the ‘will of the gods’. The Greeks, by contrast, looked for more immediate explanations, and relentlessly and critically examined both phenomena and the theories propounded by earlier thinkers. Thales of Miletus initiated the study of geometry in the 6c BC.


Thales of Miletus (c.620-c.555BC)

Around the same period, Pythagoras was discovering the mathematical relationships of the chief musical intervals, crucially relating number relationships to physically observed phenomena. The early Greek natural philosophers (today known as ‘scientists’) passed on two major concepts to their successors: the universe was an ordered structure, and its ordering was organic, not mechanical; all things had a purpose and were imbued with the propensity to develop in accordance with the purpose they were fated to serve.

The main voice for such ideas to later ages was Aristotle (384-322BC), who provided a cosmology with the earth at its centre in which everything above the moon was subject to circular motion, and everything beneath it [on earth] was composed of one of the four elements: earth, air, fire or water. The whole system was believed to be set in motion by a ‘prime mover’, usually identified with God.


This concept was later given a mathematical basis by Ptolemy (c.90-168AD), an astronomer and geographer working in Alexandria, whose main work describing a system with the earth at its centre, the Almagest, was revered until the 17c. Aristotle also taught that living creatures were divided into species organised hierarchically throughout creation, reproducing unchangingly after their own kind – an idea that remained unchallenged until the great debate on evolution in the 19c. For Aristotle, scientific investigation was a matter of observation; experimentation, by altering natural conditions, falsified the ‘truth of things’.

Archimedes (c.287-212BC) was Ancient Greece’s most famous and influential mathematician: he founded the science of hydrostatics, discovered formulae for the areas and volumes of spheres, cylinders and other plane and solid figures, anticipated calculus, and defined the principle of the lever. His principal contribution to scientific advancement lies perhaps in demonstrating how physical properties can be rendered in terms of mathematics, how the formulae thus produced can be subjected to mathematical manipulation, and how the results can be translated back into physical terms.


Archimedes Thoughtful by Domenico Fetti (1620)

The Middle Ages

The pursuit of mathematical theory and pure science was not of great importance to the Romans, who preferred practical knowledge and concentrated on technology. After the fall of the Roman Empire, ancient Greek texts were preserved in monasteries and, above all, in the Islamic world, where a number system derived from ancient Hindu sources gave mathematics more flexibility than was possible with Roman numerals. This was combined with a strong interest in astronomy and astrology, and in medicine.

Aristotelian thought re-emerged in the Christian West in large measure through the work of St Thomas Aquinas in the 13c. Christianity assimilated what it could from Aristotle, as Islam had done some centuries before. Scientific knowledge was still regarded as part of a total system embracing philosophy and theology: a manifestation of God’s power, which could be observed and marvelled at, but not altered. Eventually, Aristotle was proclaimed the ultimate authority and last word in natural philosophy, and his enormous prestige, combined with the conservatism of academics and of the Church, acted as a brake on the progress of science for several centuries. In the later medieval era and the Renaissance period, however, ancient Greek scientific thought was refined, and advances were made both in the Christian Mediterranean and in the Islamic Ottoman Empire. The European voyages of exploration and discovery stimulated much precise astronomical work, done with the intention of assisting navigation; Jewish scholars, who could move between the Christian and Muslim worlds, were often prominent in this work.

The Scientific Revolution

The Scientific Revolution of the 16c and 17c remains to this day the most defining era in science. It began just after the Renaissance, when the conduct of scientific enquiry in the West underwent a remarkable change. Nicolaus Copernicus (1473-1543) rejected many aspects of the established Ptolemaic model of the solar system, which placed the earth at the centre of everything, and redefined the system with the sun at the centre instead.


The German mathematician Johannes Kepler (1571-1630), influenced by Copernicus’s work, concluded that the planets’ orbits around the sun are elliptical rather than circular. Galileo Galilei, now championed by many intellectuals as the father of modern science, was an Italian philosopher, mathematician and scientist who improved on the telescope that had been invented in Holland, and used it to make observations that included the Milky Way and Jupiter’s satellites. His further research convinced him of the truth of the new Copernican system, with the sun at the centre, but under threat from the Inquisition he recanted.

In England, William Gilbert (1544-1603) established the magnetic nature of the earth and was the first to describe electricity; William Harvey (1578-1657) explained the circulation of the blood; and Robert Boyle (1627-91) studied the behaviour of gases under pressure.


Isaac Newton (1642-1727)

Isaac Newton (1642-1727), who was to replace Aristotle as the leading authority in natural philosophy for the next two centuries, also came from England. He established the universal law of gravitation as the key to the secrets of the universe. In 1687 he published his groundbreaking work, the Principia, which stated his three laws of motion. Independently of Gottfried Leibniz (1646-1716) he invented calculus, and he also did highly influential work on optics and the nature of light.

Cooperation and discourse among scientists and intellectuals were fostered by the creation of societies where their work could be discussed: for example, the Royal Society in London, chartered in 1662, and the Académie des Sciences in Paris, founded in 1666. Discoveries made by individual scientists were used by others to advance faster towards new theories, and science thereby gained status and prestige as a driving force in society.

The 18-19c

In the 18c Enlightenment, writers played a major part in bringing the scientific advances of the previous century to the wider public, further enhancing the prestige of science as a reliable driving force of civilisation. The scientific method – observation, research, experimentation, and the use of reason unfettered by preconceptions or dogma to analyse the findings – was applied to almost all aspects of human life.

Chemistry saw significant advances in the latter part of the century – notably the discovery of oxygen by Lavoisier in France, Priestley in Britain and Scheele in Sweden. The Industrial Revolution demonstrated the impact of scientific knowledge on society, drawing on minds from various fields with various intentions. The discovery of the dye aniline led to a ‘revolution’ in the textile industry – an example of science’s usefulness in the eyes of the public, which gradually led to more public support and hence government funding. The École Polytechnique was founded in France in 1794 to propagate the benefits of scientific discovery throughout society. Elsewhere, technical institutions funded for scientific work followed – the era of the professional scientist had begun.

Throughout the 18c botany also advanced, as Linnaeus invented his system of binomial nomenclature (1735), while ever-growing interest was aroused by the great variety of new species of plants and animals being discovered by explorers, particularly Captain Cook.


The work of the French naturalist Jean-Baptiste Lamarck (1749-1829) foreshadowed Charles Darwin’s theories of evolution and made the first break with the notion of immutable species proposed by Aristotle. The same period saw geology develop into a science: William Smith (1769-1839), ‘the father of English geology’, was drawn to investigate strata while working as an engineer on the Somerset coal canal, and eventually became the first to identify strata by the different fossils found in them. The epoch-making conclusions of Darwin (1809-1882) on the theory of evolution, published as The Origin of Species in 1859, were accepted by almost all biologists, though they clashed with the ideology promoted by the Church. The laws of heredity worked out by Gregor Mendel (1822-84) were unfortunately not appreciated in his lifetime, only later becoming the founding stone of genetic research. The germ theory of disease was formulated by the French chemist Louis Pasteur (1822-1895), who moved into biology.


Physics also saw tremendous advances in the 19c. The Italian Alessandro Volta (1745-1827) developed the current theory of electricity and invented the electric battery, announcing his work in a letter, written in French, to the Royal Society; his battery also made the study of electrolysis possible. Michael Faraday (1791-1867) carried out experiments with magnetism and electricity that enabled the building of generators and motors. James Clerk Maxwell (1831-79) proposed the field theory of electromagnetism, which mathematically related the phenomena of electricity, magnetism and light; he also predicted the existence of radio waves, eventually demonstrated by Heinrich Hertz (1857-1894).

Although science itself had not been of major importance in the very early stages of the Industrial Revolution in 18c Britain, by the end of the 19c technology, influenced by the work of scientists, had led to the development of most of the machines and tools that were to transform life for most of humankind in the developed world in the following century. Germany in particular excelled and innovated between 1870 and 1914: scientific education and applied science became major parts of the educational system, up to the tertiary level, and a research culture with the ability to generate change became instilled and institutionalised as part of German education, culture and philosophy.


Atomic physics and relativity

The theory that all matter is made up of minute and indivisible particles known as atoms was proposed by the ancient Greeks. Newton and various early 19c scientists such as John Dalton (1766-1844), Amedeo Avogadro (1776-1856) and William Prout (1785-1850) made significant contributions to refining the concepts of the atom and the molecule, and in 1869 Dmitri Mendeleyev (1834-1907) conceived the periodic table, relating the chemical properties of each known element to its atomic weight.


An Atom

Albert Einstein’s (1879-1955) theoretical work contributed to the development of quantum theory in the early 20c. His theory of relativity incorporated Maxwell’s electromagnetic theory and Newton’s mechanics, while also predicting departures from the classical behaviour of materials at velocities approaching the speed of light. Einstein also provided the century’s most famous formula – E = mc² – expressing the equivalence of mass and energy. The existence of subatomic particles, the building blocks of atoms and their nuclei, was postulated after a series of experiments with ionising radiations, and the splitting of the atomic nucleus, with the large energy release predicted by Einstein, was demonstrated by Ernest Rutherford (1871-1937) in 1919. Force fields and their subatomic particles were studied further in the second half of the 20c through the use of large particle accelerators [up to 27km/17mi in length], with a view to forming a unified theory that would describe all forces, including gravity.
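The scale of the mass-energy equivalence can be shown with a line of arithmetic; the one-gram figure below is my own illustrative choice, with c rounded to 3×10⁸ m/s:

```python
# E = m * c**2: even a tiny mass is equivalent to an enormous energy.
c = 3.0e8        # speed of light in m/s (rounded)
m = 0.001        # one gram, expressed in kg
E = m * c**2     # energy in joules
print(E)         # on the order of 9e13 J
```

Ninety trillion joules from a single gram is roughly a day’s output of a large power station, which is why the nuclear energy release Rutherford demonstrated was so consequential.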


What the laboratory could not provide was gained through astronomical observation, which yielded complementary information for understanding the universe on both the microscopic and the cosmic scale.

The understanding of the atom in terms of a heavy nucleus surrounded by light electrons has led to a deeper knowledge of the chemical and electronic properties of materials and ways of modelling them. Near the end of the 20c, such advancement enabled the ‘tailor-making’ of materials, substances and devices exploited in chemical, pharmaceutical and electronic products.

Genetics and beyond

The study of the basic building blocks of organic life was largely influenced by the 20c study of the atom. Research into the nature of the chemical bond and molecular structure, applied in biology, led to the work on DNA. Investigation by Francis Crick (1916-2004), James Watson (1928- ) and Maurice Wilkins (1916-2004) in the early 1950s revealed the famous helical structure, whose distinguishing feature is that it is built from four types of bases rather than proteins, and whose pairing demonstrated the existence of a genetic code.


The latter half of the century saw a surge in genetic science, suddenly unlocking the possibility of cloning and, even more controversially, of ‘tailor-making’ or ‘engineering’ living beings.

The pace of scientific development has been accelerating since the Renaissance and the Scientific Revolution of the 16c and 17c. In the 20c the growth was exponential, and new information gained from research and experiment is still being used in the applied sciences and technology: in the search for newer and more efficient sources of power and tools, and to meet the ever-increasing demand for useful, smarter and environmentally friendly materials that serve civilisation while maintaining the fragile balance of an ecosystem strained by excessive exploitation and fossil fuel use. The public perception of science, unfortunately, is based only on its practical applications in everyday life and not on the more profound matters such as atomic physics or genetics – which are as remote from the average citizen as they have ever been.

Like religion, science arose out of the desire to explain the world around us. The clashes between the two institutions have been hard fought, although by the 20c science had been crowned the dominant orthodoxy in guiding civilisation. Yet, with the recognition of uncertainty and the development of chaos theory, science may now be less dogmatic than at any time since the Renaissance.

The Scientific Revolution of the 16c & 17c: where science was established as a driving force


The Scientific Revolution could be described by many scientists, intellectuals and historians as an era born of a thirst for development and knowledge: it began just after the Renaissance, near the end of the 15c, and gave birth to science as it is known today. Perhaps its lasting appeal is that it helped refine intellectual thought and establish the methods of investigation still used by all fields of science. In fact, the Scientific Revolution is the name given to a change in the nature of intellectual inquiry – in the way in which civilisation thought about and investigated the natural world. Until this revolution was accomplished, or at least under way, it could be questioned whether any of the thinkers, intellectuals and scholars of Christian Europe could properly be called ‘scientists’.

The medieval mind set

Although the Middle Ages lacked the sophistication of today’s society, original thinkers did exist. It is fair to say, however, that scholasticism – the term given to the theological and philosophical thought of the period – operated within a tightly structured and closed system: the universe was God’s creation, and the primary truths revealing its nature and workings were to be found only in the Bible. As knowledge, the Bible was supported by the writings of selected authors of immemorial and unimpeachable authority, namely Galen, Aristotle and the Church Fathers. If one wanted to establish the truth in any matter, one would first seek support from such an authority, and if support was found, the case was closed. The desire to challenge critically and push boundaries was largely absent; most attempted instead to move closer to the supposedly ‘true meanings’ of what had already been authoritatively established or formulated. When Bishop James Ussher, as late as the 1650s, tried to investigate the age of the world, his attention went no further than Holy Scripture, and by voraciously studying Biblical chronology he arrived at a precise date for the Creation – 4004BC.


The Creation of Adam by Michelangelo (part of the Sistine Chapel painted in 1508-1512)

Moreover, it was also axiomatic for the times, and for the credibility of so powerful a voice as the Church, that no loose ends could be present in God’s original ‘perfect Creation’. Although the Fall of Man had introduced disorder into the cosmos, evidence of the intended order was still discernible – there was an underlying order, pattern and correspondence everywhere. Things could in most cases best be understood or described by analogy with one another: as God governs the universe, so the sun is the most powerful of all the planets circling the earth, so the king is chief ruler among men, so reason should rule over the inner life of humankind, and so the lion is the king of beasts. Nowadays it would not reveal much about the lion to claim that its position in the animal kingdom is equivalent to that of a king among men or the sun among the planets; in medieval times the conversation would have closed there, without any space for questioning or clarifying.

The Renaissance and the Reformation

The process of modernising and opening up the workings of the closed system began with the Renaissance, the Reformation, and the voyages of exploration and discovery. Those living during the Renaissance had then possessed new knowledge or had new access to old sources. Many thinkers and intellectuals of the time believed themselves to be part of a movement that was making a significant break with the past to pave the way for a new era of modern knowledge. A process of secularising knowledge was started, prising it away from its basis in theology, and making the study of subjects such as science and mathematics a thing of value in its own right. In northern Europe the Reformers rejected the authority of the Church and instilled in believers the confidence to study the Word of God – and, by extension, His works – for themselves. Voyages of discovery finally made known the existence of new worlds entirely unsuspected by the ancients on earth, leading to the questioning of not only the value of geographical authorities but of other authorities as well.

Copernicus, Kepler and Galileo

The Polish astronomer Nicolaus Copernicus (1473-1543) completed his work De Revolutionibus Orbium Coelestium (‘On the Revolutions of the Heavenly Spheres’), the mature expression of an idea sketched earlier in a brief commentary: that the sun was the centre of the universe, and that the earth and the other planets revolved around it. The work was published as a book in 1543 by a Lutheran printer, shortly after Copernicus’s death.

Copernicus’s theory, if accepted, not only destroyed the old earth-centred system devised by Ptolemy, but also made obsolete all the analogies based on that cosmology. The new model, however, was accepted by few – not even by the celebrated Tycho Brahe (1546-1601), who himself contributed hugely to 16c astronomy through his observations of the stars and their movements. De Revolutionibus was banned by the Roman Catholic Church and remained so until 1835 [292 years].

The Copernican theory was, however, accepted by Johannes Kepler (1571-1630), a German mathematician and astronomer who was Tycho Brahe’s assistant and, on his death, succeeded him as imperial mathematician and court astronomer in Prague. Kepler’s intensive work on planetary orbits developed the theory further and provided it with a mathematical foundation. His laws of planetary motion, published in Astronomia Nova (‘New Astronomy’) in 1609 and Harmonice Mundi (‘The Harmony of the World’) in 1619, formed an essential foundation for the later discoveries of Isaac Newton (1642-1727). Kepler also made significant discoveries in optics, general physics and geometry. It may be noted, as an indication of the still fragile and transitional status of science in the 17c, that he was appointed astrologer to Albrecht Wallenstein, the general who commanded the Catholic imperial forces in the Thirty Years’ War; Newton too was a student of alchemy.
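Kepler’s third law, from Harmonice Mundi, states that the square of a planet’s orbital period is proportional to the cube of its mean distance from the sun. A quick numerical check, using rounded modern textbook values rather than figures from the source:

```python
# Kepler's third law: T**2 / a**3 is (very nearly) the same for every planet.
# Periods T in years, mean distances a in astronomical units (rounded values).
planets = {
    "Earth":   (1.0,   1.0),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (T, a) in planets.items():
    ratio = T**2 / a**3
    print(name, round(ratio, 3))  # each ratio comes out close to 1.0
```

That the same constant emerges for every planet is exactly the kind of mathematical regularity that gave the Copernican system its foundation.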


Galileo Galilei (1564-1642)

The Copernican theory was also accepted by Kepler’s older Italian contemporary, Galileo, who first took issue with Aristotle while studying in Pisa. When he was made Professor of Mathematics there in 1589, he disproved Aristotle’s assumption that the speed of an object’s descent is proportional to its weight – according to tradition, demonstrating the point to his students by releasing objects of varying weight from the Leaning Tower of Pisa. After his Aristotelian colleagues pressured him into giving up his chair, Galileo made his way to Florence; by then he had also inferred the value of a pendulum for the exact measurement of time, created a hydrostatic balance, and written a treatise on specific gravity. From 1592 to 1610, as Professor of Mathematics in Padua, Galileo modified and perfected the refracting telescope after learning of its invention in Holland in 1608, and used it – a powerful tool denied to Copernicus and Tycho Brahe – to make remarkable discoveries, notably the four moons of Jupiter and sunspots. These further confirmed his acceptance of the Copernican system of the earth moving around the sun, which he had privately adopted as early as 1595. Galileo’s daring conclusions, however, led to conflicts not only with traditionalist academics but, more seriously, with the Church, on account of his writings while employed as court mathematician in Florence from 1613. A warning from Cardinal Bellarmine in 1616 instructed him to drop his support of the Copernican system, as belief in a moving earth contradicted the Bible.
After several years of silence, in 1632 he published Dialogo sopra i due massimi sistemi del mondo (‘Dialogue on the Two Chief World Systems’), in which, in the context of a discussion of the cycles of the tides, he concluded in support of the Copernican system. Under the savage religious laws of the times, Galileo was compelled to abjure his position and was sentenced to indefinite imprisonment – a sentence commuted immediately to house arrest. After abjuring, he is believed to have murmured ‘eppur si muove’ (‘and yet it moves’).


More Progress

The 16c saw major strides in all branches of science. The Belgian Andreas Vesalius (1514-1564) became one of the first scientists to dissect human cadavers; based on his observations, he published De Humani Corporis Fabrica (1543, ‘On the Structure of the Human Body’) in the very same year that Copernicus’s De Revolutionibus appeared. The anatomical principles of Galen were repudiated, paving the way for William Harvey’s discovery of the circulation of the blood, explained in a book of 1628. The work of Galileo had an impact not only on knowledge itself but on many other intellectuals, such as Evangelista Torricelli (1608-47), the inventor of the barometer [a vital piece of equipment for experimentation], and the Dutch physicist Christiaan Huygens (1629-1693), the inventor of the pendulum clock, the discoverer of the polarisation of light and the first to put forward the idea of its wave nature.


De Humani Corporis Fabrica by Andreas Vesalius (1543)

Around the same period, the Irish experimental philosopher and chemist Robert Boyle (1627-1691), formulator of ‘Boyle’s Law’, was studying the characteristics of air and the vacuum by means of an air pump created in partnership with his assistant Robert Hooke (1635-1703). Boyle played an active part in the anti-scholastic ‘invisible college’ meetings of Oxford intellectuals, a precursor of the Royal Society, and his air pump became a powerful symbol of the ‘experimental philosophy’ promoted by the Royal Society from its founding in 1660. In 1662 Robert Hooke became the Royal Society’s first curator of experiments.
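Boyle’s Law states that, at constant temperature, the pressure and volume of a fixed amount of gas are inversely proportional (p₁v₁ = p₂v₂). A minimal sketch of the relationship (the function name and the numbers are my own illustration):

```python
def boyle_volume(p1, v1, p2):
    """Boyle's Law: at constant temperature p1*v1 == p2*v2,
    so the new volume follows from the new pressure."""
    return p1 * v1 / p2

# Doubling the pressure on 10 L of gas at 1 atm halves its volume.
print(boyle_volume(1.0, 10.0, 2.0))  # 5.0
```

This inverse relationship is exactly what Boyle observed in his air-pump experiments on compressed and rarefied air.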

The Royal Society gradually provided a forum and focus for scientific discussion and a means of disseminating scientific knowledge – its Philosophical Transactions became the first professional scientific journal. Together with comparable institutions in other countries, such as the Académie des Sciences in Paris, founded in 1666, it promoted the systematisation of the scientific method and of the way in which experiments and discoveries were reported, emphasising the importance of plain language and of detailed, systematic description of experiments for reproducibility. The creation of prominent scientific associations also marked a cornerstone in the socio-cultural acceptance of science.


The culmination of the Scientific Revolution is generally held to lie in the work of Isaac Newton, whose early mathematical studies led to the invention – independently of Gottfried Leibniz (1646-1716) – of differential calculus. While studying the behaviour of light through prisms, he created the first reflecting telescope, a pivotal tool for the astronomers who followed. In 1684 Newton presented his theory of gravitation, and in 1687 he published his famous Philosophiae Naturalis Principia Mathematica (‘Mathematical Principles of Natural Philosophy’), which stated his three laws of motion and became the founding stone of modern physics – unchallenged until the arrival of Einstein in the early 20c.

Most importantly, Newton’s universal law of gravitation not only explained the movements of the planets within the Copernican system but also accounted for such humble events as the fall of an apple from a tree. Perhaps surprisingly, it never excluded God from the universe, since all of Newton’s work was undertaken within the framework of a devout Christian faith, though his private beliefs were complex and heterodox.
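The same law that governs the planets does indeed account for the falling apple. A sketch with rounded modern values (the constants and the 100 g apple are my own illustrative figures, not the source’s):

```python
# Newton's universal law of gravitation: F = G * m1 * m2 / r**2.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_earth = 5.972e24   # mass of the earth, kg
R_earth = 6.371e6    # mean radius of the earth, m
m_apple = 0.1        # a 100 g apple, kg

F = G * M_earth * m_apple / R_earth**2   # force on the apple, N
g = F / m_apple                          # acceleration: the apple's mass cancels out
print(round(g, 2))   # close to the familiar 9.8 m/s^2
```

That the apple’s mass cancels out of the acceleration is the point Galileo had already demonstrated empirically: all bodies fall at the same rate.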

By the time of his death in 1727, the scientific method was firmly established, and the thinkers, intellectuals and writers of the Enlightenment acknowledged that an era had dawned in which observation, experiment and the free application of human reason were the foundation of knowledge. By fusing science with culture and spreading the discoveries of previous centuries through a variety of themes and outlets, the writers of the Enlightenment helped to establish firmly the prestige that science and its practitioners have inherited and enjoyed down to the present day.


Part IV: Medicine


From the earliest times of human civilisation, all societies seem to have had a certain amount of knowledge of herbal remedies and to have practised some folk medicine. Since the cause of illness was believed to be rooted in supernatural causes, most patients in the earliest days were treated with the objective of regaining the favour of the gods or of ‘releasing’ the evil from the body. In early civilisations such as Egypt and Mesopotamia, for example, salves were used as part of a medical practice that included divination to obtain a prognosis and incantation to help the sufferer. In the East, doctors in India documented many commonly occurring diseases and used some drugs still employed by modern medicine; they also performed surgery that included skin grafts. In some parts of the world, societies banned the cutting of dead bodies on account of religious beliefs fused with the law, so, unsurprisingly, knowledge of physical anatomy was extremely basic. Early Chinese society also banned the desecration of the dead, with the result that Chinese concepts of physiology were not based on observational analysis. A developed medical tradition nevertheless flourished in China from the earliest times to the present day, with special focus placed on the pulse as a means of diagnosis.


In Chinese medical philosophy the objective is to balance the yin (the negative, dark, feminine, cold, passive element) and the yang (the positive, light, masculine, warm, active element), and the pharmacopoeia for achieving this draws on vegetable, animal and mineral sources. Similarly important is the practice of acupuncture, in which needles are used to alter the flow of ch’i (energy) believed to travel along invisible channels in the body (meridians). Its most widespread use, anaesthesia, puts the efficacy of acupuncture to the test.

In the West, medicine began to take a new course when it was partially rationalised by the Greek philosophers; before this it had been mainly an aspect of religion.


Asclepius, the Greek god of medicine

In ancient Greece, for example, people suffering from illness would go to the temple of the god Asclepius for incubation – a sleep during which the god would visit in a dream, which the priests would then interpret to reveal the diagnosis or advice for a cure. Empedocles later proposed that four elements exist – fire, air, earth and water – corresponding in the human body to four humours: blood, phlegm, yellow bile and black bile, which had to be kept in harmonious balance. That concept was further reinforced when it was adopted by Aristotle (384-322BC), and it remained a founding pillar of Western medicine until the new discoveries of the 18c. From the viewpoint of a biologist, Aristotle observed the world, performing dissections of animals and learning more of anatomy and embryology.


After his death, the main learning centre of the Greek world became Alexandria, where the principles expounded by Hippocrates (c.460-c.377BC) were upheld. Hippocrates had rejected obsolete ideas such as illness caused by the gods, raising a new school of thought in which diagnosis and prognosis were made after careful observation and consideration. Today, Hippocrates is regarded as the 'father of medicine', and sections of the oath attributed to him are still used in medical schools to this day.

Galen (c.130-c.201), a Greek doctor who studied at Alexandria and later went to Rome, was the next major and defining influence on Western medicine. Galen gathered up all the existing writings of Greek doctors and emphasised the importance of anatomy to medicine. He used apes to find out how the body worked, since dissection of human bodies was then illegal. Although his daring efforts advanced medicine, his reports contained many anatomical mistakes, including on the circulation of blood around the body, which he described instead as 'ebbing and flowing'.


The point worth noting is that, early as this was in human history, Rome had already developed an excellent culture with high regard for public health; more strikingly, the Romans had clean drinking water, hospitals and sewage disposal – standards of sanitation that were not matched again in Europe until modern times.

After the Roman Empire fell, the practice of medicine resided in the infirmaries of the monasteries. In the 12c, medicine was developing into an important necessity across society, from the lower to the upper end, and the first medical school was established at Salerno. Many other medical schools in Europe followed, namely Bologna, Padua, Montpellier and Paris. Mondino dei Liucci (c.1270-1326) published the very first manual of anatomy after carrying out his own dissections in Bologna. The greatest advance, however, came from the Belgian Andreas Vesalius (1514-1564), who corrected the errors of Galen through incredibly detailed sketches, descriptions and drawings published in 1543. The Inquisition sentenced him to death for performing human dissections [once again an occasion where religious tradition stood in the way of reason and research], but a new wave of inquisitive intellectuals had already surfaced abroad who could not be stopped.

A better and more precise knowledge of anatomy led to improved surgical techniques, and surgeons, long considered inferior practitioners by physicians, began to be recognised as a major part of medicine and its procedures. The huge increase in the armies of Europe in the 16c and 17c created greater demand for effective military surgery. Ambroise Paré (1510-1590) reformed surgical practice in France, replacing the cauterising of wounds with the ligature of blood vessels, while in the United Kingdom collectives of medical practitioners were formed which later became the College of Surgeons.


French nobleman and chemist Antoine Lavoisier (1743-1794) and his chemist wife Marie-Anne (1758-1826)

In 1628, the theory of the circulation of the blood was formulated through William Harvey's experiments, and it was reinforced by Marcello Malpighi's work. However, it took more than a hundred years for medicine to fully understand the purpose of circulating blood, until the French chemist Antoine Lavoisier (1743-1794) discovered oxygen, which the blood transports to the various parts of the human body. A new approach to obstetrics was also developed at that time, along with the growth of microscopical studies, and by the end of the 18c Europe had been introduced to vaccines, which in the 20c helped to eradicate previously deadly diseases such as smallpox.


Biologist and physician, Marcello Malpighi (1628-1694)

In the 19c, scientific research generated new knowledge about physiology, and medicine saw refinements to aid diagnosis, such as the invention of the stethoscope and the use of chest percussion. The field of bacteriology was born out of the work of Louis Pasteur (1822-1895), who established the germ theory of disease transmission. This had a major impact and transformed safety for all patients – for example in obstetrics, where women had regularly been dying from puerperal fever until it was found that doctors were transmitting bacteria from diseased patients to healthy ones. The first use of ether as an anaesthetic in the USA in 1846, and of chloroform in Scotland in 1847, made way for another major advance in surgery: anaesthetic gases opened the door to longer, more delicate and more complicated operations.

The wave of cutting-edge and precise research continued through the 19c, with many conditions recognised and described in detail for medical education for the first time. Precautions were taken to halt the propagation of malaria and yellow fever after it was revealed that insect bites could transmit them. Around the end of the 19c, the birth of psychology as the study of the 'mind' was taking place with Sigmund Freud's work, and Röntgen's discovery of X-rays, along with Pierre and Marie Curie's radium, provided new diagnostic tools for medicine.

Progress continued to flourish in the 20c with the haphazard discovery of bacteria-killing organisms, most famously by Alexander Fleming, the Scottish bacteriologist and Nobel prize winner who discovered penicillin in 1928 and who had served in the Army Medical Corps during the First World War. After qualifying with distinction in 1906, Fleming went straight into research at the University of London. One of the most important discoveries in medicine came from a simple observation: Fleming noticed that mould which had accidentally developed on a set of culture dishes used to grow staphylococci had created a bacteria-free circle around itself. After careful observation and research, the bacteria-repelling substance produced by the mould was named penicillin. The drug was later developed further by two other scientists, the Australian Howard Florey and Ernst Chain, a refugee from Nazi Germany [all three shared the Nobel Prize in medicine]. Although the first supplies of penicillin were limited, by the 1940s the pharmaceutical industry had made it a top priority and it was mass-produced by the American drugs industry.

The era also saw the growth of advanced technology and the further development of various forms of drug treatment: the sulfonamides on their discovery, followed by streptomycin, the first effective antibiotic against tuberculosis, which had been fatal until then. Diabetes was similarly explored and treated with the discovery of insulin, turning its former reputation as a deadly disease into that of a controllable condition – and a new breed of surgeons now claim to have found surgical methods to completely reverse the Type-2 diabetes that affects most sufferers.

Typhoid, tetanus, diphtheria, tuberculosis, measles, whooping cough and polio were mostly eradicated in the West as the 20c was marked by improved public health services, living conditions and nutrition, along with well-devised, scientifically backed campaigns to promote the immunisation of children. The West was also freed of diseases such as rickets and scurvy as new discoveries were made on the role and importance of vitamins, which also led to the mitigation of beriberi in Africa and Asia early in the century.

Malaria, yellow fever and leprosy were also found to be curable, and now that, with all the advances in medicine, most people in developed economies live longer [with the exception of some with mediocre policies due to their mediocre management systems, e.g. politics], the chief causes of death nowadays are cancer and heart disease.


Life Expectancy in the United Kingdom / Source:



Life Expectancy Global / Source:

Unleashing the power of genetics against cancer

Source: Cambridge University

In the field of cancer research, new therapies involving various techniques are now available and continuously being developed, the most recent being the promising CRISPR approach, which uses a patient's own immune system to fight cancer through a particular type of immune cell known as the T cell. T cells normally survey the body to seek out and destroy abnormal cells that have the potential to turn cancerous, detecting them by the presence of strange proteins on their surface [signs that the T cell reads as 'dangerous']. Cancers, however, have evolved a cat-and-mouse game to evade T cells, developing the ability to 'switch off' any T cell that gets in their way, effectively blocking the attack. The most effective cancer therapies try to counteract this response by boosting the immune system.

CRISPR: the promising new cancer treatment

In 2015, a study used an older, less efficient gene-engineering technique, the 'zinc finger' nuclease, to give T cells a better fighting ability against HIV – the therapy was well tolerated in a 12-person test group. A further study used reprogrammed T cells from multiple myeloma patients to recognise cancer cells specifically; the tumours shrank initially, but the T cells gradually withered and lost their ability to regenerate themselves – a common issue that new trials hope to solve in the near future. Perhaps one of the most unfortunate aspects of CRISPR, despite its promise as a cell therapy, is that it is often offered to patients with relapsing disease. The CRISPR method can also 'tweak' the gene for the protein PD-1 to counter the problem of T cells losing their fighting ability: PD-1 sits on the surface of T cells and helps dampen their activity after an immune response, and tumours have found ways to hide by flipping the PD-1 switch themselves, so drugs that block this PD-1-mediated immune suppression have already proven a promising immunotherapy; removing PD-1 likewise helps prolong the lifespan of the modified T cells while enhancing their cancer-fighting ability. Researchers are currently carrying out intensive research into the deeper mechanics of CRISPR by removing T cells from patients whose cancers have stopped responding to normal treatments and, using a harmless virus to deliver the CRISPR machinery into the cells, performing three gene edits on them. The first edit inserts the gene for a protein called the NY-ESO-1 receptor, which equips T cells with an enhanced ability to locate, recognise and destroy cancerous cells [those displaying NY-ESO-1].
T cells have a native trait that unfortunately interferes with this added protein, so the second edit removes these inhibitors so that the engineered receptor can act against cancer with greater efficiency. The third edit gives the T cell longevity by removing the gene for PD-1 itself, so that cancer cells can no longer flip that switch to disable the engineered cells. Added guide RNAs tell CRISPR's DNA-snipping enzyme, Cas9, where exactly to cut the genome. However, since CRISPR is not always effective, not all cells receive every genetic modification, so the engineered cells end up as a mixture of the various combinations of the proposed changes; only 3-4% may contain all three edits. After the edits, the researchers infuse all the edited cells back into the patients and monitor closely for problems. One of the main concerns with CRISPR is that it may inadvertently snip other genes, potentially creating new cancer genes or triggering existing ones; these side effects are to be monitored by a team measuring the growth rate of the engineered T cells and testing for genomic abnormalities. The concluding outlook on CRISPR is nevertheless very bright: in a pilot run using T cells from healthy donors, the researchers checked 148 genes that could be snipped by mistake, and the only faulty cut detected was deemed harmless. Another major concern is the fear of activating the body's immune system against the engineered T cells, since the enzyme Cas9 originates from bacteria and is essential to the DNA-cutting process CRISPR relies on – although ways exist to prevent the immune system from destroying engineered Cas9 T cells, the possibility remains.

Gene therapy trials suffered an earlier setback with the death of the young patient Jesse Gelsinger during a 1999 trial. Investigation revealed that some of the researchers had failed to disclose side effects observed in animals, and that some of the investigators had a financial incentive for the trial to succeed. Extra precautions are being taken by UPenn, which pioneered the treatment, to ensure the smooth progression of medical genetics. As the Stanford bioethicist Dr. Mildred Cho said, "Often we have to take a leap of faith."

Cancer research and treatment as a whole has seen innovations in surgery, chemotherapy, radiation therapy, combinations of these, and the promising new method of gene-editing Cas9-based T cells with the CRISPR technique. Together these have improved the prognosis for some sufferers. In cardiology too, new treatments stunned the world, notably angiograms, open-heart surgery and heart transplants. Organ transplantation has gradually been extended to lungs, livers and kidneys, and artificial joints for the hips and knees have also been improved.

Education on family planning has been available and constantly updated since the 1960s, when methods of contraception such as the oral contraceptive pill for women were first marketed to the wider public. The controversial act of abortion, too, was made safer and legalised in many economies, while at the other end of the scale couples unable to conceive benefited from fertility drugs, and in vitro fertilisation provided many with the choice of starting a family.

With the growing discoveries and nearly godly feats of medicine, public perception of the field also changed and many soon started to entertain the belief that a cure exists for every ill. Unfortunately this is not true, as many complicated diseases such as cancer continue to defy knowledge and scientific research and new diseases and complications continue to emerge such as Ebola, HIV and antibiotic resistance. The constant struggle for 3rd world economies to keep up with medical cost has also led to major culturally destructive waves of migration that have very quickly turned out to be unsustainable for most major Western economies along with the religious and socio-cultural clashes being a constant topic of debate in most educated circles and the connected alternate media alike across Europe [to counter some of the extreme liberal & atavistic views promoted by the mainstream media fuelled by ruthless & scrupulous globalists].

The economic grip of pharmaceutical companies on the world's economy has been a central issue for many concerned scientists and intellectuals of the times, who constantly question where the responsibility lies for funding and providing cutting-edge and hygienic health services for the people. On the other hand, controversial but vital access to organs for transplantation has caused major social debates about future cultural attitudes towards the organs of the dead and the provision of a constant supply of organs for the Western economies' major health requirements.

While the Western model of medicine is the most effective, researched, respected and taught on earth, other sub-disciplines that many medical empiricists consider baseless continue to prosper at a medium scale, sustained by a surprisingly constant demand for folk and herbal medicines. In the urban areas of non-Western societies the trend is at a larger scale, since Western medicine has still not made a significant impact on the adherents of traditional practices. Medically unproven and scientifically void practices such as chiropractic, aromatherapy, auto-suggestion, homoeopathy, osteopathy and hydrotherapy still exist in the West under the classification of 'complementary medicine', where many of the practitioners require no degree or certificate to 'practise' [a documentary by Dr. Richard Dawkins on this topic was aired on the BBC]. Most of these treatments without scientific grounding nevertheless have long histories, and a chosen few, such as acupuncture, have been fused into Western orthodox medical practice in countries such as the UK.


Part V: Secularisation

Secularisation may be defined as the process of change whereby authority passes from a religious source to a secular one. This becomes an issue only where religion and the religious have gained considerable power or a dominant position in society, penetrating all aspects of life, including government. In ancient Greece and Rome, for instance, religion does not seem ever to have dominated the state. The main religious offices were held by the same men who held political office [religion may have been seen as simply a part of national culture]. While virtue consisted of piety, and observance to the gods was expected, religion was rarely a primary focus for society. Furthermore, polytheism gave the system flexibility: new gods and goddesses could be added to the pantheon to accommodate local cults, and an individual had the freedom to choose a deity as his or her special patron. Prudence demanded, however, that other divinities not be neglected, and none of this was of major concern to the state.

Yet with the great rise of the monotheistic proselytising religions of Christianity and Islam, the relationship between religion and the state started to change. As a matter of righteousness and of its justification as a moral authority, the state now had to align itself with the religious beliefs that ruled most of the West, and ensuring salvation became the founding pillar of 'right religion'. This acceptance and spread led to the increased power and influence of Christian kings, alongside whom emerged a body of clerics who claimed to exercise on earth the spiritual ministry of God. Large amounts of money, land and property were donated by individuals, organisations and Christian rulers to the Church in the hope of maintaining good relations and being protected. This further increased the Church's influence, though a Church that owed so much to the Crown in donations and freedoms gradually tended to act as its propagandist and servant. The principle of 'Caesaropapism' was accepted by the Church in the Eastern Roman (Byzantine) Empire, confirming its subordination to an Emperor who was thought of as the ambassador of divine authority on earth. This claim of imperial supremacy in religious matters, however, soon led to conflict with the popes of the West, who were unhappy with such an imposition, and conflicts began between sovereigns and the papacy over the limits and jurisdiction of royal and papal power – both, of course, claiming to be guided by a divine mandate.


Perhaps the most famous of these clashes happened between Henry II of England and his Archbishop of Canterbury, Thomas Becket. The Church's power may have been at its peak during the pontificate of Innocent III, who claimed that the Holy Roman Emperor was subordinate to him. Innocent III pushed for Emperor Otto IV to be deposed and forced Philip II of France to reinstate his divorced second wife, Ingeborg of Denmark. He also placed England under an interdict and had King John (Lackland) excommunicated in order to secure the office of Archbishop of Canterbury for his candidate, Stephen Langton. These clashes of power and interest decreased, however, in the following years when the papacy was in dire need of royal help to defeat the Conciliar Movement – a movement within the Roman Catholic Church in Western Europe in the 14c and 15c which believed that final authority in spiritual matters resided with the Church as a company of Christians, embodied by a general church council, not solely with the Pope.

In other civilisations in the Middle-East, such as in Islamic territory that obeyed the laws of Islam’s sharia, conflicts between the professional religious classes and the rulers tended to be avoided since Islam has no priesthood. Religion and state were unified in the pursuit of what the Quran and the life of Muhammad qualified as the ‘pursuit of Islamic righteousness’. This however includes violent subjugation of all non-Muslims, oppression of women, obsolete traditions in direct conflict with modern human rights in all modern Western nations in relation to restrictions to women and indoctrination of violent political ideologies that are connected to the political teachings of Muhammad, mostly found in the sharia. Thus, the constant links between extremist groups promoting violence and major governments in the Middle-East with Islam as the main religious faith are a constant topic among cultured circles. Most Muslims however are similar in many ways, even on the borders of Europe, in Turkey similar to Saudi Arabia, most adhere and believe in the same ideology that Islam and the Sharia promotes and teaches, unsurprisingly many Islamic scholars too have turned out to have very dangerous views on Islam’s war on non-Islamic civilisations and non-Muslims. The Caliph claim was made in Istanbul by the Ottoman Sultan, or supreme head of all Sunni Muslims (Sunnis). The Shia form of Islam (Shiites) was ultimately associated and identified with the Safavid Sultans in Iran.

In Tibet, where Buddhism had been flourishing, monastic donations and a huge increase in the number of dedicated monks gave monastic leaders who were regarded as incarnations of the Buddha, such as the Dalai Lama and the Panchen Lama, ruling power in their country. In China and Japan the situation differed: religious beliefs tended instead to reinforce loyalty to the ruler. In China, Buddhism and, more particularly, Confucianism taught civic virtues, as did Buddhism and Shinto in Japan.

The Reformation

When Henry VIII of England abolished the payment of annates to Rome, denied the authority of the pope, proclaimed himself supreme head of the Church of England (1534) and went on to suppress the monasteries, the new King was simply carrying to extremes the traditions of his predecessors across Europe. Henry's Reformation was essentially an assertion of Divine Right Kingship – of complete power and trust in his legitimacy as an extension of God's ministry. It is worth noting that Henry VIII dealt as harshly with advocates of Lutheranism as with those who supported the pope: he had no doctrinal differences with Rome, he simply believed in the King as the only vice-regent of God on earth. The Reformation and Counter-Reformation revived the influence and power of religion in the domestic and international socio-cultural debates of the Western world, making the concept of a purely 'secular' power, for the time, completely inconceivable. Yet as the years went by, clashes between competing religious factions reached unprecedented heights.

The wars that religion brought to humankind

In the Western Christian world, the wars of religion quickly turned into a common phenomenon or justification to shed blood and die for, and they were all based on the firm religious belief that the opposing religious civilisation had no claim to existence and even more importantly should not have any jurisdiction let alone religious or cultural control over some very specific geographical points, as these were believed to have specific powers that could be manipulated for socio-cultural advantages, for example, the ‘crusade’ against the Albigenses in Southern France was simply justified as the French crown simply trying to extend its power. The movements known to most historians as ‘The Crusades’ were in fact directed against the Islamic Middle East who had been subjugating Western Europe & Christians for hundreds of years through deadly wars where many Christian women were raped, tortured and turned into sexual slaves while many Christian leaders were beheaded others forced into Islam. Religious motives in 16c and 17c even led to violence against fellow Western Christians, and as the years went wars were endless, reaching lethal genocidal levels where whole civilisations were wiped out – the remaining joining, converting to or being enslaved by the dominant [a seemingly ruthless spectacle where the cycle of evolution may have simply been the driving force among societies who were less sophisticated and more primal – or in touch with their aggressive instincts in matters of survival and conquest].

Even with all the deaths in the name of religion, these events did not persuade the societies of the time to contemplate a possible atheistic lifestyle or system; and this endured even late into the 17c. However, private and secret groups such as 'The Family of Love' (many of whose members were close to Philip II of Spain, a leading figure of the Counter-Reformation) had started to sow seeds of doubt over the motive and purpose of identifying state power with dogmatic religious beliefs and traditions.

An Enlightened, educated and revolutionary civilisation

Secularisation, as a term, could at first only be discussed in European-derived state systems, and the process began in the 18c among Christians of the Western world whose intellectuals stood with reason without showing preference for any particular school of thought, religious ones above all. The practice of secularisation was started by individuals who came from different schools of thought and sought to be guided by a more stable doctrine than religion or tradition. Others, like the Holy Roman Emperor Joseph II, were dedicated Christians who disagreed with the state acting as an authority for moral policing or the regulation of conscience [quite a perceptive stance, judging by the questionable reputation and credibility – in terms of morals and ethics – of practitioners of the obsolete discipline still termed 'politics']. Even more curiously, the reasoning and, for the time, avant-garde clergy of the Church of Scotland agreed, condemning the barbaric violence of the 17c religious wars as a blasphemous parody of Christianity. Furthermore, the growing movement fuelled and guided by the scientific and intellectual developments of the late 17c and the spirit of the Enlightenment remained sceptical about religion and its revelations; even Voltaire was a deist.


Religious Scale by GDP per capita

The Cult of Reason was sponsored as a replacement for Christianity when the Jacobins under Robespierre came to power in France, and the Gregorian calendar was replaced by a revolutionary, republican one in which the year 1792 became Year 1. The first 'secular' state in the Christian West became the federal government of the USA after 1783 – perhaps less a matter of principle than of necessity, since the foundation of society in the states was composed of immigrants deeply divided by religion, many of whom had faced persecution and death in the countries they had escaped, in times when no peacekeeping conventions existed to protect them or to sanction a State on the grounds of human rights.

Corruption at the top was also very much present as it still is today in politics in most non-Western societies, especially in Islamic territory where many States are strictly combined with the doctrines of Islam and its violent religious law, sharia, leading to many cases of State connections to extremist terrorists operating under the guise of Islam to protect and propagate the Islamic way of life and eventually subjugate all non-Muslims[with techniques used to abuse diplomacy and the dangerous concept of ‘political correctness’ to slowly infiltrate the law and system of other Western economies to prepare and push for Islamic doctrines to be applied on Western soil]. A situation getting worse today, as obsolete politicians lack the knowledge and education to understand and cope with the techniques of Political Islam which has long been the topic of Dr. Bill Warner’s work – to protect and prevent the atavistic and dangerous Islamisation of the West.

Logically, it seems obvious to most that 3rd world traditions would clash with First World values and individualism and today, many intellectuals and growing movements are beginning to support the complete separation of religious traditions and cultures through geographical relocation and diplomatic arrangement between States of various nations to work on solutions at the source and on location and completely stop the unsustainable and clearly abused systems of refugee relocation as Western societies are at their limits with major socio-cultural clashes and disruptions to First world national communities sparking major concerns over the security of women, children and the vulnerable older people faced with 3rd world migrants with a completely different school of thought, crowding many Western cities and locations where the never-ending clash of values, education, philosophy, language and culture seem to leave authorities contemplating at the only solution that may come with radical policies to preserve the socio-cultural make up and identity of their nations in the face of a destabilizing overgrowth of population from African and the 3rd world Islamic territories and the failure of Western States to adopt appropriate and if necessary tough measures to alleviate and balance the situation while securing their own systems and providing security for their people against socio-economic and cultural degradation.

The 19c

After the final defeat of Napoleon I in 1815, the conservative climate that followed saw the Catholic Church regain much of the credibility it had lost, and the identification and association of Church and State was seen by many intellectuals and movements of the Enlightenment as a bulwark against freedom and revolution. This resulted in a developing climate where bourgeois liberalism rose, owing to its tendency towards anticlericalism and its strong belief in a new system with a secular state free of sectarian affiliations, based on the US federal model.

France saw growing clashes over education between Church and State, as did most major Christian Western nations throughout the 19c. In 1828, the Test Act of 1673 was repealed, no longer requiring holders of public office [including military officers and elected representatives in Parliament] to be active members of the Church of England. Eventually, reason also won in France, where education became ‘compulsory, free and secular’ under the Third Republic after a series of acts passed between 1878 and 1886, with Jules Ferry as the main agitator spearheading the change. Elsewhere, nations such as Mexico, with an established and influential colonial Church, saw post-independence liberal views tend to demand the secularisation of the State.

As the 19c century was ending, secularism and anticlericalism grew in strength and supporters in many nations of the modern world spectated a rise of different branches of “Socialist” influenced movements. For example, the late American George L. Rockwell initiated a National Socialist movement in the US & embraced the derogatory term “NAZI” for its shock value. Although the American agitator clearly drifted far from the refined version of Adolf Hitler’s National Socialism, which initially emphasised strong moral/ethical philosophies, shared communal values at every level of society & synchronised psychosocial unity, Rockwell’s version of National Socialism seemed more appropriately adjusted to the industrialised society of America, focusing on the identity of the average hardworking American citizen and his/her relationship to the unscrupulous economic model that is at the foundation of the “Wild West”, i.e. the USA. Rockwell remains one of the only US public figures to have proposed a straightforward, practical & ethical direction in finding a harmonious solution to the Negro population problems affecting the US (which is now along with other foreign populations growing faster than the original white US population). George Lincoln Rockwell‘s vision matched that of the prominent visionary & avant-garde Black nationalist, Marcus M. Garvey, who founded the Pan-Africanism movement, the Universal Negro Improvement Association (UNIA) and the African Communities League (ACL).


Marcus M. Garvey, Jr. (1887 – 1940)

Garvey also founded the Black Star Line, a shipping and passenger line which promoted the return of the African people to their ancestral lands. “Garveyism” wanted all people of Black African ancestry to “redeem” the nations of Africa and for the European colonial powers to leave the African continent. Marcus Garvey’s essential ideas about Africa were stated in an editorial in “Negro World” entitled “African Fundamentalism“, where he wrote: “Our [negroes] union must know no clime, boundary, or nationality…

Darwinism and National Socialism  gave society an explanation of human rights and human history, and a model for progress where religion was not vital [but optional] and thus not a major concern. In France, the Dreyfus Affair united all the radical progressive elements and the leftist movements in French society against the then major section of the Right: the Catholic Right. The separation of Church and State finally happened in 1905.

The 20c

In the USSR, immediately after the Russian Revolution, the development of socialist-inspired secularism could be seen in the new secular state; however, a lack of vision, philosophy and fine management eventually led to its downfall.

One of the most innovative and stunning secular changes in the Muslim world came from Turkey’s founder who believed in secular western systematisation, Mustafa Kemal Ataturk, who in a revolutionary wave abolished the Sultanate and in 1924 abolished the office of the Caliph, the former spiritual head of the Ottoman Empire. Ataturk continued this avant-garde wave of secular changes by closing down all religious schools in Istanbul, and removed the Minister for Religion from the cabinet. Even more confidently, among the changes the modern and westernising founder made was the repealing of the provision in Turkish constitution that made Islam the state religion. From then, deputies would cease to take oaths in the name of Allah, but instead made a secular affirmation. However today with Turkish national representatives such as Recep Erdogan, the forward-thinking, productive and modernising changes of Ataturk have all been reversed and ruined by Erdogan’s atavistic policies that are oriented towards the Islamisation of the whole system and has even been linked and found to be unresponsive towards major anti-Western Islamic Jihadists who spread terror and violence across Western societies without any disregard for children. The retrograde and deluded Chancellor of Germany, Angela Merkel has also played a major part in the Islamisation of Western Europe by successfully being manipulated by Islamic territories’ humanitarian departments to take in excessive numbers of Muslim refugees [by the millions] for resettlement which have mainly been healthy Muslim males with no other objectives but to find support on the welfare systems of the West while also contributing in the Islamic doctrines that promote migration [hijra] in the name of Allah for the process of Jihad [which is a process that involves multiple techniques to subjugate all non-Muslim societies to gradually allow Islam’s doctrines to take over], in the ongoing war for the Islamisation of the West. 
This continued clash of values makes the secularisation of Turkey by Ataturk particularly striking since Islam’s ideologies continue to control most indoctrinated minds in the vast Islamic territory that continues to promote 3rd world ideologies and show firm stance against secularisation in Muslim countries and perhaps even more shockingly, in some parts of the West where urban and uncultured low-skilled Muslim communities have amassed – a known recruiting field for many extremist Middle-East groups such as ISIS [Daech, Islamic State] and a known breeding place for rapists who in many cases justify their heinous acts as religiously valid, being the teachings of Muhammad on the treatment of non-Muslims, who are deemed spiritually ‘inferior’ beings similarly to the teachings of Judaism where all non-Jews are believed to be inferior, destined to serve the Jewry and are completely disposable, perhaps more shockingly: non-Jews should even be killed. These two 3rd world religions have doctrines of behaviour towards other groups that are rooted in hate and violence. Hence, the early expulsion of the Jewish communities globally much before the Nazi regime or any of its founders were even born. A practice known as holocaust done in the name of the Jewish god Baal, involved sacrificing young male babies was hated by many non-Jewish intellectuals and societies throughout Western history. However, today the atavistic process that should have been inexistent or even annihilated, is ironically happening to modern societies at the verge of being completely secularised after their independence such as in the West: the process of Islamisation. 
Islamisation of the West which was founded and evolved on Christian values, and famous deist intellectuals such as Voltaire who placed reason before irrational claims of God [although not denying the existence of powers that may be Godly], are being forced into accepting millions of Muslim refugees known to be part of the process of Islamisation linked to major extremist and pro-Muslim association such as the Muslim Brotherhood [a group heavily linked with Barack Obama] who have links to the extreme left leaning seats in the United Nations. These dangerous extreme-left [not socialist] movements with religious affiliations have been finding ways to loosen the security of the West’s defence to infiltrate the ideologies of Islam through the process of cultural Jihad, which involves using techniques such as diplomacy, and twisting arms with the unscrupulous use of ‘political correctness’ to further the purpose of Islam, aided by the act of Taqqiya, which is promoted by Islamic ideology to deceive, lie and act in whatever way it may be required to promote Islam and eventually subjugate non-Islamic societies.

One of the most recent examples of complete Islamisation is Iran, where the 1979 overthrow of Shah Mohammad Reza Pahlavi ushered in an Islamic republic. This seeming ‘success’ of the Iranian Revolution led Islamic fundamentalists in other economies such as Pakistan, Egypt and Algeria to believe in a similar future, these already being countries where governments make concessions to religious militants, as both support the ideology of Islam.

In some countries, many Islamic terrorists have justified their acts as populist alternatives to what they perceive as corrupt, dictatorial regimes that lack compassion and righteousness. Others have questioned righteousness from the perspective of Islamic ideologies that involve beheading, mass terror and other inhuman practices on non-Muslims in the name of Allah as the teachings of Muhammad, a controversial prophet who consummated a marriage to a 6-year old when the latter was nine [even the practice and promotion of what most Western minds would perceive as paedophilia has seen a near complete silence from most authorities in the west for fear of repercussions such as accusations of racism, lack of political correctness or xenophobia, all forms of speech suppression that have started to raise more voices among many people who believe that Islamisation is incompatible, dangerous and unsustainable – massive causes of systematic socio-cultural and economic degradation].


In order to move towards a system of management that includes government, replaces the obsolete concept of politics and restores credibility to decision-making based on reason and science, balanced with the right philosophy for the expectations of a given time, the mainstream mindset will have to accept reason, rather than religion, as the more fitting compass to guide a civilised society.

Although most [mentally sound] individuals should have the freedom to choose where to place their faith [religion, science, philosophy, etc.], secularisation would at least ensure that the state balances its priorities appropriately without discriminating against those who should be prioritised.

The State may initiate a workable but firm control over the appropriate influx of immigrants by specific religious groups to maintain and not discriminate the national cultural identity of the foundation [religion would simply be a part of culture and not a reigning authority synthesised with most departments of the state] while adapting to changes that socio-cultural economic developments and research lead to [however careful consideration over the purpose and benefits must remain of vital importance and focus].


As the system of democracy still gives a voice to the masses, it is also fair to note that majority votes do not decide or confirm the degree of righteousness of a particular thought or decision. In fact, majority votes simply record the general ‘views on a specific topic’ of a particular group of human organisms in a particular geographical location on earth. In disciplines such as medicine, physics, chemistry and other science-based studies, majority votes mean nothing; there, only reason wins, with conclusions based on logic. A well-designed secular state might therefore include a decision-making structure that matches each decision to the concept that applies to it: matters of professional disciplines could be approved by the relevant boards of professionals (by their field), decisions on socio-cultural matters would benefit from public opinion, matters of economy would be supervised by a board of economy, and so on. This may eventually lead to a system that relies only on democratic values and management, and hardly any politics [where ‘regional representatives by area’ might be a better description].

The USA’s secular government has so far demonstrated to be far from perfect with major differences in opinion on a range of issues regarding military ethics during World War 2 where Eisenhower sadistically allowed thousands of Germans to die in starvation in his very own ‘death camps’, and other claims of secrecy with Churchill & Stalin in a German boycott along with the ongoing national socio-cultural conflicts with the Islamisation of the USA by the Obama regime – open promoters of Muslims and Islam in the West. The deistic Founding Fathers of the USA’s secular government would definitely be surprised at the influence of orthodox, evangelical Christianity of various kinds in the modern but over-liberal republic. Although it may if appropriate to consider the fact that secular states will somehow forever have religious roots, and while some may not be practising Christians, most of Western literature are full of biblical references. Major festivities such as Christmas have turned into a symbol of celebration and gifts for Western societies more than a religious observance, and it unites and benefits more than only Christians in many major societies of the West – especially economically for most businesses.


Secularisation in everyday life in an increasingly post-Christian Europe

Nowadays, most educated societies in the developed nations of the West are partially entrapped in the global economy, and a great section of the people have looked for alternative contexts for their lives, alternative ways to mark rites of passage and, more importantly, other doctrines by which to be cultured and guided; the importance of the religious dimension in public and private life has decreased considerably.


Munich by Harry Schiffer

Major changes in Britain included the 1836 Marriage Act, which for the first time allowed marriages to be solemnised by means other than a religious ceremony. On 1 July 1837, six hundred district offices opened as the act came into force, along with an ongoing set of necessary changes. By 1857, divorce was obtainable in the UK by means other than an Act of Parliament – although not easily, and on far stricter grounds for wives than for husbands. Such changes, along with liberalised attitudes to legislation on matters such as abortion, have long been opposed by the Church, especially in Catholic countries. Nowadays, the number of people relying less on religion as a guide is ever increasing, notably in developed economies with decent education systems supported, since the early 1990s, by the World Wide Web developed by Tim Berners-Lee. Thus the knowledge of science has become more widespread, along with its application to modern culture – leading to a new orthodoxy.


The triumphs of technology have also made life for the secular minds fairly comfortable and safe in the developed world [although a lot of work remains to be done at the systematic level regarding economic policies, socio-cultural and philosophical beliefs and directions in some atavistic States of the West to counter the now dangerously increasing waves of Islamisation].

Perhaps a painful reality for those raised in a sophisticated, science-oriented philosophical circle, or tutored with a conservative education but a liberal outlook in the West or in Western-derived systems, or in the ever more secular societies of the UK, France and Western Europe, is that globally we remain the ‘minority’ and are seen as an ‘exception’ when compared with the majority of humans living on earth.

That may send visions of the inundation of migrants from poorly managed nations of the 3rd world Middle-East and Africa who also play a major part on the low socio-economic birth rate explosion and consequent socio-cultural burden on global humanitarian budgets expected to cause major economic and socio-cultural unrest for the West in the coming future if situations do not change. Sadly for the secular intellectuals today is the fact that in most lesser developed societies of the world, the great religions, the smaller ones, and a series of traditional beliefs [some as illogical and ridiculous to reason or intelligence] continue to give a reason to live and subsequently meaning to the lives of many communities who are born and live in a completely different psycho-social reality fused with religious beliefs of ancient cultures [specially in the 3rd world and/or Islamic territories].

The progressive & ethical solution to deal with the alarming situation

Since, engineering environmentally also applies to the human organism, maximising the potential of humans according to their best environmental (socio-cultural) fit would seem like the most globally progressive philosophy. However, engineering our planet in terms of human abilities would also side with relocating populations to alleviate their own stress caused by incompatibility in terms of culture, language, identity and skills – a process that goes in line with evolutionary logic, but also fosters a harmonious human ecosystem with less tension, thus less stress [mental health & health].


Cooperation on matters beneficial for both states would be achieved from synchronised work from respective locations [e.g. nature, environment, climate change, business, etc]. This would alleviate systems that lack stability due to massive population imbalance and socio-cultural conflicts caused majorly by uncontrolled geographical shifts and the birth rates that follow, leading to ‘organisms’ [from an objective perspective] that do not ‘identify’ with the system that they were born into, but see themselves as part of an ‘external system and its school of thought’, who mostly earn and live to promote the latter system and flood the current one with further external and incompatible organisms.

This continuous unregulated & unsustainable process of mass-migration & mass low-SES births add to the ongoing burden of socio-cultural conflict and economic degradation due to the sole motivating factor being foreign interest [mostly 3rd world & developing economies] in economic resources from Western systems while remaining ‘foreign’ and indifferent to public/civic expectations socio-culturally [due to a lack of linguistic proficiency and other low-SES complications such as quality education, linguistic acculturation, etc]. Such issues in uncontrollable amounts that reflect in most aspects of a society have shown to lead to systemic instability, fragmentation and low social-cohesion mostly linked to differences in belief systems created by heritage or indoctrination of beliefs from incompatible systems through exposure.


Top Minority Languages by Country


Foreign Language people consider the most useful for Personal Development

Once more, from an objective perspective and through the humble logic of observation, any system from any part of the world would face degradation with excessive sections of their population not focused in contributing in its protection, promotion, strength and stability – a simple matter of factual reasoning, an e.g. of such a statement would be “If an egg is released from a metre on hard floor, it will fall and break.”.  With geographical engineering, it seems to simply be a matter of re-assessing and replacing  ‘organic units’ with ones that are reliable in terms of stability, compatibility and long term development [experience] – a clear example of progressive innovation. A simple case of synthesising the knowledge gained from science and applying its philosophy to prevent further catastrophes while correcting the dangerous path of the present.





A New Era for Management may be near: UK & France rank low for trust in government

Updated: 2nd of July, 2017 | Danny J. D’Purb |



Lenman, B. and Marsden, H. (2005). Chambers dictionary of world history. Edinburgh: Chambers.


While the aim of the community at has  been & will always be to focus on a modern & progressive culture, human progress, scientific research, philosophical advancement & a future in harmony with our natural environment; the tireless efforts in researching & providing our valued audience the latest & finest information in various fields unfortunately takes its toll on our very human admins, who along with the time sacrificed & the pleasure of contributing in advancing our world through sensitive discussions & progressive ideas, have to deal with the stresses that test even the toughest of minds. Your valued support would ensure our work remains at its standards and remind our admins that their efforts are appreciated while also allowing you to take pride in our journey towards an enlightened human civilization. Your support would benefit a cause that focuses on mankind, current & future generations.

Thank you once again for your time.

Please feel free to support us by considering a donation.


The Team @

Donate Button with Credit Cards

Essay // Biopsychology: Children & Impulsiveness

Image: PsyBlog

The frontal lobe, responsible for most executive functions and attention, has been shown to take at least 20 years to develop fully. The frontal lobe [located behind the forehead] is involved in thought and voluntary behaviour such as motor skills, emotion, problem-solving and speech.

In childhood, as the frontal lobe develops, new functions are constantly added; the brain’s activity in childhood is so intense that it uses nearly half of the calories the child consumes during development.

As the prefrontal cortex is believed to take at least 20 years to reach maturity (Diamond, 2002), children’s impulsiveness seems to be linked to neurological factors in the prefrontal cortex; particularly, their [sometimes] inability to inhibit responses.


“Our real problem is: what is the goal of education? Are we forming children who are only capable of learning what is already known? Or should we try to develop creative and innovative minds capable of discovery from the preschool age through life?” – Jean Piaget (1896 – 1980)

The idea is supported by the developmental psychologist and philosopher Jean Piaget [known for his epistemological studies], whose Theory of Cognitive Development showed that the A-not-B error [also known as the “stage 4 error” or “perseverative error”] is mostly made by infants during substage 4 of the sensorimotor stage.

Researchers used two boxes, marked A and B; the experimenter repeatedly hid a visually attractive toy under Box A, within the infant’s reach [for the latter to find]. After the infant had been conditioned to look under Box A, on the critical trial the experimenter moved the toy under Box B.

Children of 10 months or younger make the “perseverative error”: they look under Box A despite having fully seen the experimenter move the toy under Box B, demonstrating an incomplete schema of object permanence [unlike adults with fully developed frontal lobes].


Frontal lobe development in adults has been compared with that in adolescents, e.g. Sowell et al. (1999) and Giedd et al. (1999), who noted differences in grey matter volume and in white matter connections. Adolescents are likely to show weaker response inhibition and executive attention than adults. There has also been a growing and ongoing interest in research on the adolescent brain, where great differences in some areas are being discovered.

The prefrontal cortex [located behind the forehead] is essential for ‘mentalising’ in complex social and cognitive tasks. Wang et al. (2006) and Blakemore et al. (2007) provided further evidence of the difference in prefrontal activity during ‘mentalising’ between adolescents and adults. Anderson, Damasio et al. (1999) also noted that patients with very early damage to their frontal lobes suffered throughout their adult lives.


Two subjects with frontal lobe damage were studied:

1) Subject A: a 20-year-old female patient who had suffered damage to her frontal lobe at 15 months was observed to be disruptive throughout adult life; she also lied, stole, was verbally and physically abusive to others, had no career plans and was unable to remain in employment.

2) Subject B: a 23-year-old male who had sustained damage to his frontal lobe at 3 months of age; he turned out to be unmotivated and emotionally flat with bursts of anger, slacked in front of the television while comfort eating, and ended up obese, in poor hygiene, and unable to maintain employment. [However…]


While research has demonstrated a link between frontal brain damage and changes in personality traits and mental abilities, physiological defects of the frontal lobe are likely to be associated with certain traits deemed negative for a subject wishing to be a functional member of society [generally Western societies].

However, personality traits similar to those of Subjects A and B may not always be linked to deficiency and/or damage to the frontal lobes; many other factors must be considered when assessing the behaviour and personality traits of subjects. Violence and a short temper may [at times] be linked to a range of factors and environmental events during development, or to other mental strains such as sustained stress, emotional deficiencies due to abnormal brain neurochemistry, genetics, or other factors that can lead to intense emotional reactivity [such as provocation, or certain themes/topics that have high emotional salience to particular subjects, ‘passion’].




Anderson, S.W., Bechara, A., Damasio, H., Tranel, D., Damasio, A.R. (1999). Impairment of social and moral behaviour related to early damage in human prefrontal cortex. Nat Neurosci, 2(11), 1032-7

Blakemore, S.J., Den Ouden, H., Choudhury, S., Frith, C. (2007). Adolescent development of the neural circuitry for thinking about intentions. Social Cognitive and Affective Neuroscience, 2(2), 130-9

Diamond, A. (2002). Normal development of prefrontal cortex from birth to young adulthood: cognitive functions, anatomy, and biochemistry. In: Stuss, D.T., Knight, R.T., editors. Principles of frontal lobe function. New York: Oxford University Press, pp. 466-503

Giedd, J.N., Blumenthal, J., Jeffries, N.O., Castellanos, F.X., Liu, H., Zijdenbos, A., et al. (1999). Brain development during childhood and adolescence: a longitudinal MRI study. Nat Neurosci, 2, 861-863

Miller P, Wang XJ (2006) Inhibitory control by an integral feedback signal in prefrontal cortex: A model of discrimination between sequential stimuli. Proc Natl Acad Sci USA, 103(1), 201-206

Sowell ER, Thompson PM, Holmes C.J., Jernigan, T.L., Toga A.W. (1999). In vivo evidence for post-adolescent brain maturation in frontal and striatal regions. Nat Neurosci, 2, 859-861

24.01.2016 | Danny J. D’Purb |



Essay // Psychology: The Concept of Self


The concept of the self will be explored in this essay – where it comes from, what it looks like and how it influences thought and behaviour. Since self and identity are cognitive constructs that influence social interaction and perception, and are themselves partially influenced by society, the material of this essay connects to virtually all aspects of psychological science. The self is an enormously popular focus of research (e.g. Leary and Tangney, 2003; Sedikides and Brewer, 2001; Swann and Bosson, 2010). A 1997 review by Ashmore and Jussim reported 31,000 social psychological publications on the self over a two-decade period to the mid-1990s, and there is now even an International Society for Self and Identity and a scholarly journal imaginatively entitled Self and Identity.


The concept of the “self” is a relatively new idea in psychological science. Roy Baumeister (1987) painted a picture of a medievally organised society in which most people’s identities were fixed and predefined by rigid social relations and legitimised by religious affiliation [family membership, social rank, birth order, place of birth, etc.]; modern perspectives adopted by scholars and innovative psychologists have since moved beyond such conceptions. The idea of a complex and sophisticated individual self lurking underneath would have been difficult, if not impossible, to entertain under such assumptions about social structures defining the individual human organism.

However, all this changed from the 16th century, with momentum gathering ever since from forces such as:

Secularisation – where the idea that fulfilment occurs in the afterlife was replaced by the idea that one should actively pursue personal fulfilment in this life

Industrialisation – where human beings were increasingly seen as individual units of production, moving from place to place with their own “portable” personal identity, no longer locked into static social structures such as the extended family

Enlightenment – where people felt they were solely responsible for choosing, organising and creating better identities for themselves by overthrowing orthodox value systems and oppressive regimes [e.g. the French revolution and the American revolution of the late 18th century]


Psychoanalysis – Freud’s theory of the human mind unleashed the creative individual with the notion that the self was unfathomable because it lived in the depths of the unconscious [e.g. the theory of social representations invokes psychoanalysis as an example of how a novel idea or analysis can entirely change how people think about their world (Moscovici, 1961; see Lorenzi-Cioldi and Clémence, 2001)]


Together, these and other socio-political and cultural influences led society to think about the self and identity as complex subjects, and theories of self and identity propagated and flourished in this fertile soil.

As far as self and identity are concerned, one pervasive finding concerns cultural differences. The so-called “Western” world – Western Europe, North America and Australasia – tends to be individualistic, whereas most other cultures, such as those of Asia, South America and Africa, are collectivist (Triandis, 1989; see also Chiu and Hong, 2007; Heine, 2010, 2012; Oyserman, Coon and Kemmelmeier, 2002). The anthropologist Geertz puts it beautifully:

“The Western conception of the person as a bounded, unique, more or less integrated, motivational and cognitive universe, a dynamic centre of awareness, emotion, judgement, and action organized into a distinctive whole and set contrastively both against other such wholes and against a social and natural background is, however incorrigible it may seem to us, a rather peculiar idea within the context of the world’s cultures.”

Geertz (1975, p.48)


Markus and Kitayama (1991) describe how those from individualistic cultures tend to have an independent self, whereas people from collectivist cultures have an interdependent self. Although in both cases people seek a clear sense of who they are, the [Western] independent self is grounded in a view of the self as autonomous, separate from other people and revealed through one’s inner thoughts and feelings. The [Eastern] interdependent self, on the other hand, tends to be grounded in one’s connection to and relationships with other people [expressed through one’s roles and relationships]. As Gao explained: ‘Self… is defined by a person’s surrounding relations, which often are derived from kinship networks and supported by cultural values based on subjective definitions of filial piety, loyalty, dignity, and integrity’ (Gao, 1996, p. 83).

From a conceptual review of the cultural context of self-conception, Vignoles, Chryssochoou and Breakwell (2000) conclude that the need to have a distinctive and integrated sense of self is likely universal. However, the term “self-distinctiveness” rests on very different assumptions in individualist and collectivist cultures. In the individualist West, separateness adds meaning and definition to the isolated and bounded self. In collectivist Eastern cultures, the self is relational and gains meaning from its relations with others.


One account links historical conceptions of self, and the origins of individualist and collectivist cultures with their associated independent and interdependent self-conceptions, to economic conditions. The labour market is an example: mobility served industry by treating humans as “units” of production who were expected to shift their geographical locations from places of low labour demand to those of higher demand, organising their lives, relationships and self-concepts around mobility and transient relationships.

[Image: New York construction workers lunching on a crossbeam – construction workers eat their lunches atop a steel beam 800 feet above ground, at the building site of the RCA Building in Rockefeller Center.]

Independence, separateness and uniqueness have become more important than connectedness and long-term maintenance of enduring relationships [values that seem to have become pillars of modern Western Labour Culture – self-conceptions reflect cultural norms that codify economic activity].

Applied to the modern individual, however, this logic seems to offer more routes to personal and professional development, more options for continuously nurturing an evolving self-conception through expansive social experience and cultural exploration, and a philosophy that places more power of self-defined identity in the hands of the individual.


Now that some basic concepts and origins of the “self”, along with its importance and significance to psychological science, have been covered, we are going to explore two creative ways of learning about ourselves.

Firstly, there is the concept of self-knowledge, which involves storing information about ourselves in a complex and varied way in the form of schemas: information about the self is assumed to be stored cognitively as separate, context-specific nodes, such that different contexts activate different nodes and thus different aspects of self (Breckler, Pratkanis and McCann, 1991; Higgins, van Hook and Dorfman, 1988). The concept of self emerges from widely distributed brain activity across the medial prefrontal cortex and medial precuneus (e.g. Saxe, Moran, Scholz, and Gabrieli, 2006). According to Hazel Markus, the self-concept is neither “a singular, static, lump-like entity” nor a simple averaged view of the self – it is complex and multi-faceted, with a relatively large number of discrete self-schemas (Markus, 1977; Markus and Wurf, 1987).
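The node-based picture of self-knowledge above can be sketched as a toy associative structure. This is purely illustrative (the contexts and attributes are invented assumptions, not a validated cognitive model): a context cue activates only the self-schema nodes linked to it, so different contexts surface different aspects of self.

```python
# Toy sketch: self-knowledge as context-specific nodes.
# The contexts and attributes below are invented for illustration.

self_schemas = {
    "work":   {"organised", "ambitious"},
    "family": {"caring", "patient"},
    "sport":  {"competitive", "energetic"},
}

def active_self(context):
    """Return the aspects of self activated by a given context node."""
    return self_schemas.get(context, set())

# Different contexts activate different aspects of self:
print(sorted(active_self("work")))    # ['ambitious', 'organised']
print(sorted(active_self("family")))  # ['caring', 'patient']
```

The design choice of a dictionary keyed by context mirrors the claim that activation is context-driven rather than a single static self being retrieved wholesale.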


Most individuals tend to have clear conceptions of themselves on some dimensions but not others – we are generally more self-schematic on dimensions that hold more meaning for us. For example, if one thinks of oneself as sophisticated and being sophisticated is important to oneself, then one is self-schematic on that dimension [it is part of one’s self-concept]; if not, it is not part of one’s self-concept. It is widely believed that most people have a complex self-concept with a large number of discrete self-schemas. Patricia Linville (1985, 1987; see below) has suggested that this variety helps to buffer people from life’s negative impacts by ensuring that enough self-schemas are available for the individual to maintain a sense of satisfaction. We can be strategic in the use of our self-schemas – Linville captured this colourfully with the advice: “don’t put all your eggs in one cognitive basket.” Self-schemas influence information processing and behaviour much as schemas about others do (Markus and Sentis, 1982): self-schematic information is more readily noticed, is overrepresented in cognition and is associated with longer processing time.
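Linville’s buffering idea can be illustrated with a toy calculation (an assumption-laden sketch, not her actual self-complexity statistic): a negative event that strikes one self-aspect spills over only to those selves that contain it, so a person with many independent selves is proportionally less affected.

```python
# Toy illustration of Linville's buffering hypothesis.
# Selves are represented, by assumption, as sets of attributes.

def affected_fraction(selves, hit_attribute):
    """Fraction of one's selves touched by a blow to a single attribute."""
    hit = [s for s in selves if hit_attribute in s]
    return len(hit) / len(selves)

# Low self-complexity: few, overlapping selves.
simple = [{"lawyer", "driven"}, {"runner", "driven"}]

# High self-complexity: many, largely independent selves.
complex_ = [{"lawyer", "driven"}, {"runner", "relaxed"},
            {"parent", "playful"}, {"musician", "creative"}]

print(affected_fraction(simple, "driven"))    # 1.0  - the whole self is shaken
print(affected_fraction(complex_, "driven"))  # 0.25 - the blow is buffered
```

The contrast between the two outputs is the “eggs in one cognitive basket” point: independence between self-aspects limits how far a setback propagates.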


Self-schemas do not only describe how we are; we are also believed to have an array of possible selves (Markus and Nurius, 1986) – future-oriented schemas of what we would like to become, or what we fear we might become. For example, a scholar completing a postgraduate degree may contemplate a career as a lecturer, writer, entrepreneur, politician, actor or rock musician. Higgins (1987) proposed self-discrepancy theory, suggesting that we have three major types of self-schema:

  • The actual self – how we are
  • The ideal self – how we would like to be
  • The ‘ought’ self – how we think we should be

Discrepancies between the actual, ideal and/or ought selves can motivate change to reduce the discrepancy – in this way we engage in self-regulation. Furthermore, self-discrepancy and the general notion of self-regulation have been elaborated into regulatory focus theory (Higgins, 1997, 1998). This theory proposes that most individuals have two separate self-regulatory systems, termed promotion and prevention. The promotion system is concerned with the attainment of one’s hopes and aspirations – one’s ideals. Those in a promotion focus adopt approach strategic means to attain their goals [e.g. promotion-focused students would seek ways to improve their grades, find new challenges and treat problems as interesting obstacles to overcome]. The prevention system is concerned with the fulfilment of one’s duties and obligations. Those in a prevention focus use avoidance strategic means to attain their goals [e.g. prevention-focused students would avoid new situations or new people and concentrate on avoiding failure rather than achieving the highest possible grade].
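The actual/ideal/ought comparison at the core of self-discrepancy theory can be sketched as a toy set difference. The attribute sets and the set representation are illustrative assumptions, not Higgins’ formalism; the sketch only shows how a discrepancy identifies what is “missing” from the actual self relative to each self-guide.

```python
# Toy sketch of Higgins' self-discrepancy idea, with invented attributes.

actual = {"hardworking", "shy"}        # how we are
ideal  = {"hardworking", "confident"}  # how we would like to be
ought  = {"hardworking", "punctual"}   # how we think we should be

def discrepancy(actual_self, self_guide):
    """Attributes of a self-guide not yet realised in the actual self."""
    return self_guide - actual_self

print(discrepancy(actual, ideal))  # actual-ideal gap, linked to promotion focus
print(discrepancy(actual, ought))  # actual-ought gap, linked to prevention focus
```

Reducing either returned set to empty corresponds to the self-regulation the theory describes: change that closes the gap between the actual self and a self-guide.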


Whether an individual is more promotion- or prevention-focused is believed to stem from childhood (Higgins and Silberman, 1998). A promotion focus may arise if children are habitually hugged and kissed for behaving in a desired manner and love is withdrawn as a form of discipline. A prevention focus may arise if children are encouraged to be alert to potential dangers and punished when they display undesirable behaviours. Against this background of individual differences, however, regulatory focus has also been observed to be influenced by the immediate context, for example by structuring the situation so that participants focus on prevention or on promotion (Higgins, Roney, Crowe and Hymes, 1994). Research has also revealed that those who are promotion-focused are more likely to recall information relating to the pursuit of success by others (Higgins and Tykocinski, 1992). Lockwood and her associates found that promotion-focused individuals look for inspiration to positive role models who emphasise strategies for achieving success (Lockwood, Jordan and Kunda, 2002). Such individuals also show elevated motivation and persistence on tasks framed in terms of gains and non-gains (Shah, Higgins and Friedman, 1998). At the other end of the spectrum, prevention-focused individuals tend to recall information relating to the avoidance of failure by others, are most inspired by negative role models who highlight strategies for avoiding failure, and exhibit motivation and persistence on tasks framed in terms of losses and non-losses. Regulatory focus has also been studied in intergroup relations (Shah, Higgins and Friedman, 1998): a measured or manipulated promotion focus strengthens positive emotion-related bias and behavioural tendencies towards the ingroup, whereas a prevention focus strengthens negative emotion-related bias and behavioural tendencies against the outgroup (Shah, Brazy and Higgins, 2004).



The second way of learning about the self is through understanding our “many selves” and multiple identities. In the book The Concept of Self, Kenneth Gergen (1971) depicts the self-concept as containing a repertoire of relatively discrete and often quite varied identities, each with a distinct body of knowledge. These identities have their origins in the vast array of social relationships that form, or have formed, the anchoring points for our lives: close personal relationships with professionals, mentors and trusted friends; roles defined by skills, fields, divisions and categories; and relationships fully or partially defined by language, geography, culture [and sub-culture], group values, philosophy, religion, gender and/or ethnicity. Linville (1985) also noted that individuals differ in self-complexity, in the sense that some individuals have a more diverse and extensive set of selves than others – those with many independent aspects of self have higher self-complexity than those with a few, relatively similar, aspects of self. The notion of self-complexity is given a rather different emphasis by Marilynn Brewer and her colleagues (Brewer and Pierce, 2005; Roccas and Brewer, 2002), who focus on the self that is defined in group terms (social identity) and on the relationships among identities rather than the number of identities an individual has.


They argued that individuals have a complex social identity if they have discrete social identities that do not share many attributes, but a simple social identity if they have overlapping social identities that share many attributes. Grant and Hogg (2012) have recently suggested, and empirically shown, that the effect on group identification and group behaviour of the number of identities one has, and of their overlap, may be better explained in terms of the general property of social identity prominence – how subjectively prominent, overall and in a specific situation, a particular identity is in one’s self-concept. Social identity theorists (Tajfel and Turner, 1979) argued that there are two broad classes of identity that define different types of self:

(i) Social identity [which defines the self in terms of a particular group membership, if any meaningful ones exist for the individual], and

(ii) Personal identity [which defines the self in terms of idiosyncratic traits and close personal relationships with specific individuals or groups, which may be more than physical or social – e.g. the strength of one’s mental association with specific others on specific tasks].
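The idea that social identity complexity is low when group identities overlap heavily can be sketched numerically. The Jaccard-style overlap measure below is an illustrative assumption, not Roccas and Brewer’s published index; it simply makes the complex/simple contrast concrete.

```python
# Hedged sketch: social identity complexity as (inverse) overlap between
# group memberships. Attribute sets are invented for illustration.

def overlap(a, b):
    """Jaccard overlap between the attribute sets of two identities."""
    return len(a & b) / len(a | b)

def identity_complexity(identities):
    """Higher when identities share few attributes (low mean pairwise overlap)."""
    pairs = [(a, b) for i, a in enumerate(identities)
             for b in identities[i + 1:]]
    return 1 - sum(overlap(a, b) for a, b in pairs) / len(pairs)

# Simple social identity: identities that share many attributes.
simple = [{"urban", "liberal", "academic"}, {"urban", "liberal", "cyclist"}]

# Complex social identity: discrete identities with little in common.
complex_ = [{"urban", "academic"}, {"rural", "hunter"}]

print(identity_complexity(simple))   # 0.5
print(identity_complexity(complex_)) # 1.0 - more complex social identity
```

The measure behaves as the text describes: heavily overlapping identities collapse towards a single simple identity, while discrete ones yield high complexity.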

The first main focus question here was asked by Brewer and Gardner (1996) – ‘Who is this “we”?’ – and they distinguished three forms of self:

  • Individual self – based on personal traits that differentiate the self from all others
  • Relational self – based on connections and role relationships with significant/meaningful others
  • Collective self – based on group membership [which can depend on many criteria] that differentiates ‘us’ from ‘them’

More recently it has been proposed that there are four types of identity (Brewer, 2001; Chen, Boucher and Tapias, 2006):

  • Personal-based social identities – emphasising the way that group properties are internalised by individual group members as part of their self-concept
  • Relational social identities – defining the self in relation to specific other people with whom one interacts [may not be physical or social only] in a group context – corresponding to Brewer and Gardner’s (1996) relational identity and to Markus and Kitayama’s (1991) ‘interdependent self’.
  • Group-based social identities – equivalent to social identity as defined above [sense of belonging and emotional salience for a group is subjective]
  • Collective identities – referring to a process whereby those who consider themselves group members not only share self-defining attributes, but also engage in social action to forge an image of what the group stands for and how it is represented and viewed by others.


The relational self [for those who choose to be defined partly by others] is a particularly interesting concept, as it can also be considered a particular type of collective self. As Masaki Yuki (2003) observed, some groups and cultures (notably East Asian cultures) define groups in terms of networks of relationships. Research has also revealed that women tend to place greater importance than men on their relationships with others in a group (Seeley, Gardner, Pennington and Gabriel, 2003; see also Baumeister and Sommer, 1997; Cross and Madson, 1997).

Evidence for the existence of multiple selves comes from research in which contextual factors were varied, revealing that most individuals describe themselves and behave differently in different contexts. In one experiment, participants were led to describe themselves in very different ways by being asked loaded questions, which prompted them to search their stock of self-knowledge for information that presented the self in a different light (Fazio, Effrein and Falender, 1981). Other researchers have found, time and time again, that experimental procedures that focus on group membership lead people to act very differently from procedures that focus on individuality and interpersonal relationships. Even in “minimal group” studies, in which participants are either (a) identified as individuals or (b) explicitly categorised, randomly or by some minimal or trivial criterion, as group members (Tajfel, 1970; see Diehl, 1990), a consistent finding is that being categorised tends to lead people to discriminate against an outgroup, conform to ingroup norms, express attitudes and feelings that favour the ingroup, and indicate a sense of belonging and loyalty to the ingroup.


Furthermore, these effects of minimal group categorisation are generally very fast and automatic (Otten and Wentura, 1999). The idea that we may have many selves, and that contextual factors can bring different selves into play, has a number of ramifications. Social constructionists have suggested that the self is entirely situation-dependent. An extreme form of this position argues that we do not carry self-knowledge around in our heads as cognitive representations at all, but rather construct disposable selves through talk (e.g. Potter and Wetherell, 1987). A less extreme version was proposed by Penny Oakes (e.g. Oakes, Haslam and Reynolds, 1999), who does not emphasise the role of talk but still maintains that self-conception is highly context-dependent. It is argued that most people have cognitive representations of the self that they carry in their heads as organising principles for perception, categorisation and action, but that these representations are temporarily, or more enduringly, modified by situational factors (e.g. Abrams and Hogg, 2001; Turner, Reynolds, Haslam and Veenstra, 2006).


Although we have a diversity of relatively discrete selves, we also have a quest: to find and maintain a reasonably integrated picture of who we are. Self-conceptual coherence provides us with a continuing theme for our lives – an ‘autobiography’ that weaves our various identities and selves together into a whole person. Individuals who have highly fragmented selves (e.g. some patients with schizophrenia, amnesia or Alzheimer’s disease) find it very difficult to function effectively. People use many strategies to construct a coherent sense of self (Baumeister, 1998). Here is a list of some that we have used ourselves.

Sometimes we restrict our life to a limited set of contexts. Because different selves come into play in different contexts, restricting our contexts protects us from self-conceptual clashes.

Other times, we continuously keep revising and integrating our ‘biographies’ to accommodate new identities. Along the way, we dispose of any meaningless inconsistencies. In effect, we are rewriting our own history to make it work to our advantage (Greenwald, 1980).

We also tend to attribute changes in the self externally, to changing circumstances [e.g. educational achievements, professional circles, industry, etc.], rather than internally, to changes in who we are. This is an application of the actor-observer effect (Jones and Nisbett, 1972).

In other cases, we can develop self-schemas that embody a core set of attributes that we feel distinguishes us from all other people – that makes us unique (Markus, 1977). We then tend to recognise these attributes disproportionately in all our selves, providing a thematic consistency that delivers a sense of a stable and unitary self (Cantor and Kihlstrom, 1987). To sum up, individuals tend to construct their lives such that their self-conceptions are both steady and coherent.

One of the major elements in the conception of self is the ability to communicate through language, whose varying degrees of granularity hold a major role in social identity.

The remaining part of this essay will focus on the power and importance of language as the essence of the human being.


The Essence of the Modern Human Being: Language, Psycholinguistics & Self-Definition

Human communication is completely different from that of other species, as it allows virtually limitless amounts of ideas to be expressed by combining finite sets of elements (Hauser, Chomsky, & Fitch, 2005; Wargo, 2008). Other species [e.g. apes] do have communicative methods, but none of them compares with human language. For example, monkeys use unique warning calls for different threats, but never combine these calls to express new ideas. Similarly, birds and whales sing complex songs, but creative recombination of these sounds to express new ideas has not occurred to these animals either.


As a system of symbols, language lies at the heart of social life and all its multitude of aspects in social identity. Language may be at the essence of existence if explored through the philosopher Descartes’ most famous quote, “Cogito ergo sum” – Latin for “I think, therefore I am” – as thought is believed to be experienced and entertained in language. The act of thinking often involves an inner personal conversation with oneself, as we tend to perceive and think about the world in terms of linguistic categories. Lev Vygotsky (1962) believed that inner speech was the medium of thought and that it was interdependent with external speech [the medium of social communication]. This interdependence would lead to the logical conclusion that cultural differences in language and speech are reflected in cultural differences in thought.

In the theory of linguistic relativity devised by the linguists Edward Sapir and Benjamin Whorf, a more extreme version of that logic was proposed. Brown writes:

Linguistic relativity is the reverse of the view that human cognition constrains the form of language. Relativity is the view that the cognitive processes of a human being – perception, memory, inference, deduction – vary with structural characteristics – lexicon, morphology, syntax – of the language [one speaks].


René Descartes (1596–1650) was not only one of the most prominent philosophers of the 17th century but one of the most prominent in the history of Western philosophy. Often referred to as the “father of modern philosophy”, Descartes profoundly influenced intellectuals across Europe with his writings. Best known for his statement “Cogito ergo sum” (I think, therefore I am), he started the school of rationalism, which broke with scholastic Aristotelianism. Firstly, Descartes proposed mind–body dualism, arguing that matter (the body) and intelligence (the mind) are two independent substances (metaphysical dualism); secondly, he rejected the scholastic causal model of explaining natural phenomena and replaced it with science-based observation and experiment. The philosopher spent a great part of his life in conflict with the scholastic approach (historically part of the religious order and its adherents), which still dominated thought in the early 17th century.

Communication & Language

The study of communication is therefore an enormous undertaking that draws on a wide range of disciplines, such as psychology, social psychology, sociology, linguistics, socio-linguistics, philosophy and literary criticism. Social psychologists have tended to distinguish between the study of language and the study of non-verbal communication [scholars agree both are vital to the study of communication (Ambady and Weisbuch, 2010; Holtgraves, 2010; Semin, 2007)], with an additional focus on conversation and the nature of discourse. However, the scientific revolution has turned our era into one hugely influenced by computer-mediated communication, which is quickly becoming a dominant channel of communication for many (Birchmeier, Dietz-Uhler and Stasser, 2011; Hollingshead, 2001).

Communication in all its varieties is the essence of social interaction: when we interact we communicate. Information is constantly being transmitted about what we sense, think and feel – even about “who we are” – and some of our “messages” are unintentional [instinctive]. Human communication comprises words, facial expressions, signs, gestures and touch, whether face-to-face or by phone, writing, texting, email or video. The social factors of communication are inescapable:

  • It involves our relationship with others
  • It is built upon a shared understanding of meaning
  • It is how people influence each other

Spoken languages are based on rule-governed structuring of meaningless sounds (phonemes) into basic units of meaning (morphemes), which are further structured by morphological rules into words and by syntactic rules into sentences. The meanings of words, sentences and entire utterances are determined by semantic rules; together, these rules constitute “grammar”. Language has remained an incredibly and endlessly powerful medium of communication because of the limitless number of meaningful utterances it can generate through shared knowledge of morphological, syntactic and semantic rules. Meaning can be communicated by language at a number of levels, ranging from a simple utterance [a sound made by one person to another] to a locution [words placed in sequence, e.g. ‘It’s cold in this room’], to an illocution [the locution and the context in which it is made: ‘It’s cold in this room’ may be a statement, a criticism of the institution for not providing adequate heating, a request to close the window, or a plea to move to another room (Austin, 1962; Hall, 2000)].
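The generativity described above – finite elements plus finite rules yielding many utterances – can be illustrated with a toy grammar. The vocabulary and the single syntactic rule are invented assumptions for illustration, not a model of any real language.

```python
# Toy sketch of rule-governed structuring: a finite vocabulary and one
# syntactic rule (Sentence -> Noun "is" Adjective) generate multiple
# well-formed sentences.

import itertools

nouns = ["the room", "the window"]        # finite set of word-level units
adjectives = ["cold", "open"]

def sentences():
    """All sentences licensed by the one syntactic rule."""
    return [f"{n} is {a}" for n, a in itertools.product(nouns, adjectives)]

print(len(sentences()))  # 4 sentences from 4 words and one rule
```

Adding a single word to either list multiplies the output, which is the combinatorial point the text makes about limitless meaningful utterances from finite sets of elements.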


Linguistic mastery therefore involves dexterity at many levels of cultural understanding, and is likely to differ from one individual to another depending on personality, IQ, education and cultural adaptability. It allows one to navigate the appropriate cultural context through language while knowing the appropriateness of one’s choice of words in terms of when, where, how and to whom to say it. Mastering these opens the door to sociolinguistics (Fishman, 1972; see also Forgas, 1985) and to the study of discourse as the basic unit of analysis (Edwards and Potter, 1992; McKinlay and McVittie, 2008; Potter and Wetherell, 1987). The philosopher John Searle (1979) identified five sorts of meaning that humans can intentionally use language to communicate; they can use language:

  • To say how something is
  • To get someone to do something
  • To express feelings and attitudes
  • To make a commitment
  • To accomplish something directly

Language is a uniquely human form of communication: as observed in the natural world, no other mammal has such an elaborate form of communication in its repertoire of survival skills. Young apes have been taught to combine basic signs in order to communicate meaningfully (Gardner and Gardner, 1971; Patterson, 1978); however, not even the most precocious ape can match the complexity of the hierarchical language structure used by a normal 3-year-old child (Limber, 1977).


Language has been called a human instinct because it is so readily and universally learned by infants. At 10 months of age, little is said, but 30-month-old infants speak in complete sentences and use over 500 words (Golinkoff & Hirsh-Pasek, 2006). Moreover, over this 20-month period, the plastic infant brain reorganises itself to learn the language of its environment. At 10 months, infants can distinguish the sounds of all languages, but by 30 months they can readily discriminate only those sounds to which they have been exposed (Kraus and Banai, 2007). Once the ability to discriminate particular speech sounds is lost, it is very hard to regain, which is one of the reasons why most adults have difficulty learning a new language without an accent.


[Figure: processes involved in the brain when speaking a heard word. Damage to areas of the primary auditory cortex in the left temporal lobe induces language recognition problems, while damage to the same areas on the right produces deficits in processing more complex and delicate sounds (e.g. music and vocal performances). Hence, in neuroscience, although it is not always the case, it can be generalised with fair confidence that the left hemisphere is concerned with speech and the right with complex frequency patterns.]

Most researchers studying the evolution of sophisticated human language turned first to comparative studies of vocal communication between human beings and other primates [e.g. apes and monkeys]. For example, vervet monkeys do not use alarm calls unless other similar monkeys are within the vicinity, and the calls are more likely to be made if the surrounding monkeys are relatives (Cheney and Seyfarth, 2005). Furthermore, chimpanzees vary the screams they produce during aggressive encounters depending on the severity of the encounter, their role in it, and which other chimpanzees can hear them (Slocombe and Zuberbuhler, 2005).

A fairly consistent pattern has emerged in the study of non-human vocal communication: there is a substantial difference between vocal production and auditory comprehension. Even the most vocal non-human primates can produce relatively few calls, yet they are capable of interpreting a wide range of other sonic patterns in their environment. This suggests that non-human primates’ ability to produce vocal language is limited not by an inability to interpret sounds, but by an inability to exert fine motor control over their voices – only humans have this distinct ability. It also suggests that human language likely evolved from a competence in comprehension already present in our primate ancestors.


The species-specificity of language has led some linguistic theorists to assume that an innate component of language must be unique to humans. Notably, Noam Chomsky (1957) argued that the most basic universal rules of grammar are innate [a “language acquisition device”] and are activated through social interaction, which enables the “code of language” to be cracked. However, other theorists argue for a different proposal, believing that the basic rules of language may not be innate, as they can be learnt from prelinguistic parent–child interaction (Lock, 1978, 1980); furthermore, the meanings of utterances are so dependent on social context that they seem unlikely to be innate (Bloom, 1970; Rommetveit, 1974; see Durkin, 1995).

Motor Theory of Speech Perception

The motor theory of speech perception proposes that the perception of speech depends on the words activating the same neural circuits in the motor system that would be activated if the listener said the words (see Scott, McGettigan, and Eisner, 2009). Support for this theory has come from evidence that simply thinking about performing a particular action often activates brain areas similar to those activated by performing the action itself, and from the discovery of mirror neurons – motor cortex neurons that fire when particular responses are either observed or performed (Fogassi and Ferrari, 2007).


Broca’s area: Speech production & Language processing // Wernicke’s area: Speech Comprehension

This makes sense given the simple observation that Broca’s area [a speech area] is part of the left premotor cortex [a motor area]. Since the main thesis of the motor theory of speech perception is that the motor cortex is essential in language comprehension (Andres, Olivier, and Badets, 2008; Hagoort and Levelt, 2009; Sahin et al., 2009), supporting evidence comes from the many functional brain-imaging studies that have revealed activity in the primary or secondary motor cortex during language tests that do not involve language expression at all (i.e., speaking or writing). This may also suggest that fine linguistic skills are linked to fine motor skills. Scott, McGettigan, and Eisner (2009) compiled and evaluated recordings of activity in the motor cortex during speech perception and concluded that the motor cortex is active during conversation.

Gestural Language

Since a high degree of motor control over the vocal apparatus is present only in humans, communication in non-human primates is mainly gestural rather than vocal.



This hypothesis was tested by Pollick and de Waal (2007), who compared the gestures and vocalisations of chimpanzees. They found a highly nuanced vocabulary of hand gestures used in numerous situations and in a variety of combinations. In short, chimpanzees’ gestures were much more comparable to human language than were their vocalisations. Could this suggest that primate gestures were a critical stage in the evolution of human language (Corballis, 2003)?

On this same note, we may focus on the already mentioned “Theory of Linguistic Relativity” (Whorf, 1956) which states that our internalised cognitions as a human being, i.e. perception, memory, inference, deduction, vary with the structural characteristics, i.e. lexicon, morphology and syntax of the language we speak [cultural influence shapes our thoughts].


In support of Sapir and Whorf’s position, Diederik Stapel and Gun Semin (2007) refer poetically to the “magic spell of language” and report research showing how the different categories in the language we speak guide our observations in particular ways: we tend to use the categories of our language to attend to different aspects of reality. The strong version of the Sapir-Whorf hypothesis is that language entirely determines thought, so that those who speak different languages actually perceive the world in entirely different ways and effectively live in entirely different cognitive-perceptual universes. However extreme this suggestion may seem, a good argument against it is to consider whether the fact that English distinguishes between living and non-living things means that the Hopi of North America, whose language does not, cannot distinguish between a bee and an aeroplane. Japanese personal pronouns differentiate interpersonal relationships more subtly than English personal pronouns do; does this mean that English speakers cannot tell the difference between relationships? [What about Chong, Khan, Balaraggoo, Tyrone, Vodkadinov, Jacob, Obatemba M’benge and Boringski – where would you attribute their skills in the former question?]

The strong form of the Sapir-Whorf hypothesis is too extreme to be plausible, and a weak form seems to better accord with the facts (Hoffman, Lau and Johnson, 1986): language does not determine thought but allows for the communication of those aspects of the physical or social environment deemed important for the community. Therefore, in a situation where expertise in snow is deemed essential, one would likely develop a rich vocabulary around the subject. Similarly, should one feel the need to have a connoisseur’s discussion about fine wines, the language of the wine masters would be a vital requisite for interacting with flawless granularity in the expression of finer experiences.


Although language may not determine thought, its limitations across cultures may entrap those ‘cultured’ into a specific one, owing to its limited range of available words: logically, if there are no words to express a particular thought or experience, we would be unlikely to be able to think about it. Perhaps reflecting this idea, and the spirit of enhancing freedom of expression and human emancipation, a huge borrowing of words across languages has been noted over the years: for example, English has borrowed Zeitgeist from German, raison d’être from French, aficionado from Spanish and verandah from Hindi. The concept is powerfully illustrated in George Orwell’s novel 1984, in which a totalitarian regime based on Stalin’s Soviet Union imposes its own highly restricted language called “Newspeak”, designed specifically to prohibit people from even thinking non-orthodox or heretical thoughts, because the relevant words do not exist.

Further evidence of the impact of language on thought restriction comes from research led by Andrea Carnaghi and her colleagues (Carnaghi, Maas, Gresta, Bianchi, Cardinu and Arcuri, 2008). In German, Italian and some other Indo-European languages [such as English], nouns and adjectives can have different effects on how we perceive people. Compare ‘Mark is gay’ [using an adjective] with ‘Mark is a gay’ [using a noun]. When describing an individual, the use of an adjective suggests an attribute of that individual, whereas a noun implies a social group and membership of a ‘gay’ group. The latter description is more likely to invoke further stereotypic/prejudicial inferences and an associated process of essentialism (e.g. Haslam, Rothschild and Ernst, 1998) that maps attributes onto invariant, often bio-genetic properties of the particular social category/group.

Paralanguage and speech style

The impact of language on communication depends not only on what is said but also on how it is said. Paralanguage refers to all the non-linguistic accompaniments of speech – volume, stress, pitch, speed, tone of voice, pauses, throat clearing, grunts and sighs (Knapp, 1978; Trager, 1958). Timing, pitch and loudness (the prosodic features of language; e.g. Argyle, 1975) play major roles in communication as they can completely change the meaning of utterances: a rising intonation at the end of a statement turns it into a question or communicates uncertainty, doubt or a need for approval (Lakoff, 1973). Underlying emotions are often revealed in the prosodic features of speech: low pitch could signify sadness or boredom, while high pitch could communicate anger, fear or surprise (Frick, 1985). Fast speech often reflects power and control (Ng and Bradac, 1993).


To gain further understanding of the feelings elicited by different paralinguistic features, Klaus Scherer (1974) used a synthesizer to vary short neutral utterances and had individuals identify the emotions being communicated. Fig. A shows how different paralinguistic features communicate information about the speaker’s feelings.

In addition to paralinguistic cues, communication can also involve different accents, different language varieties and different languages altogether. These are important speech style differences that have been well researched in social psychology (Giles and Coupland, 1991). In social psychology, the focus is mainly on how something is said rather than on what is said – on speech style rather than speech content – whereas discourse analytic approaches also place importance on what is said.

Table D2

Fig. A | Emotions displayed through paralinguistic cues

Social Markers in Speech

Most individuals have a repertoire of speech styles that is automatically or deliberately tailored depending on the context of the communicative event. For example, one would tend to speak slowly, use short words and simple grammatical constructions when dealing with foreigners and children (Clyne, 1981; Elliot, 1981). Longer, more complex constructions along with formalised language varieties or standard accents tend to be used in more formal contexts such as an interview or a speech.

In 1979, Penelope Brown and Colin Fraser categorised different components of a communicative situation that may influence speech style and distinguished between two broad features:

  • The scene (e.g. its purpose, time of day, whether there are bystanders or an audience, etc)
  • The participants (e.g. their personality, ethnicity, chemistry between them)

It is important to note, however, that individual differences play a major role in this objective classification of situations, as different individuals may not define the same “objective” situation similarly. For example, what is deemed formal by some may simply be commonplace to others; this subjective perception of objective situations affects one’s chosen speech style.


One striking point raised by Adrian Furnham (1986) is that not only does one adjust speech style to subjectively perceived situational demands, but one also seeks out situations that are appropriate to a preferred speech style. Contextual variations in speech style contain information about who is speaking to whom, in what context and on what topic: speech contains social markers (Scherer and Giles, 1979). The most researched markers in social psychology are those of group membership, such as social class, ethnicity, education, age and sex. Social markers are in most cases clearly identifiable and act as reliable clues to group membership. For example, most of the English can easily identify Americans, Australians and South Africans from their speech style alone, and (see Watson, 2009) are probably even better at identifying people who grew up in Exeter, Birmingham, Liverpool, Leeds and Essex! Speech style generally elicits a listener’s attitude towards the group that the speaker “represents” [with the exception of some non-mainstream individuals – as in any other group]. A mainstream media example is the character Eliza Doolittle’s tremendous efforts in the film My Fair Lady to acquire a standard English accent in order to hide her Cockney origins. This idea underlies the matched-guise technique, one of the most widely used research paradigms in the social psychology of language, devised to investigate language attitudes based on speech alone (Lambert, Hodgson, Gardner and Fillenbaum, 1960). The method involves individuals rating short speech extracts that are similar in paralinguistic, prosodic and content respects and differ ONLY in speech style (accent, dialect, language). In the original study, all the speech extracts were spoken by the very same individual, who was fluently bilingual.
The speaker is rated on a number of evaluative dimensions, which fall into two clusters reflecting competence and warmth, the two most basic dimensions of social perception (Fiske, Cuddy and Glick, 2007):

  • Status variables (e.g. intelligent, competent, powerful);
  • Solidarity variables (e.g. close, friendly, warm).

The matched-guise technique has been used extensively in a wide range of cultural contexts to investigate how speakers of standard and non-standard language varieties are evaluated. The standard language variety is the one associated with high economic status, power and media usage – in England, for example, it is what has been called received pronunciation (RP) English. Non-standard varieties include regional accents (e.g. Yorkshire, Essex), non-standard urban accents (e.g. Birmingham, North/South London) and minority ethnic languages (e.g. Afrikaans, Urdu, Arabic, Hindi, Mandarin and other minority languages in Britain). Research reveals that standard language varieties are more favourably evaluated on status and competence dimensions (such as intelligence, confidence and ambition) than non-standard varieties (e.g. Giles and Powesland, 1975).
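The arithmetic of collapsing matched-guise ratings into the two clusters can be sketched in a few lines. This is a hedged illustration only: the item names, the 1–7 rating scale and the function are assumptions of the sketch, not the instruments used by Lambert and colleagues.

```python
# Hypothetical rating items grouped into the two clusters described above.
STATUS_ITEMS = {"intelligent", "competent", "powerful"}
SOLIDARITY_ITEMS = {"close", "friendly", "warm"}

def cluster_scores(ratings):
    """Average a listener's item ratings (1-7 scale) into status and solidarity scores."""
    status = [v for k, v in ratings.items() if k in STATUS_ITEMS]
    solidarity = [v for k, v in ratings.items() if k in SOLIDARITY_ITEMS]
    return {
        "status": sum(status) / len(status),
        "solidarity": sum(solidarity) / len(solidarity),
    }

# Invented ratings showing the pattern reported for a standard (RP) guise:
# favourable on status, less favourable on solidarity.
rp_guise = cluster_scores(
    {"intelligent": 6, "competent": 6, "powerful": 5,
     "close": 3, "friendly": 4, "warm": 3}
)
```

On these invented numbers the status score exceeds the solidarity score, the direction of the effect reported for standard varieties.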


There is also a tendency for non-standard variety speakers to be more favourably evaluated on solidarity dimensions. For example, Cindy Gallois and her colleagues found that both white Australians and Australian Aborigines upgraded Aboriginal-accented English on solidarity dimensions (Gallois, Callan and Johnstone, 1984). Hogg, Joyce and Abrams (1984) found that a similar pattern occurs in other linguistic cultures: for example, Swiss Germans upgraded speakers of non-standard Swiss German relative to speakers of High German on solidarity dimensions.

Language, Identity & Ethnicity

Matched-guise and other studies in linguistics have revealed how our speech style [accent, language, grammatical proficiency and voice] can affect how others evaluate us socially. This is unlikely to be because some speech styles are aesthetically more pleasant than others; it is more likely because speech styles are associated with particular social groups that are consensually evaluated more or less positively in society. Unless the speech is being acted, a person speaking naturally in the style of a lower-status group may be evaluated similarly to that group and its image [in terms of way of life] in society [in most mainstream cases, excluding expert assessors of individuality]. This suggests that processes associated with intergroup relations and group memberships may affect language and social behaviour among the mainstream crowd.

A Scholar at His Desk

Howard Giles, Richard Bourhis and their colleagues employed and extended principles from social identity theory to develop an intergroup perspective on the social psychology of language (Giles, Bourhis and Taylor, 1977; Giles and Johnson, 1981, 1987). Since the original analysis focussed mainly on ethnic groups that differ in speech style, the theory is called ethnolinguistic identity theory; however, the wider intergroup analysis of language and communication casts a much wider net to embrace all manner of intergroup contexts (e.g. Giles, 2012; Giles, Reid and Harwood, 2010).

Speech Style and Ethnicity

Although it is well known that ethnic groups differ in appearance, dress, cultural practices and religious beliefs, language or speech style is often one of the most distinct and clear markers of ethnic identity – social identity as a member of an ethnolinguistic group (an ethnic group defined by language or speech style). For instance, the Welsh and the English in the UK are most distinctive in terms of accent and language. Speech style, then, is an important and often central stereotypical or normative property of group identity: one of the most powerful ways to display your Welshness is to speak English with a marked Welsh accent – or, even better, simply to speak Welsh.


Language or speech style cues ethnolinguistic identity. Therefore, whether people accentuate or de-emphasise their ethnic language is generally influenced by the extent to which they see their ethnic identity as a source of self-respect or pride. This perception will in turn be influenced by the real nature of the power and status relations between ethnic groups in society. Research in England on regional accents rather than ethnic groups illustrates this (e.g. Watson, 2009): some accents are strengthening and spreading and others retreating or fading, but overall, despite mobility, mass culture and the small size of England, the accent landscape is surprisingly unchanged. Northern accents in particular, such as Scouse and Geordie, have endured owing to low immigration and the marked regional pride of their respective communities. Brummie is slowly spreading into the Welsh Marches owing to population spread, while Cockney-influenced Estuary English, popularised by its portrayal in mainstream middle-class films, has luckily not influenced East Anglia and South East England, which have kept their grammar and granularity.

Almost all major societies have a multicultural component, yet nearly all contain a single dominant high-status group whose language is the lingua franca of the nation, alongside ethnic groups whose languages are subordinate. Some of the greatest varieties of large ethnic groups occur in major immigrant economies such as the United States, Canada and Australia. Unsurprisingly, most of the research on ethnicity and language comes from these countries, in particular Australia and Canada. In Australia, for example, English is the lingua franca, but there are also large ethnic Chinese, Italian, Greek and Vietnamese communities – language research has been carried out on all these communities (e.g. Gallois, Barker, Jones and Callan, 1992; Gallois and Callan, 1986; Giles, Rosenthal and Young, 1985; Hogg, D’Agata and Abrams, 1989; McNamara, 1987; Smolicz, 1983).

Speech Accommodation

Social categories such as ethnic groups may develop and maintain, or lose, their distinctive languages or speech styles as a consequence of intergroup relations. However, categories do not speak; people do, and generally with one another, usually in face-to-face interaction. As mentioned earlier, when people interact conversationally, they tend to adapt their speech style to the context – the situation, and in particular the listener. This observation is the foundation of speech accommodation theory (Giles, 1984; Giles, Taylor and Bourhis, 1973), which invokes specific motivations to explain the ways in which people accommodate their speech style to those present. The motivation for such adaptation may be a desire to help the listener understand what is being said, or to promote specific impressions of oneself.


Radcliffe Square at Night, Oxford [Image: Y. Song]

Speech Convergence and Divergence 

Since most conversations involve individuals of potentially unequal social status, speech accommodation theory describes the type of accommodation that might occur as a function of the social orientation the speakers have towards one another (see Fig. B). Where a simple interpersonal orientation exists (e.g. between two friends), bilateral speech convergence occurs: higher-status speakers shift their accent or speech style ‘downwards’ towards that of the lower-status speakers, who in turn shift ‘upwards’. In this scenario, speech convergence satisfies a need for approval or liking. The act of convergence increases interpersonal speech style similarity, and this enhances interpersonal approval and liking (Bourhis, Giles and Lambert, 1975), particularly if the convergence behaviour is clearly intentional (Simard, Taylor and Giles, 1976). The process rests on the well-supported idea that similarity typically leads to attraction (e.g. Byrne, 1971).

Table D1

Fig. B | Speech accommodation as a function of status, social orientation and subjective vitality

Consider instead a scenario where an intergroup orientation exists. If the lower-status group has low subjective vitality coupled with a belief in social mobility (i.e. one can pass, linguistically, into the higher-status group), there is unilateral upward convergence on the part of the lower-status speaker and unilateral speech divergence on the part of the higher-status speaker. In intergroup contexts, divergence achieves psycholinguistic distinctiveness: it differentiates the speaker’s ingroup on linguistic grounds from the outgroup. Where an intergroup orientation exists and the lower-status group has high subjective vitality coupled with a belief in social change (i.e. one cannot pass into the higher-status group), bilateral divergence occurs: both speakers pursue psycholinguistic distinctiveness.
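The outcomes just described, summarised in Fig. B, amount to a small decision rule over orientation, vitality and belief. The function name and the string labels below are my own illustrative shorthand, not terminology from Giles and colleagues:

```python
def accommodation(orientation, vitality=None, belief=None):
    """Return the predicted speech shifts as a pair:
    (lower-status speaker's shift, higher-status speaker's shift)."""
    if orientation == "interpersonal":
        # Bilateral convergence: each party shifts towards the other.
        return ("upward convergence", "downward convergence")
    if orientation == "intergroup":
        if vitality == "low" and belief == "social mobility":
            # Unilateral on each side: the lower-status speaker converges
            # upwards; the higher-status speaker diverges.
            return ("upward convergence", "divergence")
        if vitality == "high" and belief == "social change":
            # Bilateral divergence: both pursue psycholinguistic distinctiveness.
            return ("divergence", "divergence")
    raise ValueError("combination not covered by this sketch of the theory")
```

For example, `accommodation("intergroup", "high", "social change")` yields bilateral divergence, the pattern described for a high-vitality group that believes social change, not individual passing, is possible.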

Speech accommodation theory has been well supported empirically (Gallois, Ogay and Giles, 2005; Giles and Coupland, 1991). Bourhis and Giles found that Welsh adults accentuated their Welsh accent in the presence of RP English speakers (i.e. speakers of the standard non-regional variety of English). Bourhis, Giles, Leyens and Tajfel (1979) obtained a similar finding in Belgium, with Flemish speakers in the presence of French speakers. In both cases a language revival was under way at the time, so an intergroup orientation with high vitality was salient. In a low-vitality social mobility context, Hogg (1985) found that female students in Britain shifted their speech style ‘upwards’ towards that of their male partners. Accommodation in intergroup contexts reflects an intergroup or social identity mechanism in which speech style is dynamically governed by the speakers’ motivation to adopt ingroup or outgroup speech patterns. These motivations are in turn formed by perceptions of:

  • The relative status and prestige of the speech varieties and their associated groups; and
  • The vitality of their own ethnolinguistic group.

Stereotyped Speech 

One important factor that may govern changes in speech style is conformity to stereotypical perceptions of the appropriate speech norm. Thakerar, Giles and Cheshire (1982) distinguished between objective and subjective accommodation: people converge on, or diverge from, what they perceive to be the relevant speech style. Objective accommodation may reflect this, but in some circumstances it may not; for instance, subjective convergence may resemble objective divergence if the speech style stereotype differs from the actual speech behaviour of the other speaker.

Even the “Queen’s English” is susceptible to some accommodation towards a more popular stereotype (Harrington, 2006). An analysis of the phonetics of Queen Elizabeth II’s speech in her Christmas broadcasts since 1952 shows a gradual change in the Royal vowels, moving from ‘upper-class’ RP to a more ‘standard’ and less aristocratic RP. This may reflect a softening of the once strong demarcation between the social classes – social change may sometimes be a catalyst for speech change. Where once she might have said “thet men in the bleck het”, she would now say “that man in the black hat”.


Speech accommodation theory has been extended in recognition of the role of non-verbal behaviour in communication – now called communication accommodation theory (Gallois, Ogay and Giles, 2005; Giles, Mulac, Bradac and Johnson, 1987; Giles and Noels, 2002), which acknowledges that convergence and divergence can occur non-verbally as well as verbally. Anthony Mulac and his colleagues found that women in mixed-sex dyads converged towards the amount of eye contact (now called ‘gaze’) made by their partner (Mulac, Studley, Wiemann and Bradac, 1987). While accommodation is often synchronised in verbal and non-verbal channels, this is not necessarily the case. Frances Bilous and Robert Kraus (1988) found that women in mixed-sex dyads converged towards men on some dimensions (e.g. total words uttered and interruptions) but diverged on others (e.g. laughter).

Bilingualism and second-language acquisition 

Due to excessive and culturally destructive waves of migration, promoted through the exploitation of diplomacy by some corrupt mainstream media and politicians, most major countries are now bilingual or multilingual, meaning that people need to speak two or more languages with a fair degree of proficiency to communicate effectively and achieve their goals in different contexts. These countries contain a variety of ethnolinguistic groups with a single dominant group whose language is the lingua franca; very few countries (e.g. Portugal and Japan) remain effectively monolingual – which may be reflected in rising cultural conflict and a lack of social coherence.


The acquisition of a second language is rarely a matter of acquiring basic classroom proficiency, as one might in order to ‘get by’ on holiday; rather, it is the wholesale acquisition of a language embedded in a highly cultural context, with varying degrees of granularity, up to the level of flawless and effective communication (Gardner, 1979). Second-language acquisition at this level requires native-like mastery (being able to speak like a native speaker), and this hinges more on the motivations of the second-language learner than on linguistic aptitude or pedagogical factors. Failure to acquire native-like mastery can undermine self-confidence and cause physical and social isolation, leading to material hardship and psychological suffering. For example, Noels, Pon and Clément (1996) found low self-esteem and marked symptoms of stress among Chinese immigrants in Canada with poor English skills. Building on earlier models (Gardner, 1979; Clément, 1980), Giles and Byrne (1982) proposed an intergroup model of second-language acquisition, in which five socio-psychological dimensions influence a subordinate group member’s motivational goals in learning the language of a dominant group (see Fig. C):

  • Strength of ethnolinguistic identification
  • Number of alternative identities available
  • Number of high-status alternative identities available
  • Subjective vitality perceptions
  • Social beliefs regarding whether it is or is not possible to pass linguistically into the dominant group

Low identification with one’s ethnic ingroup, low subjective vitality and a belief that one can ‘pass’ linguistically, coupled with a large number of other potential identities, many of them high-status, are conditions that motivate someone to acquire native-like mastery of the second language. Proficiency in the second language is seen as economically and culturally useful; it is considered additive to one’s identity. Realisation of this motivation is facilitated or inhibited by the extent to which we are made to feel confident or anxious about using the second language in specific contexts. The converse set of socio-psychological conditions motivates people to acquire only classroom proficiency: through fear of assimilation, the second language is considered subtractive in that it may attract ingroup hostility and accusations of ethnic betrayal. Early education, individual intelligence, personality and aptitude may also affect proficiency.
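As a rough illustration, the two motivational outcomes described above can be reduced to a predicate over the five dimensions. Treating each dimension as a binary value and the function name itself are assumptions of this sketch, not part of Giles and Byrne’s (1982) model:

```python
def motivational_goal(identification, vitality, can_pass,
                      alt_identities, high_status_alts):
    """Predict a subordinate-group learner's goal from the five dimensions,
    each simplified here to a binary label ("low"/"high", "many"/"few")."""
    # The 'additive' configuration: weak ingroup identification, low subjective
    # vitality, a belief one can pass linguistically, and many (high-status)
    # alternative identities.
    additive = (identification == "low" and vitality == "low" and can_pass
                and alt_identities == "many" and high_status_alts == "many")
    return "native-like mastery" if additive else "classroom proficiency"
```

Under the converse configuration (strong identification, high vitality, no belief in passing), the function returns only classroom proficiency, mirroring the subtractive case in the text.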

Table D3

Fig. C | Intergroup model of second-language acquisition | Note: Learning a second language is influenced by motivational goals formed by the wider context of social identity and intergroup relations. [Source: Giles and Byrne (1982)]

This analysis of second-language acquisition grounds language firmly in its cultural context and thus relates language acquisition to broader acculturation processes. John Berry and his colleagues distinguished between integration (individuals maintain ethnic culture and relate to dominant culture), assimilation (individuals give up their ethnic culture and wholeheartedly embrace the dominant culture), separation (individuals maintain their ethnic culture and isolate themselves from the dominant culture) and marginalisation (individuals give up their ethnic culture and fail to relate properly to the dominant culture) (Berry, Trimble and Olmedo, 1986).
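Berry’s four strategies fall out of two yes/no questions: does the individual maintain the ethnic culture, and does the individual relate to the dominant culture? A minimal lookup-table sketch (the function name is my own; the labels follow Berry, Trimble and Olmedo, 1986):

```python
def acculturation(maintains_ethnic_culture, relates_to_dominant_culture):
    """Map Berry's two yes/no dimensions onto the four acculturation strategies."""
    return {
        (True, True): "integration",
        (False, True): "assimilation",
        (True, False): "separation",
        (False, False): "marginalisation",
    }[(maintains_ethnic_culture, relates_to_dominant_culture)]
```

The 2x2 structure makes the taxonomy exhaustive: every combination of the two answers yields exactly one strategy.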


Human brain specimen being studied in Professor Ron Kalil’s Medical School research lab [Image: Jeff Miller, © UW-Madison News & Public Affairs]

While the only forms of adjustment that completely benefit a system remain “native citizens” [in terms of culturally fitting individuals from the lower to the upper scale of society] and assimilation [the small number of culturally and educationally proficient individuals who manage it], the remainder could simply be described as a burden to most systems, especially children of economic migration [who in some cases are being born in large numbers owing to the higher-fertility culture of their parents’ origins, and who seem to want native treatment while being unable to navigate the culture with native-like proficiency – illogical demands paired with illogical cultural belonging]. This ‘nomadic’ generation of children, whose parents initially moved from land to land for nothing but the rush for cash from a socio-economic system with better financial prospects, may unfortunately [with the exception of some mainstream college-educated far-left human rights activists] be seen as fitting a parasitic ‘metaphoric example’, while to others [such as left-wing economic policy makers] this is what they cheaply describe as “modernism” and “cultural enrichment”.

In psychological reality, from a social psychologist’s perspective, this may simply be described as a mass phenomenon that society is not used to dealing with and has not monitored effectively since the 1950s, to the point where confusion and sheer desperation set in for both native citizens and authorities when seeking a “rational” solution – one that seems constantly undone by outdated, irrational and illogical human rights laws, forever unfavourable to major Western societies while defending cheap unskilled migration from culturally and economically disastrous systems [e.g. the third world, the Middle East and some parts of Southern and Eastern Asia].


Thus, the consequences for second-language learning can be dramatic and life-changing. Most major economies today are fragmented by linguistic barriers and cultural differences; furthermore, since language is refined through interaction, the lack of chemistry and coherence may well be a major factor in the drop in cultural and educational standards – not to mention a generation that does not seem to represent any values [cultural or philosophical], but merely regional classroom proficiency, with barely any granularity or refinement in the linguistic and cultural context of a heritage whose traditions developed over centuries of civilisation.


Majority group members generally lack the motivation to acquire native-like mastery of another language. According to John Edwards (1994), it is precisely the international prestige, utility and widespread use of English that makes native English speakers such poor language students: they simply lack the motivation to become proficient. Itesh Sachdev and Audrey Wright (1996) pursued this point and found that English children were more motivated to learn languages from the European continent (e.g. French, German, Italian) than from the Asian continent (e.g. Mandarin, Hindi, Russian, Urdu, Tamil, Arabic, etc.), even though a fair number of children in the sample were exposed to more Asian and African languages and cultures [after years of mediocre policies linked to cheap democratic governments and leftist agendas bent on promoting alien invasions – fragmenting societies and destructively shifting geographical compositions] than to those of Europe. A possible reason is that English children perceive more prestige and desirability in mastering additional languages and cultures such as French, German and Italian than in far-flung, incompatible foreign ones [e.g. the African third world, the Middle East, Asia, etc.].

Communicating without words

Speech rarely happens in complete isolation from non-verbal cues. Even on a phone, individuals tend automatically to use a variety of gestures [body language] that cannot be ‘seen’ by the recipient at the other end of the line. Similarly, phone and computer-mediated communication (CMC) conversations can be difficult precisely because many non-verbal cues are not accessible [e.g. users may interpret some messages as ‘cold’, ‘short’ or ‘rude’ when a participant might simply not be proficient at expressing themselves on a keyboard]. However, non-verbal channels do not always work in combination with speech to facilitate understanding. In some cases the non-verbal message starkly contradicts the verbal message [e.g. threats, sarcasm and other negative messages accompanied by a smile; Bugental, Love and Gianetto, 1971; Noller, 1984].


Agony, Torture, and Fright | Charles Darwin, 1868

Human beings can produce about 20,000 different facial expressions and about 1,000 different cues based on paralanguage, as well as about 700,000 physical gestures, facial expressions and movements (see Birdwhistell, 1970; Hewes, 1957; Pei, 1965). Even the briefest interaction may involve the fleeting and simultaneous use of a huge number of such devices in combination, making it difficult even to code behaviour, let alone analyse the causes and consequences of particular non-verbal communications. Their importance is now acknowledged in social psychology (Ambady and Weisbuch, 2010; Burgoon, Buller and Woodall, 1989; DePaulo and Friedman, 1998), though research in this area remains a major challenge. Non-verbal behaviour can be used for a variety of purposes; one may use it to:

  • Glean information about the feelings and intentions of others (e.g. non-verbal cues are often reliable indicators of whether someone likes you, is emotionally suffering, etc.);
  • Regulate interactions (e.g. non-verbal cues can signal the approaching end of an utterance, or that someone else wishes to speak);
  • Express intimacy (e.g. touching and mutual eye contact);
  • Establish dominance or control (e.g. non-verbal threats);
  • Facilitate goal attainment (e.g. pointing).

These functions are found in most aspects of non-verbal behaviour, such as gaze, facial expressions, body language, touch and interpersonal distance. Non-verbal communication has a large impact, yet it goes largely ‘unnoticed’ – perhaps because we acquire it without awareness, we tend not to be conscious of using it. Most individuals acquire non-verbal skills without any formal training, yet manage to master a rich repertoire of non-verbal behaviour very early in life – suggesting that there are huge individual differences in their skill and use. Social norms can also strongly influence our use of non-verbal language: even if one is delighted at the demise of an arrogant narcissist or foe, one would be unlikely to smile at their funeral – Schadenfreude is not a noble emotion to express [at least in most situations].


Individual and group differences also influence, or are associated with, non-verbal cues. Robert Rosenthal and his colleagues (Rosenthal, Hall, DiMatteo, Rogers and Archer, 1979) devised the Profile of Nonverbal Sensitivity (PONS), a test to chart some of these differences. All things being equal, non-verbal competence improves with age, is more advanced among successful people and is compromised among individuals with a range of psychopathologies (e.g. psychosis, autism).

Gender Differences 

Reviews conclude that women are generally better than men at decoding both visual cues and auditory cues, such as voice tone and pitch (E. T. Hall, 1979; J. A. Hall, 1978, 1984). The explanation seems to be social rather than evolutionary (Manstead, 1992), involving child-rearing strategies that encourage girls more than boys to be emotionally expressive and attentive. One major question is whether women’s greater competence is due to greater knowledge about non-verbal cues. According to Janelle Rosip and Judith Hall (2004), the answer seems to be ‘yes’ – women have a slight advantage, based on results from their Test of Nonverbal Cue Knowledge (TONCK). A meta-analysis by William Ickes and colleagues has shown that, when motivated to do so, women can become even more accurate: for example, when women think they are being evaluated for their empathy, or when gender-role expectations of empathy are brought to the fore (Ickes, Gesn and Graham, 2000).


Most individuals can improve their non-verbal skills (Matsumoto and Hwang, 2011), which can be useful for improving interpersonal communication, detecting deception, presenting a good impression and hiding our feelings [when required in some situations]. Practical books have been written on the subject, and courses on communication have always had an enduring appeal. Why not try yourself out on the TONCK?

Non-verbal behaviour differs among individuals, since most have different attachment styles and thus different relationships too. In intimate relationships, we would expect partners to enhance each other’s emotional security by accurately decoding each other’s individual non-verbal cues and responding appropriately (Schachner, Shaver and Mikulincer, 2005). Although there are data on non-verbal behaviour in parent-child interactions and how it relates to the development of attachment styles in children (Bugental, 2005), there is less research focusing on how adult attachment styles are reflected ‘non-verbally’ in intimate relationships.



The concept of self is not built overnight but through a gradual and deliberate process involving calculated, precise and minute adjustments to one’s inner thoughts, thereby changing, over time, one’s cognitive schemas, personality, identity and linguistic proficiency. It is a process hugely dependent on individual motivation, education, dedication, capability, IQ and cultural proficiency. Languages, moreover, are the essence of identity, as they lead to cultural belonging and thus to the cognitive schemas and inner thoughts that allow one to navigate efficiently within a particular cultural theme and be part of the societies related to those languages. Together, psychology, linguistic culture, personality and education are the core of individual conception – to sum it up for colleagues in innovation, science and psychology out there: “It is not what is in the head that counts, but the ability to turn it into a believable logical reality and a psychologically valid human concept/identity.”

Image: Arthur Hughes (1832 – 1915), Self-Portrait, 1851



    1. Abrams, D. and Hogg, M. A. (2001). Collective identity: Group membership and self-conception. In M. A. Hogg and R. S. Tindale (eds), Blackwell handbook of social psychology: Group processes (pp. 425-460). Oxford, UK: Blackwell.
    2. Ambady, N. and Weisbuch, M. (2010). Nonverbal behaviour. In S. T. Fiske, D. T. Gilbert, and G. Lindzey (eds), Handbook of social psychology (5th edn, Vol. 1, pp. 464-497). New York: Wiley.
    3. Andres, M., Olivier, E. and Badets, A. (2008). Actions, words, and numbers: A motor contribution to semantic processing? Current Directions in Psychological Science, 17, 313-317.
    4. Argyle, M. (1975). Bodily communication. London: Methuen.
    5. Ashmore, R. D. and Jussim, L. (1997). Towards a second century of the scientific analysis of self and identity. In R. Ashmore and L. Jussim (eds), Self and identity: Fundamental issues (pp. 3-19). New York: Oxford University Press.
    6. Austin, J. L. (1962). How to do things with words. Oxford, UK: Clarendon Press.
    7. Baumeister, R. F. (1987). How the self became a problem: A psychological review of historical research. Journal of Personality and Social Psychology, 52, 163-176.
    8. Baumeister, R. F. (1998). The self. In D. T. Gilbert, S. T. Fiske, and G. Lindzey (eds), Handbook of social psychology (4th edn, Vol. 1, pp. 680-740). New York: McGraw-Hill.
    9. Baumeister, R. F. and Sommer, K. L. (1997). What do men want? Gender differences and two spheres of belongingness: Comment on Cross and Madson. Psychological Bulletin, 122, 38-44.
    10. Berry, J. W., Trimble, J. E. and Olmedo, E. L. (1986). Assessment of acculturation. In W. J. Lonner and J. W. Berry (eds), Field methods in cross-cultural research (pp. 290-327). Beverly Hills, CA: SAGE.
    11. Bilous, F. R. and Krauss, R. M. (1988). Dominance and accommodation in the conversational behaviours of same- and mixed-gender dyads. Language and Communication, 8, 183-194
    12. Birchmeier, Z., Dietz-Uhler, B. and Stasser, G. (eds) (2011). Strategic uses of social technology: An interactionist perspective of social psychology. Cambridge, UK: Cambridge University Press.
    13. Birdwhistell, R. (1970). Kinesics and context: Essays on body movement communication. Philadelphia, PA: University of Pennsylvania Press.
    14. Bloom, L. (1970). Language development: Form and function in emerging grammars. Cambridge, MA: MIT Press.
    15. Bourhis, R. Y., Giles, H. and Lambert, W. E. (1975). Social consequences of accommodating one’s style of speech: A cross-national investigation. International Journal of the Sociology of Language, 6, 55-72.
    16. Bourhis, R. Y., Giles, H., Leyens, J. P. and Tajfel, H. (1979). Psycholinguistic distinctiveness: Language divergence in Belgium. In H. Giles and R. St Clair (eds), Language and social psychology (pp. 158-185). Oxford, UK: Blackwell.
    17. Breckler, S. J., Pratkanis, A. R. and McCann, C. D. (1991). The representation of self in multidimensional cognitive space. British Journal of Social Psychology, 30, 97-112.
    18. Brewer, M. B. (2001). The many faces of social identity: Implications for political psychology. Political Psychology, 22, 115-125.
    19. Brewer, M. B. and Gardner, W. (1996). Who is this ‘we’? Levels of collective identity and self representation. Journal of Personality and Social Psychology, 71, 83-93.
    20. Brewer, M. B. and Pierce, K. P. (2005). Social identity complexity and outgroup tolerance. Personality and Social Psychology Bulletin, 31, 428-437.
    21. Brown, P. and Fraser, C. (1979). Speech as a marker of situation. In K. R. Scherer and H. Giles (eds), Social markers in speech (pp. 33-108). Cambridge, UK: Cambridge University Press.
    22. Bugental, D. E., Love, L. R. and Gianetto, R. M. (1971). Perfidious feminine faces. Journal of Personality and Social Psychology, 17, 314-318.
    23. Burgoon, J. K., Buller, D. B. and Woodall, W. G. (1989). Nonverbal communication: The unspoken dialogue. New York: Harper and Row.
    24. Byrne, D. (1971). The attraction paradigm. New York: Academic Press.
    25. Cantor, N. and Kihlstrom, J. F. (1987). Personality and social intelligence. Englewood Cliffs, NJ: Prentice Hall.
    26. Carnaghi, A., Maass, A., Gresta, S., Bianchi, M., Cardinu, M. and Arcuri, L. (2008). Nomina sunt omina: On the inductive potential of nouns and adjectives in person perception. Journal of Personality and Social Psychology, 94, 839-859.
    27. Chen, S., Boucher, H. C. and Tapias, M. P. (2006). The relational self revealed: Integrative conceptualization and implications for interpersonal life. Psychological Bulletin, 132, 151-179.
    28. Cheney, D. L. and Seyfarth, R. M. (2005). Constraints and preadaptations in the earliest stages of language evolution. Linguistic Review, 22, 135-159.
    29. Chiu, C.-Y. and Hong, Y.-Y. (2007). Cultural processes: Basic principles. In A. W. Kruglanski and E. T. Higgins (eds), Social psychology: Handbook of basic principles (2nd edn, pp. 785-804). New York: Guilford Press.
    30. Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
    31. Clément, R. (1980). Ethnicity, contact and communication competence in a second language. In H. Giles, W. P. Robinson and P. M. Smith (eds), Language: Social psychological perspectives (pp. 147-154). Oxford, UK: Pergamon Press.
    32. Clyne, M. G. (1981). ‘Second generation’ foreigner talk in Australia. International Journal of the Sociology of Language, 28, 69-80.
    33. Corballis, M. (2003). From mouth to hand: Gesture, speech, and the evolution of right-handedness. Behavioral and Brain Sciences, 26(02).
    34. Cross, S. E. and Madson, L. (1997). Models of the self: Self-construals and gender. Psychological Bulletin, 122, 5-37.
    35. DePaulo, B. and Friedman, H. S. (1998). Nonverbal communication. In D. T. Gilbert, S. T. Fiske and G. Lindzey (eds), The handbook of social psychology (4th edn, Vol. 2, pp. 3-40). New York: McGraw-Hill.
    36. Diehl, M. (1990). The minimal group paradigm: Theoretical explanations and empirical findings. European Review of Social Psychology, 1, 263-292.
    37. Durkin, K. (1995). Developmental social psychology: From infancy to old age. Oxford, UK: Blackwell.
    38. Edwards, D. and Potter, J. (eds) (1992). Discursive psychology. London: SAGE.
    39. Edwards, J. (1994). Multilingualism. London: Routledge.
    40. Elliot, A. J. (1981). Child language. Cambridge, UK: Cambridge University Press.
    41. Fazio, R. H., Effrein, E. A. and Falender, V. J. (1981). Self-perceptions following social interactions. Journal of Personality and Social Psychology, 41, 232-242.
    42. Fishman, J. A. (1972). Language and nationalism. Rowley, MA: Newbury House.
    43. Fiske, S. T., Cuddy, A. and Glick, P. (2007). Universal dimensions of social perception: Warmth and competence. Trends in Cognitive Sciences, 11, 77-83.
    44. Fogassi, L. and Ferrari, P.F. (2007). Mirror neurons and the evolution of embodied language. Current directions in Psychological Science, 16, 136-141.
    45. Forgas, J. P. (1985). Interpersonal behaviour. Sydney: Pergamon Press.
    46. Frick, R. W. (1985). Communication emotions: The role of prosodic features. Psychological Bulletin, 97, 412-429
    47. Furnham, A. (1986). Some explanations for immigration to, and emigration from, Britain. New Community, 13, 65-78.
    48. Gallois, C. and Callan, V. J. (1986). Decoding emotional messages: Influence of ethnicity, sex, message type, and channel. Journal of Personality and Social Psychology, 51, 755-762.
    49. Gallois, C., Barker, M., Jones, E. and Callan, V. J. (1992). Intercultural communication: Evaluations of lecturers and Australian and Chinese students. In S. Iwakaki, Y. Kashima and K. Leung (eds), Innovations in cross-cultural psychology (pp. 86-102). Amsterdam: Swets and Zeitlinger.
    50. Gallois, C., Callan, V. J. and Johnstone, M. (1984). Personality judgements of Australian Aborigine and white speakers: Ethnicity, sex and context. Journal of Language and Social Psychology, 3, 39-57.
    51. Gallois, C., Ogay, T. and Giles, H. (2005). Communication accommodation theory: A look back and a look ahead. In W. Gudykunst (ed.), Theorizing about intercultural communication (pp. 121-148). Thousand Oaks, CA: SAGE.
    52. Gao, G. (1996). Self and other: A Chinese perspective on interpersonal relationships. In W. B. Gudykunst, S. Ting-Toomey and T. Nishida (eds), Communication in personal relationships across cultures (pp. 81-101). Thousand Oaks, CA: SAGE.
    53. Gardner, R. A. and Gardner, B. T. (1971). Teaching sign language to a chimpanzee. Science, 165, 664-672.
    54. Gardner, R. C. (1979). Social psychological aspects of second language acquisition. In H. Giles and R. St Clair (eds), Language and social psychology (pp. 193-220). Oxford, UK: Blackwell.
    55. Geertz, C. (1975). On the nature of anthropological understanding. American Scientist, 63, 47-53.
    56. Gergen, K. J. (1971). The concept of self. New York: Holt, Rinehart and Winston.
    57. Giles, H. (ed.) (1984). The dynamics of speech accommodation theory. International Journal of the Sociology of Language, 46, whole issue.
    58. Giles, H. (ed.) (2012). The handbook of intergroup communication. New York: Routledge.
    59. Giles, H. and Byrne, J. L. (1982). The intergroup model of second language acquisition. Journal of Multilingual and Multicultural Development, 3, 17-40.
    60. Giles, H. and Coupland, N. (1991). Language: Contexts and consequences. Milton Keynes, UK: Open University Press.
    61. Giles, H. and Johnson, P. (1981). The role of language in ethnic group relations. In J. C. Turner and H. Giles (eds), Intergroup behaviour (pp. 199-243). Oxford, UK: Blackwell.
    62. Giles, H. and Johnson, P. (1987). Ethnolinguistic identity theory: A social psychological approach to language maintenance. International Journal of the Sociology of Language, 68, 66-99.
    63. Giles, H. and Noels, K. A. (2002). Communication accommodation in intercultural encounters. In T. K. Nakayama and L. A. Flores (eds), Readings in cultural contexts (pp. 117-126). Boston, MA: McGraw-Hill.
    64. Giles, H. and Powesland, P.F. (1975). Speech style and social evaluation. London: Academic Press.
    65. Giles, H., Bourhis, R. Y. and Taylor, D. M. (1977). Towards a theory of language in ethnic group relations. In H. Giles (ed), Language, ethnicity and intergroup relations (pp. 307-48). London: Academic Press.
    66. Giles, H., Mulac, A., Bradac, J. J. and Johnson, P. (1987). Speech accommodation theory: The next decade and beyond. In M. McLaughlin (ed.), Communication yearbook (Vol. 10, pp. 13-48). Newbury Park, CA: SAGE.
    67. Giles, H., Reid, S. and Harwood, J. (eds) (2010). The dynamics of intergroup communication. New York: Peter Lang.
    68. Giles, H., Rosenthal, D. and Young, L. (1985). Perceived ethno-linguistic vitality: The Anglo-and Greek-American setting. Journal of Multilingual and Multicultural Development, 6, 253-69.
    69. Giles, H., Taylor, D. M. and Bourhis, R. Y. (1973). Towards a theory of interpersonal accommodation through language: Some Canadian data. Language in Society, 2, 177-192.
    70. Golinkoff, R. M. and Hirsh-Pasek, K. (2006). Baby wordsmith: From associationist to social sophisticate. Current Directions in Psychological Science, 15, 30-33.
    71. Grant, F. and Hogg, M. A. (2012). Self-uncertainty, social identity prominence and group identification. Journal of Experimental Social Psychology, 48, 538-542.
    72. Greenwald, A. G. (1980). The totalitarian ego: Fabrication and revision of personal history. American Psychologist, 35, 603-618.
    73. Hagoort, P. and Levelt, W. J. M. (2009). The speaking brain. Science, 326, 372-374
    74. Hall, E. T. (1979). Gender, gender roles, and nonverbal communication. In R. Rosenthal (ed.), Skill in nonverbal communication (pp. 32-67). Cambridge, MA: Oelgeschlager, Gunn and Hain.
    75. Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style. Baltimore, MD: Johns Hopkins University Press.
    76. Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85, 845-857.
    77. Hall, K. (2000). Performativity. Journal of Linguistic Anthropology, 9, 184-187.
    78. Harrington, J. (2006). An acoustic analysis of ‘happy-tensing’ in the Queen’s Christmas broadcasts. Journal of Phonetics, 34, 439-57.
    79. Haslam, N., Rothschild, L. and Ernst, D. (1998). Essentialist beliefs about social categories. British Journal of Social Psychology, 39, 113-127.
    80. Hauser, M. D., Chomsky, N. and Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298, 1569-1579.
    81. Heine, S. J. (2010). Cultural psychology. In S. T. Fiske, D. T. Gilbert, and G. Lindzey (eds), Handbook of social psychology (5th edn, Vol. 2, pp. 1423-1464). New York: Wiley.
    82. Heine, S. J. (2012). Cultural psychology (2nd edn). New York: Norton.
    83. Hewes, G. W. (1957). The anthropology of posture. Scientific American, 196, 123-132.
    84. Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.
    85. Higgins, E. T. (1998). Promotion and prevention: Regulatory focus as a motivational principle. In M. P. Zanna (ed.), Advances in experimental social psychology (Vol. 30, pp. 1-46). New York: Academic Press.
    86. Higgins, E. T. and Silberman, I. (1998). Development of regulatory focus: Promotion and prevention as ways of living. In J. Heckhausen and C. S. Dweck (eds), Motivation and self-regulation across the lifespan (pp. 78-113). New York: Cambridge University Press.
    87. Higgins, E. T. and Tykocinski, O. (1992). Self-discrepancies and biographical memory: Personality and cognition at the level of psychological situation. Personality and Social Psychology Bulletin, 18, 527-535.
    88. Higgins, E. T., Van Hook, E. and Dorfman, D. (1988). Do self-attributes form a cognitive structure? Social Cognition, 6, 177-207.
    89. Higgins, E. T., Roney, C., Crowe, E. and Hymes, C. (1994). Ideal versus ought predilections for approach and avoidance: Distinct self-regulatory systems. Journal of Personality and Social Psychology, 66, 276-286.
    90. Hoffman, C., Lau, I. and Johnson, D. R. (1986). The linguistic relativity of person cognition: An English-Chinese comparison. Journal of Personality and Social Psychology, 51, 1097-1105.
    91. Hogg, M. A. (1985). Masculine and feminine speech in dyads and groups: A study of speech style and gender salience. Journal of Language and Social Psychology, 4, 99-112.
    92. Hogg, M. A., D’Agata, P. and Abrams, D. (1989). Ethnolinguistic betrayal and speaker evaluations among Italian Australians. Genetic, Social and General Psychology Monographs, 115, 153-181.
    93. Hogg, M. A., Joyce, N. and Abrams, D. (1984). Diglossia in Switzerland? A social identity analysis of speaker evaluations. Journal of Language and Social Psychology, 3, 185-196.
    94. Hollingshead, A. B. (2001). Communication technologies, the internet, and group research. In M. A. Hogg and R. S. Tindale (eds), Blackwell handbook of social psychology: Group processes (pp. 557-573). Oxford, UK: Blackwell.
    95. Holtgraves, T. (2010). Social psychology and language: Words, utterances and conversations. In S. T. Fiske, D.T. Gilbert, and G. Lindzey (eds), Handbook of social psychology (5th edn, Vol. 2, pp. 1386-1422). New York: Wiley.
    96. Ickes, W., Gesn, P. R. and Graham, T. (2000). Gender differences in empathic accuracy: Differential ability or differential motivation? Personal Relationships, 7, 95-109.
    97. Jones, E. E. and Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the causes of behaviour. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins and B. Weiner (eds), Attribution: Perceiving the causes of behaviour (pp. 79-94). Morristown, NJ: General Learning Press.
    98. Knapp, M. L. (1978). Nonverbal communication in human interaction (2nd edn). New York: Holt, Rinehart and Winston.
    99. Kraus, N., and Banai, K. (2007). Auditory-processing malleability: Focus on language and music. Current Directions on Psychological Science, 16, 105-110.
    100. Lakoff, R. (1973). Language and women’s place. Language in Society, 2, 45-80.
    101. Lambert, W. E., Hodgson, R. C., Gardner, R. C. and Fillenbaum, S. (1960). Evaluation reactions to spoken language. Journal of Abnormal and Social Psychology, 60, 44-51.
    102. Leary, M. R. and Tangney, J. P. (2012). Handbook of self and identity (2nd edn). New York: Guilford.
    103. Limber, J. (1977). Language in child and chimp? American Psychologist, 32, 280-295.
    104. Linville, P. W. (1985). Self-complexity and affective extremity: Don’t put all your eggs in one cognitive basket. Social Cognition, 3, 94-120.
    105. Linville, P. W. (1987). Self-complexity as a cognitive buffer against stress-related depression and illness. Journal of Personality and Social Psychology, 52, 165-188.
    106. Lock, A. (1980). The guided reinvention of language. London: Academic Press.
    107. Lock, A. (ed.) (1978). Action, gesture and symbol: The emergence of language. London: Academic Press.
    108. Lockwood, P., Jordan, C. H. and Kunda, Z. (2002). Motivation by positive or negative role models: Regulatory focus determines who will best inspire us. Journal of Personality and Social Psychology, 83, 854-864.
    109. Lorenzi-Cioldi, F. and Clémence, A. (2001). Group processes and the construction of social representations. In M. A. Hogg and R. S. Tindale (eds), Blackwell handbook of social psychology: Group processes (pp. 311-333). Oxford, UK: Blackwell.
    110. Manstead, A. S. R. (1992). Gender differences in emotion. In A. Gale and M. W. Eysenck (eds), Handbook of individual differences: Biological perspectives (pp. 355-387). Oxford, UK: Wiley.
    111. Markus, H. (1977). Self-schemata and processing information about the self. Journal of Personality and Social Psychology, 35, 63-78.
    113. Markus, H. and Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98, 224-253.
    114. Markus, H. and Nurius, P. (1986). Possible selves. American Psychologist, 41, 954-969.
    115. Markus, H. and Sentis, K. P. (1982). The self in social information processing. In J. Suls (ed.), Psychological perspectives on the self (Vol. 1, pp. 41-70). Hillsdale, NJ: Erlbaum.
    116. Markus, H. and Wurf, E. (1987). The dynamic self-concept: A social-psychological perspective. Annual Review of Psychology, 38, 299-337.
    117. Matsumoto, D. and Hwang, H. S. (2011). Evidence for training the ability to read microexpressions of emotions. Motivation and Emotion, 35, 181-191.
    118. McKinlay, A. and McVittie, C. (2008). Social psychology and discourse. Oxford, UK: Wiley-Blackwell.
    119. McNamara, T. F. (1987). Language and social identity: Israelis abroad. Journal of Language and Social Psychology, 6, 215-28.
    120. Moscovici, S. (1961). La psychanalyse: Son image et son public. Paris: Presses Universitaires de France.
    121. Mulac, A., Studley, L. B., Wiemann, J. M. and Bradac, J. J. (1987). Male/female gaze in same-sex and mixed-sex dyads: Gender-linked differences and mutual influence. Human Communication Research, 27, 121-152.
    122. Ng, S. H. and Bradac, J. J. (1993). Power in language. Thousand Oaks, CA: SAGE.
    123. Noels, K. A., Pon, G. and Clément, R. (1996). Language and adjustment: The role of linguistic self-confidence in the acculturation process. Journal of Language and Social Psychology, 15, 246-264.
    124. Noller, P. (1984). Nonverbal communication and marital interaction. Oxford, UK: Pergamon Press.
    125. Oakes, P. J., Haslam, S. A. and Reynolds, K. J. (1999). Social categorization and social context: Is stereotype change a matter of information or of meaning? In D. Abrams and M. A. Hogg (eds), Social identity and social cognition (pp. 55-79). Oxford, UK: Blackwell.
    126. Otten, S. and Wentura, D. (1999). About the impact of automaticity in the minimal group paradigm: Evidence from affective priming tasks. European Journal of Social Psychology, 29, 1049-1071.
    127. Oyserman, D., Coon, H. M. and Kemmelmeier, M. (2002). Rethinking individualism and collectivism: Evaluation of theoretical assumptions and meta-analyses. Psychological Bulletin, 128, 3-72.
    128. Patterson, F. (1978). Conversations with a gorilla. National Geographic, 154, 438-465.
    129. Pei, M. (1965). The story of language (2nd edn). Philadelphia, PA: Lippincott.
    130. Pollick, A. and de Waal, F. (2007). Ape gestures and language evolution. Proceedings of the National Academy of Sciences, 104, 8184-8189.
    131. Potter, J. and Wetherell, M. S. (1987). Discourse and social psychology: Beyond attitudes and behaviour. London: SAGE.
    132. Roccas, S. and Brewer, M. B. (2002). Social identity complexity. Personality and Social Psychology Review, 6, 88-109.
    133. Rommetveit, R. (1974). On message structure: A framework for the study of language and communication. New York: Riley.
    134. Rosenthal, R., Hall, J. A., DiMatteo, J. R., Rogers, P. L. and Archer, D. (1979). Sensitivity to nonverbal communication: The PONS test. Baltimore, MD: Johns Hopkins University Press.
    135. Rosip, J. C. and Hall, J. A. (2004). Knowledge of nonverbal cues, gender, and nonverbal decoding accuracy. Journal of Nonverbal Behaviour, 28, 267-286.
    136. Sachdev, I. and Wright, A. (1996). Social influence and language learning: An experimental study. Journal of Language and Social Psychology, 15, 230-245.
    137. Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., and Halgren, E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca’s area. Science, 326, 445-450.
    138. Saxe, R., Moran, J. M., Scholz, J. and Gabrieli, J. (2006). Overlapping and non-overlapping brain regions for theory of mind and self-reflection in individual subjects. Social Cognitive and Affective Neuroscience, 1, 229-234.
    139. Schachner, D. A., Shaver, P. R. and Mikulincer, M. (2005). Patterns of nonverbal behaviour and sensitivity in the context of attachment relationships. Journal of Nonverbal Behaviour, 29, 141-169.
    140. Scherer, K. R. and Giles, H. (eds) (1979). Social markers in speech. Cambridge, UK: Cambridge University Press.
    141. Scott, S. K., McGettigan, C. and Eisner, F. (2009). A little more conversation, a little less action – candidate roles for the motor cortex in speech production. Nature Reviews Neuroscience, 10, 295-302.
    142. Searle, J. R. (1979). Expression and meaning: Studies in the theory of speech acts. Cambridge, UK: Cambridge University Press.
    143. Sedikides, C. and Brewer, M. B. (eds) (2001). Individual self, relational self, and collective self. Philadelphia, PA: Psychology Press.
    144. Seeley, E. A., Gardner, W. L., Pennington, G. and Gabriel, S. (2003). Circle of friends or members of a group? Sex differences in relational and collective attachment to groups. Group Processes and Intergroup Relations, 6, 251-263.
    145. Semin, G. (2007). Grounding communication: Synchrony. In A. W. Kruglanski and E. T. Higgins (eds), Social psychology: Handbook of basic principles (2nd edn, pp. 630-649). New York: Guilford Press.
    146. Shah, J. Y., Brazy, P. C. and Higgins, E. T. (2004). Promoting us or preventing them: Regulatory focus and manifestations of intergroup bias. Personality and Social Psychology Bulletin, 30, 433-446.
    147. Shah, J. Y., Higgins, E. T. and Friedman, R. S. (1998). Performance incentives and means: How regulatory focus influences goal attainment. Journal of Personality and Social Psychology, 74, 285-293.
    148. Simard, L., Taylor, D. M. and Giles, H. (1976). Attribution processes and interpersonal accommodation in a bi-lingual setting. Language and Speech, 19, 374-387.
    149. Slocombe, K. E., and Zuberbuhler, K. (2007). Chimpanzees modify recruitment screams as a function of audience composition. Proceedings of the National Academy of Sciences, USA, 104, 17228-17233
    150. Smolicz, J. J. (1983). Modification and maintenance: Language among school children of Italian background in South Australia. Journal of Multilingual and Multicultural Development, 4, 313-337.
    151. Stapel, D. A. and Semin, G. R. (2007). The magic spell of language: Linguistic categories and their perceptual consequences. Journal of Personality and Social Psychology, 93, 23-33.
    152. Swann, W. B., Jr and Bosson, J. K. (2010). Self and identity. In S. T. Fiske, D. T. Gilbert, and G. Lindzey (eds), Handbook of social psychology (5th edn, Vol. 1, pp. 589-628). New York: Wiley.
    153. Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223, 96-102.
    154. Tajfel, H. and Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin and S. Worchel (eds), The social psychology of intergroup relations (pp. 33-47). Monterey, CA: Brooks/Cole.
    155. Thakerar, J. N., Giles, H. and Cheshire, J. (1982). Psychological and linguistic parameters of speech accommodation theory. In C. Fraser and K. R. Scherer (eds), Advances in the social psychology of language (pp. 205-255). Cambridge, UK: Cambridge University Press.
    156. Trager, G. L. (1958). Paralanguage: A first approximation. Studies in Linguistics, 13, 1-12.
    157. Triandis, H. C. (1989). The self and social behaviour in differing cultural contexts. Psychological Review, 96, 506-520.
    158. Turner, J. C., Reynolds, K. J., Haslam, S. A. and Veenstra, K. E. (2006). Reconceptualizing personality: Producing individuality by defining the personal self. In T. Postmes and J. Jetten (eds), Individuality and the group: Advances in social identity (pp. 11-36). London: SAGE.
    159. Vignoles, V. L., Chryssochoou, X. and Breakwell, G. M. (2000). The distinctiveness principle: Identity, meaning, and the bounds of cultural relativity. Personality and Social Psychology Review, 4, 337-354.
    160. Vygotsky, L. S. (1962). Thought and language. New York: Wiley.
    161. Watson, K. (2009). Regional variations in English accents and dialects. In J. Culpeper, F. Katamba, P. Kerswill, R. Wodak, and T. McEnery (eds), English language: Description, variation and context (pp. 337-357). Basingstoke, UK: Palgrave Macmillan.
    162. Whorf, B. L. (1956). Language, thought and reality. Cambridge, MA: MIT Press.
    163. Yuki, M. (2003). Intergroup comparison versus intragroup relationships: A cross-cultural examination of social identity theory in North American and East Asian cultural contexts. Social Psychology Quarterly, 66, 166-183.

Updated 11.05.2017 | Danny J. D’Purb |



Essay // Psychology: Causes of Aggressive Behaviour in Human Primates

Aggression has been studied in experimental and naturalistic settings; however, its precise definition has caused a great deal of controversy among researchers. Some behaviours, such as physically pushing, shoving and striking, may clearly qualify as aggressive, while in other situations aggression may also include ostracising individuals – which has been shown to produce aggressive reactions (DeWall, Twenge, Bushman, Im and Williams, 2010; Wesselmann, Butler, Williams and Pickett, 2010; Warburton, Williams and Cairns, 2006; Williams and Warburton, 2003). What is defined as aggressive is believed to be partly shaped by societal and cultural norms: for example, among the Amish of Pennsylvania, ostracism is considered incredibly harsh, whereas among gang subcultures mutilation and murder may be seen as commonplace [situational].

UK Riots

Riot police face a mob in Hackney, North London on August 8, 2011. Riot police faced off with youths in fresh violence in London on the third day of disorder, after some of the worst rioting in the British capital in years. The riots broke out in the North London district of Tottenham on August 6, following a protest against the death of a local man in a police shooting, and the violence spread to other parts of the city on August 7. AFP PHOTO/KI PRICE

Finding the reasons behind acts of violence among humans is generally explored through three major theoretical perspectives:

  • Biological Theories

  • Biosocial Theories

  • Social Theories

The biological explanations side with the nature pole of the nature-nurture controversy, where most social psychologists tend to disagree, since some such theories seem to threaten any form of social theory. Biological researchers assume aggression to be part of every human organism and thus an innate action tendency where modification of the behaviour is possible [but not of the organism itself]; an instinct defined by Riopelle (1987) as goal directed, terminating in a specific outcome, beneficial to the individual and its species, adapted to a normal environment, shared by most members of the species, developed in clear ways during maturation, and unlearned on the basis of individual experience. All three major approaches relating to the biological model have argued that aggression is an inherent part of human nature, programmed at birth to manifest in a particular way.


The first biological theory, put forward by Sigmund Freud in Beyond the Pleasure Principle (1920/1990) within the psychodynamic model, holds that human aggression stems from Thanatos, the ‘death instinct’, which stands in opposition to Eros, the ‘life instinct’. Thanatos is believed to be initially self-destructive but is later redirected towards others during development. Freud conceived the death instinct partly as a response to the atrocities of the First World War. Similarly to sexual urges stemming from Eros, Thanatos releases an aggressive urge believed to build up naturally from bodily tensions, which has to be released [a one-factor theory]. Later neo-Freudian theorists revised the idea and defined aggression as a more rational, yet still innate, process whereby individuals seek a healthy release for primitive survival instincts basic to all animal species (Hartmann, Kris and Loewenstein, 1949).


Ethology, a branch of biology dedicated to the study of instincts among all members of a species living in their natural environment, led to three books [Konrad Lorenz’s On Aggression (1966), Robert Ardrey’s The Territorial Imperative (1966) and Desmond Morris’s The Naked Ape (1967)] which made the case for the instinctual basis of human aggression on the grounds of comparison with animal behaviour. Similar to the neo-Freudian belief in aggression as an innate instinct, the behaviour itself is elicited by specific environmental stimuli known as ‘releasers’. Lorenz attributed survival value to aggression as an ethological argument: animals behave aggressively towards members of their own species in order to distribute individuals and/or family units so as to make the most efficient use of available resources [sexual selection, mating, food and territory]. Lorenz (1966) extended the argument to humans, arguing for an inherited fighting instinct.

Chuck Palahniuk

Chuck Palahniuk’s 1996 novel, Fight Club, a transgressive work of fiction, captured the gritty battle of modern men dealing with their emotions in a restrictive & often rigid industrialised civilization that hardly accommodates the “humane” side of humans

Finally, evolutionary social psychology is an ambitious model that not only assumes an innate basis for aggression but also claims a biological basis for ALL social behaviours (Caporael, 2007; Kenrick, Maner and Li, 2005; Neuberg, Kenrick and Schaller, 2010; Schaller, Simpson and Kenrick, 2006). Derived from strict Darwinian logic, this theory proposes that specific behaviours evolved because they promote the survival of genes that allow an individual to live long enough to pass them on to the next generation. Aggression is assumed to be adaptive and linked to living long enough to procreate, for example through the defence and protection of self and/or others [helpful to an individual and its species].


Type A personality is a pattern of behaviour classified following research by Matthews (1982), which described individuals in this category as overactive and excessively competitive in their encounters; they may be more aggressive towards others competing with them on an important task (Carver and Glass, 1978). Under stress, Type A individuals generally prefer working alone, as such arrangements may shield them from incompetent others and leave them in control of the situation (Dembroski and MacDougall, 1978). The characterisation has also been linked to proneness to child abuse (Strube, Turner, Cerro, Stevens and Hinchey, 1984) and to more conflicts in managerial roles in organisational settings with peers and subordinates, but not supervisors [controlled aggression].


(Image: Testosterone in men)

To conclude with the biological theories, the last aggression trigger explored is the effect of hormones. Testosterone was found to have a small correlation of 0.14 with aggression in a study conducted by Berman, Gladue and Taylor (1993), in which testosterone levels were measured and the participants divided into Type A and Type B personality groups. More convincing evidence came from the Netherlands, from two studies (Cohen-Kettenis and Van Goozen, 1997; Van Goozen, Cohen-Kettenis, Gooren, Frijda and Van der Poll, 1995) in which increased or decreased proneness to aggression was observed following hormone administration for sex reassignment, depending on whether the gender change was female-to-male or male-to-female.
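To give an intuition for how weak a correlation of 0.14 is, the Pearson coefficient can be computed by hand. The sketch below uses entirely made-up numbers (the testosterone and aggression values are hypothetical, not Berman et al.'s data); the point is simply that r² is the proportion of variance explained, so r ≈ 0.14 accounts for only about 2% of the variability in aggression.

```python
# Pearson's r = cov(x, y) / (sd_x * sd_y) -- a hand-rolled version using
# only the standard library. Data below are illustrative, NOT real results.
from statistics import mean, stdev

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# hypothetical testosterone levels (nmol/L) vs. aggression scores
testosterone = [4.1, 5.3, 6.0, 4.8, 7.2, 5.5, 6.8, 4.0]
aggression   = [10, 14, 9, 12, 13, 8, 15, 11]

r = pearson_r(testosterone, aggression)
print(round(r, 2), round(r ** 2, 3))   # r**2 = proportion of variance explained
```

A value of r = 0.14 gives r² ≈ 0.02, which is why the essay calls the hormonal evidence from this study "small" compared with the Dutch sex-reassignment studies.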

Biosocial theories draw on both the nature and nurture sides of the controversy, emphasising the role of social learning and context, and in some cases incorporating a biological element. Derived from the work of Yale psychologists in the 1930s, one of these theories is the frustration-aggression hypothesis. It links aggression to an antecedent condition of frustration and was used to explain prejudice. Dollard, Doob, Miller, Mowrer and Sears (1939) proposed that aggression was always caused by some kind of frustrating event or situation, a reasoning later applied to the effects of job loss on violence (Catalano, Novaco and McConnell, 1997) and to the role of socio-economic deprivation in the ethnic cleansing of Kurds in Iraq and non-Serbs in Bosnia (Dutton, Boyanowsky and Bond, 2005; Staub, 1996, 2000).

Ernesto “Che” Guevara

Ernesto “Che” Guevara (June 14, 1928 – October 9, 1967) // A doctor, author, diplomat, guerilla leader & military theorist who led a revolutionary uprising against an oppressive & abusive system’s dictatorial military regime, and who became a symbol of the human struggle for justice & fairness worldwide [A 2-part movie by Steven Soderbergh based on his diaries was released in 2008 (Trailer Available Here)]

Speculation over the ineffectiveness of other mechanisms for achieving socioeconomic and cultural goals is also believed to be a cause of militant/terrorist aggression [e.g. individuals are unlikely to resort to violence unless all channels of social improvement have proved ineffective (e.g. prejudice, adaptive failure, lack of skills, etc.)]. The second biosocial theory invokes the concept of drive and is known as Dolf Zillmann’s (1979, 1988) excitation-transfer model. This approach defines aggression as a function of a learned behaviour, an arousal or excitation from another source, and an individual’s interpretation of the arousal state such that aggressive responses seem appropriate. Residual arousal is believed to transfer from one situation to another in a way that increases the likelihood of an aggressive response.

Lastly, Social Learning Theory [SLT] assumes aggression can be learnt, as observed in the gradual control of aggressive impulses in early infancy (Miles and Carey, 1997). It also describes the processes responsible for the acquisition of a behaviour (or behaviour sequence), the instigation of overt acts, and the maintenance of a behaviour. It was applied to aggression by Bandura (1973), who noted that if antisocial behaviours can be learnt, so can prosocial ones. French sociologist Gabriel Tarde’s (1890) book, Les lois de l’imitation, asserted that ‘Society is imitation’. SLT is unique in proposing that imitated behaviours must be seen as rewarding ‘in some way’ [learning can come from models: peers, parents and siblings, but also extends to media exposure]. Bandura believed that aggression from an individual in a particular situation depends on his/her previous experiences of others’ aggressive behaviour, how successful aggressive behaviour has been in the past, the current likelihood that aggression will be rewarded or punished, and the complex cognitive, social and environmental factors at play in the given situation.

Other issues relating to aggression include catharsis, which refers to the process of aggression acting as an outlet or release for pent-up emotion [the cathartic hypothesis]. Although it is associated with Sigmund Freud, the idea can be traced back to Aristotle and ancient Greek tragedy: by acting out their emotions, people can purify their feelings (Scherer, Abeles and Fischer, 1975).

Alcohol has also been linked to aggression through the disinhibition hypothesis, which holds that cortical control is compromised by alcohol, leading to increased activity in the more primitive areas of the brain. The link between alcohol and aggressive behaviour seems firmly established (Bartholow, Pearson, Gratton and Fabiani, 2003; Bushman and Cooper, 1990; Giancola, 2003), and controlled behavioural studies suggest a causal relationship (Bailey and Taylor, 1991; LaPlace, Chermack and Taylor, 1994).




  1. Bailey, D.S. and Taylor, S.P. (1991). Effects of alcohol and aggressive disposition on human physical aggression. Journal of Research in Personality, 25, 334-342
  2. Bandura, A. (1973). A social learning analysis. Englewood Cliffs, NJ: Prentice Hall.
  3. Bartholow, B.D., Pearson, M.A., Gratton, G. and Fabiani, M. (2003). Effects of alcohol on person perception: A social cognitive neuroscience approach. Journal of Personality and Social Psychology, 85, 627-638.
  4. Berman, M., Gladue, B. and Taylor, S. (1993). The effects of hormones, Type A behavior pattern, and provocation on aggression in men. Motivation and Emotion, 17(2), pp.125-138.
  5. Bushman, B.J. and Cooper, H.M. (1990). Effects of alcohol on human aggression: An integrative research review. Psychological Bulletin, 107, 341-354
  6. Caporael, L. R. (2007). Evolutionary theory for social and cultural psychology. In A. W. Kruglanski and E. T. Higgins (eds), Social psychology: Handbook of basic principles (2nd edn, pp. 3-18). New York: Guilford Press.
  7. Carver, C. and Glass, D. (1978). Coronary-prone behavior pattern and interpersonal aggression. Journal of Personality and Social Psychology, 36(4), pp.361-366.
  8. Catalano, R., Novaco, R. and McConnell, W. (1997). A model of the net effect of job loss on violence. Journal of Personality and Social Psychology, 72, 1440-1447
  9. Cohen-Kettenis, P.T. and Van Goozen, S.H.M. (1997). Sex reassignment of adolescent transsexuals: A follow-up study. Journal of the American Academy of Child and Adolescent Psychiatry, 36, 263-71
  10. Dollard, J., Doob, L.W., Miller, N.E., Mowrer, O.H. and Sears, R.R. (1939). Frustration and aggression. New Haven, CT: Yale University Press
  11. Dutton, D.G., Boyanowksy, E.H. and Bond, M.H. (2005). Extreme mass homicide: From military massacre to genocide. Aggression and Violent Behavior, 10, 437-473
  12. Giancola, P.R. (2003). Individual differences and contextual factors contributing to the alcohol-aggression relation: diverse populations, diverse methodologies: An introduction to the special issue. Aggressive Behaviour, 29, 285-287
  13. Hartmann, H., Kris, E. and Lowenstein, R. M. (1949). Notes on a theory of aggression. Psychoanalytic Study of the Child, 3-4, 9-36
  14. Kenrick, D. T., Maner, J. K. and Li, N. P. (2005). Evolutionary social psychology. In D. M. Buss (ed.), Handbook of evolutionary psychology (pp. 803-827). New York: Wiley.
  15. LaPlace, A.C., Chermack, S.T. and Taylor, S.P. (1994). Effects of alcohol and drinking experience on human physical aggression. Personality and Social Psychology Bulletin, 20, 439-444
  16. Lorenz, K. (1966). On aggression. New York: Harcourt, Brace and World.
  17. Neuberg, S.L., Kenrick, D.T. and Schaller, M. (2010). Evolutionary social psychology. In S.T. Fiske, D.T. Gilbert, and G. Lindzey (eds), Handbook of social psychology (5th edn, Vol. 2, pp. 761-796). New York: Holt, Rinehart and Winston
  18. Schaller, M., Simpson, J. and Kenrick, D. (2006). Evolution and social psychology. New York, NY: Psychology Press.
  19. Staub, E. (1996). Cultural-societal roots of violence: The example of genocidal violence and contemporary youth violence in the United States. American Psychologist, 51, 117-132
  20. Staub, E. (2000). Genocide and mass killings: Origins, prevention, healing and reconciliation. Political Psychology, 21, 367-382
  21. Strube, M.J., Turner, C.W., Cerro, D., Stevens, J. and Hinchey, F. (1984). Interpersonal aggression and the type A coronary-prone behaviour pattern: A theoretical distinction and practical implications. Journal of Personality and Social Psychology, 47, 839-847
  22. Tarde, G. (1890). Les lois de l’imitation. Paris: Libraire Felix Alcan.
  23. Van Goozen, S.H.M., Cohen-Kettenis, P.T., Gooren, L.J.G., Frijda, N.H. and Van der Poll, N.E. (1995). Gender differences in behaviour: Activating effects of cross-sex hormones. Psychoneuroendocrinology, 20, 343-363
  24. Zillmann, D. (1979). Hostility and aggression. Hillsdale, NJ: Erlbaum
  25. Zillmann, D. (1988). Cognition-excitation interdependencies in aggressive behaviour. Aggressive Behaviour, 14, 51-64


22.01.2016 | Danny J. D’Purb |



Essay // Biopsychology: Frontal Brain Damage & The Wisconsin Card Sorting Test (WCST)


(Photo: Jez C Self / Frontal Lobe Gone)

The Wisconsin Card Sorting Test (WCST; Grant & Berg, 1948; Heaton, Chelune, Talley, & Curtis, 1993) has long been used in neuropsychology and is among the most frequently administered neuropsychological instruments (Butler, Retzlaff, & Vanderploeg, 1991).

The test was specifically devised to assess executive functions mediated by the frontal lobes such as problem solving, strategic planning, use of environmental instructions to shift procedures, and the inhibition of impulsivity. Some neuropsychologists however, have questioned whether the test can measure complex cognitive processes believed to be mediated by the Frontal lobes (Bigler, 1988; Costa, 1988).

The WCST remains widely used in clinical settings to this day, as frontal lobe injuries are common worldwide. Performance on the WCST is believed to be particularly sensitive in reflecting the possibility of frontal lobe damage (Eling, Derckx, & Maes, 2008). Each Wisconsin card is printed with a pattern composed of one, two, three or four identical symbols. The symbols are stars, triangles, crosses or circles, and are either red, blue, yellow or green.

At the start of the test, the patient faces four stimulus cards that differ from one another in the colour, form and number of symbols they display. The participant’s aim is to correctly sort cards from a deck into piles in front of the stimulus cards. However, the participant is not told whether to sort by form, colour or number. The participant generally starts by guessing and is told, after each card has been sorted, whether it was correct or incorrect.

The first sorting principle is typically colour; however, as soon as several correct responses have been registered, the sorting rule is changed to either shape or number without any notice, other than the fact that responses based on colour suddenly become incorrect. As the test continues, the sorting principle is changed each time the participant learns the new one.
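The feedback loop described above can be sketched as a toy simulation. This is a deliberately simplified, hypothetical version (one hidden rule, a silent switch after six consecutive correct sorts), not the clinical scoring procedure; the streak length and trial count are assumptions for illustration only.

```python
# Toy WCST sketch: the examiner holds a hidden sorting rule and gives only
# right/wrong feedback; the rule switches silently after a run of correct sorts.
import random

RULES = ["colour", "form", "number"]

def run_wcst(choose_dimension, n_trials=60, switch_after=6, seed=0):
    """Return how many sorts were correct over n_trials."""
    rng = random.Random(seed)
    rule = "colour"                 # the first hidden sorting principle
    streak = correct_total = 0
    for _ in range(n_trials):
        card = {d: rng.randrange(4) for d in RULES}  # card varies on all 3 dimensions
        guess = choose_dimension(card)               # participant picks a dimension
        correct = (guess == rule)                    # feedback: right/wrong only
        correct_total += correct
        streak = streak + 1 if correct else 0
        if streak == switch_after:                   # rule switches without notice
            rule = rng.choice([r for r in RULES if r != rule])
            streak = 0
    return correct_total

# A "perseverating" participant who ignores feedback and always sorts by colour
# is correct only until the first silent switch:
print(run_wcst(lambda card: "colour"))   # -> 6
```

The lambda at the end mimics the perseveration pattern discussed next: once the rule changes away from colour, such a participant never scores again, so the streak needed to trigger further switches never occurs.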

It has been noted that those with damage to the frontal lobe area often continue to sort according to one particular sorting principle for 100 or more trials, even after that principle has been signalled as incorrect (Demakis, 2003). The ability to use new instructions to guide effective behaviour is severely impaired in those with such brain damage: a problem known as ‘perseveration’.

Another widely used test is the ‘Stroop task’, which tests a patient’s ability to respond to the colour of the ink of displayed words under alternating instructions. Frontal patients are known to perform poorly when instructions change. As the central executive depends on the frontal lobe, other problems can arise, such as catatonia – a condition in which patients remain motionless and speechless for hours, unable to initiate action. Distractibility has also been observed, where sufferers are easily distracted by external or internal stimuli. Lhermitte (1983) also observed ‘utilisation behaviour’ in some patients with dysexecutive syndrome (Norman & Shallice, 1986), who would pathologically grab and use whatever objects were available to them.



Butler, M., Retzlaff, P., & Vanderploeg, R. (1991). Neuropsychological test usage. Professional Psychology: Research and Practice, 22, 510-512

Demakis, G. J. (2003). A meta-analytic review of the sensitivity of the Wisconsin Card Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology, 17, 255-264

Eling, P., Derckx, K., & Maes, R. (2008). On the historical and conceptual background of the Wisconsin Card Sorting Test. Brain and Cognition, 67, 247-253

Grant, D.A. and Berg, E.A. (1948). A Behavioural Analysis of Degree Impairment and Ease of Shifting to New Responses in Weigh-Type Card Sorting Problem. Journal of Experimental Psychology, 39, 404-411

Heaton, R.K., Chelune, G.J., Talley, J.L., Kay, G.G., & Curtis, G. (1993). Wisconsin Card Sorting Test manual: Revised and expanded. Odessa, FL: Psychological Assessment Resources

Lhermitte, F. (1983) “Utilization Behaviour” and its relation to lesions of the frontal lobes. Brain, 106, 237-255

Norman, D.A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behaviour. (Center for Human Information Processing Technical Report No. 99, rev. ed.) In R.J. Davidson, G.E. Schartz, & D. Shapiro (Eds.), Consciousness and self-regulation: Advances in research, (pp. 1-18). New York: Plenum Press


18.08.2014 | Danny J. D’Purb |



Essay // Biopsychology: How our Neurons work

(Image: WPClipArt)


As vast as our universe is, so are its complexities. One of the most complex objects in it remains the human brain, an organ which, when fully grown, requires 750 millilitres of oxygenated blood every minute to maintain normal activity – of the total amount of oxygen delivered to the body’s tissues by the arteries, 20% is consumed by the brain, which makes up only 2% of the body’s weight. It also contains around 100 billion neurons, each connected to some 7,000 others, giving a staggering 700 trillion connections. This complexity is far from excessive when we consider the importance of the brain for civilisation and all life on our planet. This fascinating organ not only handles low-level biological tasks such as heart rate monitoring, respiration and feeding, but is also vital to the behaviours we have evolved for survival (e.g. perceiving, learning and making rapid decisions). At the heart of human existence, it is also the organ allowing the human organism to explore abilities unique to its kind, such as thought, emotion, consciousness and love.

While nearly 50% of the Central Nervous System and Peripheral Nervous System are neurons, they are supported by glial cells. The ratio of Neurons to glial cells in the human brain is close to 1:1 (Azevedo et al., 2009) and glial cells come in 3 important types:

Firstly, astrocytes [also known as ‘star cells’] produce chemicals needed for neurons to function, regulate the extracellular fluid, provide nourishment [being linked to blood vessels] and clean up dead neurons. They also help hold neurons in place.

Secondly, oligodendrocytes support the axon by creating a myelin coating which increases the speed and efficiency of axonal conduction [in the PNS myelin is produced by Schwann cells].

Thirdly and lastly, microglia work with the immune system, protecting the brain from infections, while also being responsible for inflammation in cases of brain damage.

Neurons are cells that are devised to ensure the reception, conduction, and transmission of electrochemical signals and come in several types depending on their structure and function. The 3 main types are Multipolar, Bipolar and Unipolar neurons.


Most neurons in the brain are multipolar, with many extensions from the cell body: one axon and several dendrites. Bipolar neurons have two extensions – one of dendrites and one axon – and typically lie in specialised sensory pathways (e.g. vision, smell and hearing). Unipolar neurons are cells with a single extension (an axon) from the body and are mostly somatosensory (e.g. touch, pain, temperature, etc.). Although they exist in variety, all neurons perform the same overall function: to process and transmit information.

The neuron is composed of three main parts. Firstly, the cell body [also known as the ‘soma’] is the primary component of the neuron and integrates the inputs the neuron receives, passing them to the axon hillock. The soma is between 5 and 100 microns in diameter (a micron is one-thousandth of a millimetre), is surrounded by a membrane, and hosts the cytoplasm, the nucleus and a number of organelles. The cytoplasm resembles a jelly-like substance in continuous movement, while the nucleus contains the genetic code of the neuron, which is used for protein synthesis (e.g. of some types of neurotransmitters). The neuron’s metabolism depends on the organelles, which perform chemical synthesis, generate and store energy, and provide structural support (similar to a skeleton) for the neuron.

Secondly, we have the dendrites [derived from the Greek ‘dendron’], branched cellular extensions emanating from the cell body that receive most of the synaptic contacts from other neurons. It is important to note that dendrites only receive information from other neurons and cannot transmit any to them. Their purpose is to propagate information towards the axon.

Thirdly, axons, which can measure from a few millimetres up to a metre in length, transmit information from the soma to other neurons, ending in the terminal buttons, which store the chemicals used for inter-neuron communication. There are two types of axon. The first type – myelinated axons – are covered with a fatty, white substance known as myelin, a sheath with gaps at places known as the nodes of Ranvier. Myelin makes electrical transmission faster and more efficient by insulating the axon. Hence, in myelinated axons, myelin is vital for effective electrical transmission, and its loss leads to serious neurological diseases such as multiple sclerosis. The second type of axon is not covered by myelin, resulting in slower electrical transmission.


Neurons are always active, even when no information is being received from other neurons, and must feed themselves (through blood vessels), maintain physiological parameters within a certain range (homeostasis), and maintain their electrical equilibrium, which is essential in the transmission of information.

The terminal buttons [also known as axon terminals] are button-like endings of the axon branches which release information to other neurons via neurotransmitter molecules stored in synaptic vesicles. When incoming action potentials depolarise the terminal, calcium channels open and Ca2+ triggers the vesicles to fuse with the pre-synaptic membrane, releasing the neurotransmitter into the synaptic cleft [the gap between the two membranes]. The neurotransmitter then diffuses across the cleft and binds with receptors on the next neuron’s post-synaptic membrane, causing particular ion channels to open.

(Image: the synaptic cleft)

Postsynaptic potentials determine what happens next. An excitatory postsynaptic potential (EPSP) results from depolarisation (+ve), which increases the positive charge by allowing sodium (Na+) ions inside. The alternative is an inhibitory postsynaptic potential (IPSP), caused by hyperpolarisation (-ve) due to the opening of chloride (Cl-) channels. The axon hillock sums these potentials and, if the result reaches threshold, an action potential is triggered in the postsynaptic neuron; excess neurotransmitter is taken back up by the pre-synaptic neuron and degraded by enzymes.
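The summation at the axon hillock can be sketched as a toy calculation. This is not a biophysical model: the resting potential and threshold below are typical textbook values, and the PSP amplitudes are invented for illustration. EPSPs push the membrane towards threshold, IPSPs pull it away, and only the sum decides whether an action potential fires.

```python
# Toy sketch of summation at the axon hillock (not a biophysical model).
# EPSPs are positive deflections (mV), IPSPs negative; an action potential
# fires only if the summed membrane potential crosses the threshold.
RESTING = -70.0     # mV, typical resting membrane potential
THRESHOLD = -55.0   # mV, typical firing threshold

def fires(psps):
    """Sum postsynaptic potentials; return True if threshold is reached."""
    membrane = RESTING + sum(psps)
    return membrane >= THRESHOLD

print(fires([5, 4, 3]))         # EPSPs alone: -58 mV, below threshold -> False
print(fires([5, 4, 3, 6]))      # more excitation: -52 mV -> True
print(fires([5, 4, 3, 6, -8]))  # an IPSP pulls it back to -60 mV -> False
```

The third call shows why EPSPs and IPSPs are described as being summed rather than counted: a single sufficiently large IPSP can cancel enough excitation to keep the neuron silent.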

Learning is assumed to result from changes in the synapses between neurons – a mechanism called long-term potentiation (LTP), the strengthening of the connection between two neurons through synaptic chemical change. Hebbian learning is a key principle behind LTP: “neurons that fire together, wire together” (Hebb, 1949) – and recent studies also suggest that the growth of new synapses fosters learning.
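Hebb's principle can be written as a one-line update rule. The sketch below is a minimal illustration with made-up numbers (the initial weight, learning rate and activity pattern are assumptions): the synaptic weight grows in proportion to the product of pre- and post-synaptic activity, so only trials where both neurons are active strengthen the connection.

```python
# Minimal Hebbian update: delta_w = learning_rate * pre_activity * post_activity.
# "Neurons that fire together, wire together" -- the weight only grows when
# pre and post are active at the same time.
def hebbian_step(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.5  # arbitrary starting synaptic weight
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = hebbian_step(w, pre, post)

print(w)   # -> 0.7: only the two co-active (1, 1) trials strengthened the synapse
```

Note that this bare rule only ever increases weights; real LTP models add normalisation or decay terms, which are beyond the scope of this sketch.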

A neuron codes information through its “spiking rate” [response rate], the number of action potentials propagated per second. Some neurons may have a high spiking rate in some situations (e.g. during speech) but not others (e.g. during vision), while other neurons may have the complementary profile. Neurons that respond to the same type of information are generally grouped together, which leads to the functional specialisation of brain regions. The type of information a neuron carries is related to the input it receives and the output it sends to other neurons. For example, information about sounds is processed only by the primary auditory cortex because this region’s inputs come from a pathway originating in the cochlea, and it sends information on to other neurons involved in more advanced stages of auditory processing (e.g. speech perception). If it were possible to rewire the brain so that the primary auditory cortex received inputs from the retinal pathway instead of the auditory pathway (Sur & Leamey, 2001), the function of that part of the brain would change [along with the type of information it carries] even though the region itself remained static [with only its inputs rewired]. This is worth noting when one considers the function of a particular cerebral region: the function of any brain region is determined by its inputs and outputs – hence the extent to which a given function can only be achieved at a particular location remains open to debate.

Gray matter, white matter and cerebrospinal fluid

Neurons in the brain are structured to form white matter [axons and support cells: glia] and gray matter [neuronal cell bodies]. The white matter lies underneath the highly convoluted, folded sheet of gray matter [the cerebral cortex]. Beneath the white matter fibres there is another collection of gray matter structures [the subcortex], which includes the basal ganglia, the limbic system and the diencephalon. White matter tracts may project between different regions of the cortex within the same hemisphere [association tracts], between regions across different hemispheres [commissures, the most important being the corpus callosum], or between cortical and subcortical regions [projection tracts]. A number of hollow chambers called ventricles also form part of the brain; these are filled with cerebrospinal fluid (CSF), which serves important functions such as carrying waste metabolites and transferring messenger signals while providing a protective cushion for the brain.

Reflections: From biology to psychology

In the classic essay on the “Architecture of Complexity”, Simon (1996) noted that hierarchies are present everywhere, at every level of natural systems – taking physics as an example, in particular the way elementary particles form atoms, atoms form molecules, and molecules form more complex entities such as rocks. Extending this metaphor, we may also look at the organisation of a book: letters, words, sentences, paragraphs, sections and finally chapters.

In biological systems, a similar type of hierarchical structure can be found at many levels, particularly in the way the brain is organised. Simon seems to convincingly argue that complex systems’ evolution would have had to have benefited from some degree of stability, which is precisely enabled by hierarchical organisation. The main idea is that hierarchical organisations typically have a degree of redundancy – that is, the same functions at the particular level can be carried out by different components; and if one component fails, the system is only slightly affected since other components could perform the functions to some extent. Systems that lack systematic hierarchical organisation tend to lack this degree of flexibility, and a system as complex as the human brain must have a strong hierarchical organisation, or it would not have been able to evolve into such a complex organ.HierarchyOfTheCentralNervousSystemTheHumanLimbicSystem.jpgUsing the Limbic system [diagram above] as an example of each level’s specialisation, it is possible to understand how it is responsible for a particular set of functions related but also separate from other parts of the brain. The Limbic system is essential in allowing the human organism to relate to its environment based on current needs and the present situation with experience gathered. This very intriguing part of the brain may in fact be the source of – what many might call – “Humanity” in man as it is responsible for the detection and subsequent expression of emotional responses. One of its parts, the amygdala is implicated in the detection of fearful or threatening stimuli, while parts of the cingulate gyrus are involved in the detection of emotional and cognitive conflicts. Another part, the hippocampus is of major importance in learning and memory; it lies buried in the temporal lobes of each hemisphere along with the amygdala. 
Other structures of the limbic system are only visible from the ventral surface [underside] of the brain: the mamillary bodies are two small round protrusions that have traditionally been implicated in memory (Dusoir et al., 1990), while the olfactory bulbs lie under the surface of the frontal lobes, their connections to the limbic system underscoring the importance of smell for detecting environmentally salient stimuli (e.g. food, animals) and its influence on mood and memory.

One of the main insights of Simon’s analysis is that scientists should be thankful to nature for the existence of hierarchies, since they make the task of understanding mechanisms easier: one can focus on a single level, with its own laws and principles, rather than trying to understand a phenomenon in all its complexity. To a first approximation, the details of what happens at lower levels can be averaged out, while what happens at higher levels can be treated as approximately constant.
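Simon’s stability-through-redundancy argument can be made concrete with a small simulation (a hypothetical toy model of my own construction, not taken from the essay): each level of a hierarchy contains several components capable of performing the same function, and the system as a whole survives only if every level retains at least one working component.

```python
import random

def level_works(n_components: int, p_fail: float, rng: random.Random) -> bool:
    """A level still functions if any one of its redundant components works."""
    return any(rng.random() > p_fail for _ in range(n_components))

def system_survives(levels: int, redundancy: int, p_fail: float,
                    seed: int = 0, trials: int = 10_000) -> float:
    """Monte Carlo estimate of the probability that every level functions."""
    rng = random.Random(seed)
    ok = sum(
        all(level_works(redundancy, p_fail, rng) for _ in range(levels))
        for _ in range(trials)
    )
    return ok / trials

# With a 20% component failure rate, a 5-level system with no redundancy
# fails most of the time, while three redundant components per level make
# survival near-certain.
print(system_survives(levels=5, redundancy=1, p_fail=0.2))  # roughly 0.33
print(system_survives(levels=5, redundancy=3, p_fail=0.2))  # roughly 0.96
```

The analytic values agree: with one component per level the survival probability is 0.8⁵ ≈ 0.33, whereas with threefold redundancy each level fails with probability only 0.2³ = 0.008, giving 0.992⁵ ≈ 0.96. This is the sense in which redundancy buys the stability Simon sees as a precondition for the evolution of complex systems.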


Naturalist, David Attenborough / Image: Darwin & the tree of life (2009)

For a popular example, consider Charles Darwin formulating his theory of evolution. The structure of DNA [which would not be discovered until nearly a century later] was not a concern of his, nor did he have to consider how the Earth came to exist. Instead, Darwin focused on an intermediate level in the hierarchy of natural phenomena: how species (primates, birds, insects, and so on) evolve over time. The example also illustrates a vital point of this analysis: the processes at the level we are interested in can be understood by analysing the constraints provided by the levels below and above. What happens at low levels (e.g. the biochemical level) and at high levels (e.g. the cosmological level) limits how any species evolves; if the biochemistry of life had been disrupted, or if our planet had not provided the environmental conditions for life to flourish, evolution would simply not have happened. And as science progresses and shatters outdated ways of looking at life and nature on planet Earth, links are being made between these different levels of explanation.

It is now firmly accepted, from evidence gathered in biopsychology (a branch of neuroscience), that the acquisition of skills depends on an organism’s ability to learn and develop throughout its lifetime, and that DNA is the biochemical-level carrier of the hereditary traits Charles Darwin postulated. Human evolution is thus a continuous, multifaceted, complex, creative and ongoing process; and intelligent design [e.g. psychological, educational, linguistic, biological, genetic, philosophical, environmental, dietary] is an undeniably important factor in the intelligent evolution of human societies.



  1. Azevedo, F.A.C., Carvalho, L.R.B., Grinberg, L.T., Farfel, J.M., Ferretti, R.E.L., Leite, R.E.P., Jacob Filho, W., Lent, R. & Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513, 532-541.
  2. Dusoir, H., Kapur, N., Byrnes, D. P., McKinstry, S., & Hoare, R. D. (1990). The role of diencephalic pathology in human memory disorder: evidence from a penetrating paranasal brain injury. Brain, 113, 1695-1706.
  3. Gobet, F., Chassy, P. & Bilalic, M. (2011). Foundations of cognitive psychology (1st edn). New York: McGraw-Hill Higher Education.
  4. Hebb, D. O. (1949). The organization of behavior. New York: Wiley and Sons.
  5. Pinel, J. (2014). Biopsychology (8th edn). Harlow: Pearson.
  6. Simon, H. A. (1996). The sciences of the artificial (3rd edn). Cambridge, MA: The MIT Press.
  7. Sur, M. & Leamey, C. A. (2001). Development and plasticity of cortical areas and networks. Nature Reviews Neuroscience, 2, 251-262.

Updated: 09.04.2017 | Danny J. D’Purb |

