AI Philosophy: Intersubjective Symbiosism

Everything Intersubjective Symbiosism (凡事交互主体共生)

Published: 2024/05/01

Archer Hong Qian (钱宏)

 

 

Given the rapid development of AGI (artificial general intelligence), the uncertainty surrounding it, and the many challenges it raises, industry professionals and the international community need to convene a “new Dartmouth Conference” to address the urgent philosophical question of our era: how humans and AI can coexist in intersubjective symbiosis.

——the Symbiosism Culture Think-Tank Foundation in Canada

 

 

 

It is well known that the philosopher Martha C. Nussbaum advocates an AI ethics of respecting individual dignity, promoting human well-being and social justice, and supporting marginalized groups. To that end, the development of AI requires interdisciplinary collaboration, bringing philosophers, ethicists, jurists, social scientists, technical experts, and policymakers into dialogue, so that the uncertainty and dangers posed by AI can be kept in hand and AI can benefit all of humanity!

 

Martha C. Nussbaum’s call to ‘live and let live’ undoubtedly has phenomenal and indispensable significance. However, to form a directional, actionable new understanding of, and broad consensus on, how to respond to the rapid development of AI, we urgently need a consistent breakthrough in philosophical concepts (ways of thinking and value orientations); otherwise we will only press down one problem and watch another float up.

 

This is because the issues of certainty and uncertainty in moving from artificial intelligence (AI) to artificial consciousness (AC), or to artificial general intelligence (AGI), and the crisis they imply, ultimately stem from human problems or crises: the contemporary philosophical problem, or crisis, of ‘know thyself.’

 

Of course, I am not an AI design expert, but as a philosophical thinker, I happen to have an inexplicable affinity for AI philosophy.

 

In 1984, I began taking part in the exploration of the science of thinking championed by Qian Xuesen, focusing on the ‘internal mechanisms of theoretical thinking’ and on the possibility of an ‘artificial fruit of wisdom and tree of life’ (related articles appeared in the 8th issue of Philosophical Trends, 1985). It was then that I first encountered neuroscience and pattern recognition, the technical foundations of AI’s biomimetic simulation of logical thinking plus neural networks.

 

In the summer of 1985, I traveled from Harbin to Beijing for the Second National Symposium on the Science of Thinking and met several exchange students from the Beijing Institute of Technology and Tsinghua University who were about to study at the University of California, Berkeley. We spent an evening discussing the future of AI:

 

Can humans create robots with the same physical, mental, and spiritual automation as humans – equivalent to today’s embodied artificial intelligence?

 

Regarding this question, these exchange students, who were about to step onto the frontier of AI, proposed various possible simulation scenarios from a technical perspective, while I offered assessments and prospects from a philosophical one. The results were as follows:

 

First, it was held to be impossible, because everything humanity has created so far is functionally biomimetic, built by layered, energy-concentrating shaping and simulation. Even cloning, holographic imaging, and 3D printing remain far from the real thing; in fact, humans cannot create a single viable grape seed or tomato.

 

Second, under certain conditions it is entirely possible, but unnecessary, because it would be a long detour: having children is enough. Moreover, from a creative standpoint, children born naturally are full of uncertainty and contain infinite possibilities of unfolding toward perfection (forgive my view: every newborn child may be the teacher of culturally brainwashed adults, opening a new hope for the world).

 

Third, if it is both possible and necessary (after all, the current version of humans has many flaws, such as dependence on constant oxygen, nutrition, temperature, and pressure), then these new-version AI humans must not appear merely as objects (Object), tools, or slaves with a human-like structural and functional form (Embody); like the current version of humans, ‘the heart of heaven and earth,’ they must exist as conscious, sound body-mind-spirit subjects (Subject).

 

In 1950, Alan Turing proposed a benchmark for judging machine consciousness, the “Turing Test.” Three roles are set up: a human, a machine, and a human “interrogator,” who is physically separated from the other two. The interrogator asks questions and tries to distinguish the machine from the human based solely on their text responses (to avoid interference from spoken answers). If a machine can converse with a human well enough that the interrogator has difficulty telling which is which, the machine is considered to have intelligent consciousness. Yet from Turing’s day to the present, no machine has passed such a test.
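To make the protocol just described concrete, here is a minimal sketch. The `machine_respond` and `human_respond` functions and the judge are hypothetical placeholders, not real systems or APIs; only the structure of the test (anonymous text-only exchanges followed by a blind guess) is the point.

```python
# Minimal sketch of the Turing Test protocol described above.
# All respondents and the judge are hypothetical placeholders.
import random

def machine_respond(question: str) -> str:
    # Placeholder: a real candidate system would generate this reply.
    return "I would rather not say."

def human_respond(question: str) -> str:
    # Placeholder: a human confederate types this reply.
    return "It depends on the weather, honestly."

def turing_test(questions, interrogator_guess) -> bool:
    """Return True if the interrogator fails to identify the machine."""
    # Hide the identities behind the anonymous labels A and B.
    players = {"A": machine_respond, "B": human_respond}
    if random.random() < 0.5:
        players = {"A": human_respond, "B": machine_respond}

    # Text-only transcripts, so nothing but the replies can give a clue.
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in players.items()}

    guess = interrogator_guess(transcript)   # judge returns "A" or "B"
    truth = "A" if players["A"] is machine_respond else "B"
    return guess != truth                    # passed if the judge is wrong

# Example: a judge who always guesses "A" is right only half the time.
# passed = turing_test(["What do you dream about?"], lambda t: "A")
```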

 

The Turing Test provides a simple standard for determining whether a machine possesses intelligent consciousness and has helped shape the philosophical orientation of AI.

 

In 1955, John McCarthy, an assistant professor of mathematics at Dartmouth College, coined the term “artificial intelligence” (AI) to cover the possible manifestations of machine intelligence that scientists around the world were considering, such as neural networks and natural language. In the summer of 1956, McCarthy, together with Marvin Minsky, a junior fellow in mathematics and neurology at Harvard University, Nathaniel Rochester, head of information research at IBM, and Claude Shannon, the Bell Labs mathematician who founded information theory, convened a group of top researchers at Dartmouth College to discuss many potential areas of AI research, including learning and search, vision, reasoning, language and cognition, games (especially chess), and human-computer interaction (such as personal robots). This gathering, known as the Dartmouth Summer Research Project on Artificial Intelligence, gave birth to the AI revolution now known to everyone on Earth. The seven topics set at the time (automatic computers, how to program a computer to use language, neural networks, the theory of the size of a calculation, self-improvement, abstractions, randomness and creativity) have influenced every aspect of AI development to this day.

 

 

In the seventy years since, the three major schools of AI, symbolism, connectionism, and behaviorism, have taken turns demonstrating their strengths, each reflecting a philosophical orientation of AI. Symbolism holds that artificial intelligence originates in mathematical logic, expressing clear, interpretable forms of thought through logical symbols; it is applied mainly to natural language processing and to knowledge representation and reasoning, but it is limited in handling fuzzy and uncertain problems. Connectionism simulates the brain’s ability to process information by taking the connections between neurons as the basis of artificial neural networks; it is applied mainly to image and speech recognition, but training such networks demands large amounts of time and computing resources, and the results lack interpretability. Behaviorism emphasizes application and physical embodiment, handling real-time environmental information; it holds that cybernetics and perception-action control systems are the key to artificial intelligence, and it is applied mainly to robots and autonomous control systems, but it requires large amounts of data and computation.
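As a rough illustration of how differently the three schools represent “intelligence,” here is a deliberately toy-sized sketch; the rules, weights, and control loop are invented for illustration and stand in for no real system.

```python
# Toy contrasts of the three schools described above (illustrative only).

# Symbolism: knowledge as explicit, interpretable rules over symbols.
rules = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

def symbolic_infer(entity: str, prop: str, default=None):
    return rules.get((entity, prop), default)

# Connectionism: knowledge stored implicitly in weighted connections.
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0          # a single threshold unit

# Behaviorism: a perception-action loop coupled to an environment.
def control_loop(sense, act, steps=10):
    for _ in range(steps):
        observation = sense()              # perceive in real time
        act(observation)                   # react to what was perceived
```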

 

In fields such as deep learning, reinforcement learning, and natural language processing, AI will continue to evolve by layering and combining the three schools. Along this line of thinking, however, it will remain difficult for artificial intelligence to pass the Turing Test in the foreseeable future. As I said earlier, the issues of certainty and uncertainty in moving from artificial intelligence (AI) to artificial consciousness (AC), or to artificial general intelligence (AGI), and the crisis they imply, are ultimately human problems or crises: the contemporary philosophical problem, or crisis, of ‘know thyself.’ The future of AI urgently needs a new philosophical orientation.

 

So it seems the problem of AI or AGI has returned from science, technology, the humanities, and religion to the most basic philosophical question: what is consciousness? This leads to questions about the origin of consciousness, and even the origins of life and the universe, and the modes of existence of so-called dark matter, dark energy, and black holes.

 

This is because consciousness, life, and the universe exist neither as single individual entities (particles, cells, organs, systems) nor as the objects (Object) of some subject (Subject) such as a god, a savior, or the supremely holy. Consciousness, life, and the universe are self-sufficient yet not self-contained, endowed with the vital force of self-organization and the balancing (harmonizing) force of external connection. Therefore:

 

Consciousness can only exist, philosophically, in the spacetime of intersubjective symbiosis.

 

Therefore, everything exists in intersubjective symbiosis: the physical, physiological, psychological, ethical, and rational; you, I, and the whole ecology; the holographic states of matter, energy, and information; data, algorithms, and computing power; language, mind, will, vision, imagination, consciousness, mission, and covenant. In English, this is called Everything Intersubjective Symbiosism.

 

Since everything exists in intersubjective symbiosis, looking back over the seventy-year course of AI-AGI development, the future evolutionary path of AGI, or AI-AC, may look like this:

 

From the Turing benchmark, through symbolism, connectionism, and behaviorism, toward a symbiosism of living self-organization and externally connected harmony (equilibrium)…

 

If so, the road to artificial general intelligence is long and winding, and those who love AI still need to search humbly. It is therefore too early to be either optimistic or pessimistic about the development of AI.

 

Perhaps what AI enthusiasts interested in AI philosophy most urgently need now is a “Dartmouth Conference” (1956) style in-depth dialogue on the certainty and uncertainty of AI’s future. It is truly necessary!

 

The first aim is to avoid cognitive bias about AI and to keep human will from being dragged along by data, algorithms, computing power, and energy consumption, unable to develop autonomously and freely. Participants would focus on a question that has almost become a formula: does programming based on logical thinking, plus algorithms, plus the computing power of neural-network-based AI, equal wisdom?
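To make the “equation” in that question concrete, here is a deliberately naive sketch; the factors, the numbers, and the very idea of multiplying them are illustrative assumptions, not a real model.

```python
# A deliberately naive sketch of the "multiplier" view questioned above.
# The factors and numbers are illustrative assumptions, not a real model.

def multiplier_view(data_tokens: float, algorithmic_gain: float,
                    compute_flops: float) -> float:
    """Capability scored as a simple product of data, algorithms, and compute."""
    return data_tokens * algorithmic_gain * compute_flops

capability = multiplier_view(data_tokens=1e13,
                             algorithmic_gain=10.0,
                             compute_flops=1e25)
# Whatever this number means, it measures scale and efficiency;
# nothing in the product says anything about wisdom or consciousness.
print(f"capability score: {capability:.2e}")
```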

 

If we look only at the efficiency generated by the “multiplier effect,” this does seem to be a deterministic equation, and that equation is the basis of the widespread optimism among AI innovators and users. But have you noticed that this determinism of “data, algorithms, and computing power,” even if it can quickly yield AGI, would still not be much fun? Why? Because this determinism has at least three major problems:

 

First, as the AI giants all know, compute-intensive AI consumes huge amounts of energy, adding to the already unbearable burden on the Earth’s ecology and exacerbating ecological competition and conflict.

 

Second, conflict arises even more from the way such systems may reinforce the self-centered functions of corporations, governments, empires, and other trust-like platforms, as well as of ambitious villains (ever heard of “tools of digital authoritarianism”?), entrenching capital monopolies and privileged manipulation and directly threatening the survival, livelihoods, and even lives of different individuals, groups, and ethnicities.

 

Third, it completely ignores, and severs, the intrinsic value that humans, as beings of flesh and blood, hold for wisdom. This not only blocks the channel by which artificial intelligence (Artificial Intelligence) could be upgraded to artificial consciousness (Artificial Consciousness), but also mistakes “embodied general artificial intelligence (Embody AI-AC)” for a simple combination of bodily hardware and mental software. The absurdity goes so far that even Max Tegmark, the renowned tenured MIT professor who founded the Future of Life Institute to avert the existential risks posed by advanced artificial intelligence, believes that the future form of the human-machine hybrid (cyborg), what he calls Life 3.0, could dispense with the flesh-and-blood body and be given a newly designed shell, the better to transmit and share thoughts!

 

In the context of intersubjective symbiosis, self-discipline is the mother of freedom. In the spirit of “only extreme self-discipline yields extreme freedom,” we advocate an early, broad philosophical dialogue leading to an “AI Charter” that includes content similar to Asimov’s Laws of Robotics (1, 2, 3, 0). The Charter would make one requirement explicit. Suppose the vision of Richard S. Sutton, the father of reinforcement learning and a professor at the University of Alberta, Canada, is realized: AI searches and discovers on its own rather than relying on human-fed knowledge, and there emerges, distinct from “tool AI (Tool AI),” an “agent AI (Agent AI)” with the integrated capacity to handle an agent’s interaction with its environment on its own (for example, the policy and value functions of reinforcement learning) and to learn, plan, and execute (for example, the Dyna architecture). Even then, we should not view AGI, as Sutton does, as a new kind of “successor to humanity” that replaces humans; rather, we should regard Agent AI as a subject (Subject) in the same sense as humans, not simply as an object (Object).
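Because this passage names concrete components of Sutton’s framework (a policy, a value function, and the Dyna architecture’s interleaving of learning and planning), a minimal tabular Dyna-Q sketch may help make “Agent AI” concrete. The tiny corridor environment, reward, and hyperparameters below are illustrative assumptions, not any system discussed in the letter.

```python
# Minimal tabular Dyna-Q sketch: learning from real experience plus
# planning with a learned model, in the spirit of the Dyna architecture.
# The corridor environment and all hyperparameters are illustrative.
import random
from collections import defaultdict

N_STATES, ACTIONS = 5, [-1, +1]                  # corridor 0..4, move left/right
ALPHA, GAMMA, EPSILON, PLAN_STEPS = 0.1, 0.95, 0.1, 10

Q = defaultdict(float)                           # value function Q[(state, action)]
model = {}                                       # learned model: (s, a) -> (r, s')

def policy(s):                                   # epsilon-greedy policy over Q
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def step(s, a):                                  # environment: reward 1 at the right end
    s2 = min(max(s + a, 0), N_STATES - 1)
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

def q_update(s, a, r, s2):                       # one Q-learning backup
    best_next = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = policy(s)
        r, s2 = step(s, a)
        q_update(s, a, r, s2)                    # learning from real experience
        model[(s, a)] = (r, s2)                  # update the learned model
        for _ in range(PLAN_STEPS):              # planning: replay simulated experience
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            q_update(ps, pa, pr, ps2)
        s = s2
```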

 

That is, the relationship between AI-AC and humans would be brought into the three great relationships of humans with nature, humans with other humans, and humans with themselves (body, mind, and spirit), and confirmed and defined through “Beauty-Pursuing Symbiosism,” a norm of interactive behavior.

 

Based on this, we recently sent an open letter to six giants of AI: Elon Musk, Sam Altman, Fei-Fei Li, Sundar Pichai, Mark Zuckerberg, and Jensen Huang.

 

 

Now I want to make this letter public and send it as well to Martha C. Nussbaum, the Ernst Freund Distinguished Service Professor of Law and Ethics in the Department of Philosophy and the Law School of the University of Chicago and a Fellow of the American Academy of Arts and Sciences, the British Academy, and the Finnish Academy of Science and Letters; to Richard S. Sutton of the University of Alberta, Canada; to Geoffrey Hinton, who left his senior role at Google and returned to the University of Toronto; to Max Tegmark, author of Life 3.0 and tenured professor at MIT; and especially to the British mathematical physicist and philosopher Roger Penrose!

 

This open letter is an invitation to the “New Dartmouth Conference”.

 

Archer Hong Qian

Founder of Symbionomics

April 18, 2024, in Vancouver

+1 (604) 6906088; hongguanworld@gmail.com

 

 

 



1 Comment

  • Archer
    Archer replies

    Mr. Qian:
    We have been out of touch for a long time. If you have a recent major work, I would be honored to read it.
    If we use the Chinese theory of yin and yang to analyze the development of AI, can we conclude that AI will not destroy humanity? After all, the transformation of yin and yang always reaches a balance in the end. For example, World War I, World War II, the Cold War, and the current Russia-Ukraine and Israel-Palestine wars all remain within the grasp of yin-yang theory.
    Yin-yang theory applies to the natural world and to humanity alike. As I see it, it is China's grand unified theory.
    What is your view on this, Mr. Qian?
    Your old friend, Wang Wenguang
    May 1, 2024, Beijing

    Human modes of thinking form a progression, from the Axial Age to the Age of Symbiosis:

    Magical thinking: broad, all-encompassing probabilities, always "correct" yet hard to put into practice.

    Scientific or systems thinking: small, precise probabilities, practical, but beset by problems of information source, channel, and outcome.

    Symbiotic thinking: life (cognition and aspiration in action) self-organizes, interconnects, and harmonizes (dynamic balance) in holographic symbiosis.

    Your guidance is welcome, Mr. Wang.

    May 2, 2024, 12:01 a.m.
