Journal

Learning how to ask 學習提問

In my experience, Asian students are not used to asking questions. This is likely because many are never encouraged or even allowed to ask questions in a variety of settings, including classrooms. On the other hand, Western students likely ask more questions. But honestly, not all questions are helpful. Perhaps this book sheds some light on how to ask good questions, or what good questions are.

A curious note is that the word for question 問題 also means problem in modern Chinese. So a “person with questions” also means a “person with problems.” To a lesser extent, such usage is also found in English, e.g., a “questionable person.” But conflating questions with problems may be one of the reasons why Asians do not like to openly ask questions, which are perceived to be problematic and confrontational.

This conflation did not seem to exist in classical Chinese. I wonder if it is a transfer from the Sino-Japanese “mondai” 問題 in the early 20th century.

https://book.douban.com/subject/37154260/

Did DeepSeek plagiarise ChatGPT?

Certain media outlets give the impression of an accusation: DeepSeek, like certain Chinese goods, is a cheap imitation that violates intellectual property rights, threatening the US economy and security.

But such reports in fact miss the point and are misleading. Of course DeepSeek learned from ChatGPT; as a new product, it learned from many other older models as well. The point is that it performs better and more efficiently, with a different training approach.

What is quirky about DeepSeek is its Chinese censorship. As many have observed, DeepSeek churns out replies, and one can watch sensitive answers being deleted or redacted in real time!

There is in fact no hard evidence of plagiarism, since the data sources for all LLMs ultimately belong to everyone on the internet. But plagiarism of code does happen. The case of Llama3-V, developed by a small team of Stanford students who stole the code of Tsinghua’s MiniCPM in June 2024, surprisingly went unnoticed and unreported in Western media. The creators, Aksh Garg and Siddharth Sharma, shocked the open-source community with Llama3-V, a powerful multimodal AI, built on an even more shocking budget: $500! Garg and Sharma initially denied the allegation, but in the end apologised publicly. How were they busted? Their model could analyse ancient Chinese bamboo-slip texts, a function uniquely developed by the Chinese team, and produced the same answers and even the same mistakes. They blamed it on a certain Mustafa who wrote the code and has since disappeared. Christopher Manning, Director of the Stanford AI Lab, posted on X condemning the plagiarism, yet rather downplayed the case as something that “seems done by a few undergrads” and claimed he knows “nothing”…

DeepSeek is just a private Chinese AI company, one of many. Somehow the media portrayed it as if it were a “Chinese” scheme with sinister motives.

Come to think of it: printing, the compass, and gunpowder were all first invented in China. As Needham pointed out, Europeans never realised this, and at least since Francis Bacon’s Novum organum (1620) most people thought they had been invented in the West. The Protestant missionaries (and possibly the Jesuits as well) were still teaching this to the Chinese in the nineteenth century, and some even believed it (as in the case of Shaou Tĭh in my earlier post). Strangely, the Chinese never accused Europeans of stealing their technology. Believe it or not, Chinese who know their history should be glad to see how ideas developed by their ancestors spread and contributed to the progress of human societies and civilisation in general.

ChatGPT: Boon or bane?

In the past few years, we have seen how our education system has come to terms with AI and ChatGPT. Students from primary level to graduate school, and even teachers themselves, have reached a consensus that, to put it simply, resistance is futile and we may as well embrace the change. My mailbox is delightfully filled with tips on how these new tools can do the “heavy lifting” and leave us more time for creative thinking and value-added work. During the pandemic, I experimented, or simply played, with ChatGPT. Amusing as the results are, I wonder what harm they may bring as AI gradually encroaches on our daily life, from AI customer service to algorithm-generated feeds. The temptation to use ChatGPT for work is great, from translation to generating PowerPoint presentations. I have seen the Chinese translations of the abstracts in the proceedings of an international conference done entirely by ChatGPT and properly acknowledged as such. AI-generated word salads are certainly not the best, but they may be better than nothing, or than something done by an incompetent human being.

According to some, tools like ChatGPT make people intellectually dull. More alarmingly, research suggests that there is now a generation of anxious young people more skilled at persuasive slogans than at genuine understanding and empathy, and that these technological innovations are among the main causes. Chomsky continues to warn us how our media culture promotes anti-intellectualism, and about the sinister nature of social-media filter bubbles, soundbites, influencers, and algorithms. ChatGPT reinforces all these negative tendencies, as Chomsky has pointed out. In a MasterClass video, Chomsky commented:

“[GPT] has almost no intellectual interest, doesn’t teach you anything about understanding, cognition, intelligence. I think it can be a tremendous tool of defamation and distortion that can be used destructively very easily, and will be. We can be sure of that. I don’t see any way to protect against it.”

College essays may be unsalvageable as a result of the rampant use of ChatGPT. But as Zaretsky observed, writing in our “post-literate” world has been on a declining trajectory for decades, and the debate over the harm of new technology is as old as Plato. In my few years in Hong Kong, a city supposedly bilingual and multicultural, I have been astonished that the majority of the population cannot write properly in either Chinese or English. By “properly” I mean by the standards of the society itself: education, the media, and professional standards in the business world. The way American professors deplore the “dwindling number of students who can write a declarative sentence” applies equally to the Chinese in Hong Kong, who can write in neither English (supposedly the professional working language) nor Chinese (supposedly their native language). The reasons are manifold and entangled, complicated by the role of Cantonese, which is in fact the mother tongue of the majority of the population. I said “complicated,” not “exacerbated,” because I strongly believe that multilingualism can be achieved and is immensely useful. The IB school I worked in is one of the many success stories. There are schoolchildren who are fully trilingual in Cantonese, Putonghua, and English by the age of six or seven, while there are those who are competent in none even in adulthood. The disparity is astonishing, and the authorities in Hong Kong are utterly helpless.

But I digress. When it comes to writing, the means can be as important as the goal, if it is not the goal itself. In other words, the purpose of writing can be heuristic, as is evident in journal writing like what I am doing now. I am thinking as I write, and the reason it communicates with the reader is precisely that the writer is thinking and attempting to gain a deeper understanding of the subject. Whether it is in the style of a soliloquy or an imagined dialogue with you, the reader, this element of understanding cannot be absent. Working with people who do not think, or make no effort to think or understand, can be a painful experience. Reading words that are superficially correct or even eloquent but lack genuine understanding is just as torturous. ChatGPT, as Chomsky predicted, may easily turn out to be a nail in the coffin in an era of alarming growth in anti-intellectualism, fake news, and shameless deceit.

All doom and gloom? Maybe not entirely. In Plato’s Phaedrus, King Thamus warned that the gift of writing would damage our ability to memorise and increase forgetfulness. It is true that over the millennia writing led to the demise of oral culture, but literacy emerged and catapulted every civilisation that embraced it to a whole new level. Of course, this comes with certain preconditions. The main one is that writing must be accompanied, or even preceded, by thinking. In this regard, ChatGPT is extremely worrying. Unlike writing, which is after all a human activity, the output of ChatGPT is completely non-human, feeding on past human output. Writing, however dumb the writer is (sometimes I imagine myself to be one on a therapeutic journey!), requires one to put ideas together into intelligible and aesthetically pleasing words and to transfer them into written form through fine motor skills, a mentally and neurologically complicated task that could not be described as anything less than human genius. Working with ChatGPT is surprisingly facile: just punch in keywords strung together with minimal grammar and click the return key. As I type these words, I also wonder how much calligraphic skill I have lost with the invention and widespread adoption of the personal computer. There will always be inevitable and irrevocable losses every time a society undergoes a technological shift. Will there be an enlightened culture that somehow embraces AI and ChatGPT? I cannot entirely preclude the possibility, but at the same time, I cannot yet quite see it.

As an exercise, I will write in Chinese on the same topic and see whether the content differs. The title: ChatGPT 弊多利少,荼毒心靈與智慧? (roughly, “ChatGPT: more harm than good, poisoning the mind and intellect?”)