Award-winning author's AI use revelation roils Japan's literary world
TOKYO — © KYODO
The requested article has expired and is no longer available. Any related articles and user comments are shown below.
48 Comments
dagon
Rie Kudan is being savvy in this case.
Hiro
Writing stories has never been easier. That is something to celebrate. You cannot stop progress; you can only adapt and embrace it, otherwise you will get left behind.
virusrex
Why would this be the case? If you read a book expecting to be entertained, to learn something or to experience a different point of view, and that goal is accomplished, why would people feel cheated? What if the author is a genius and wrote the whole thing in a single day? Would the people who expected it to be the product of weeks or even months of work be right to feel cheated?
Gaijinmunkey
I get the concern but it's a ridiculous argument if you ask me. People are going to use AI, even writers. As long as they are infusing any AI input with their own ideas and creativity then there is no problem. I think if someone published something that was entirely or mostly written by AI we would know something is off and it wouldn't be popular. It's not like AI art which blatantly rips off artists' work.
collegepark30349
This is an interesting ethical question. I think authors should have to cite any passages or content they used AI to write, especially if they copied the passages verbatim, revised them, or used an idea from AI that was not their own. If I copied a passage word for word, paraphrased it or took an idea from another source without attribution in one of my research papers, I'd be in some pretty hot water.
*The Hoshi Shinichi Award for sci-fi literature has set out detailed requirements for the use of AI-generated content in submissions, including prohibiting its inclusion as is, without significant additions or revisions to the generated text, as well as keeping records of the process, among other rules.*
I agree. We're not talking "inspiration" here - this painting, song, passage, work or author inspired me to write this / this way. We are basically talking about a new form of plagiarism: taking credit for someone (something?) else's work.
masterblaster
It shouldn't be tolerated. The use of AI should be a separate category for literary prizes. Whether it's five words, five paragraphs or five pages it isn't original work. When you write an article you have to cite references that aren't original. Same with a novel, if you use lyrics you need permission. If writing isn't original you have to say so.
virusrex
There is a huge difference here, not only because a novel is not a scientific paper, but because plagiarism would be “appropriation of another person’s ideas, processes, results, or words without giving appropriate credit”. Whose ideas or words are you appropriating in this context? Obviously this does not apply when you are "stealing" from nobody; getting an idea without an attributable source is terribly common, and there would not be any problem presenting it as such.
JRO
The use of AI in many fields is inevitable, but what needs to happen first is strict rules on machine learning. These companies would need to have a list of all materials used, and all data that has been used would need to be properly licensed. Before that, there is no question whether it's ethical or not; it's just a smart way to launder what you have stolen.
GBR48
I wouldn't use GAI at all. My style in my books is my own. Some readers may not like it, but that's OK. It's part of the deal. I always tell people that if they aren't enjoying one of my books they should just stop. Authors inevitably recycle stuff that they may have read, seen or heard, but it should be coming from their own creative process, not from software. Those who use GAI, as with those who use ghost writers, should include a statement that they have.
Cymbaline
Her candour should be admired. She could have kept quiet.
rainyday
To be honest, if this is all she was using it for then I don't really have a problem with it. But I do worry about the future as this technology develops; it risks driving us into some truly dystopian crap.
Well, on a simple level if the book says "XYZ" is the author and you expect to read a book written by XYZ but instead find it was written by AI then yeah, I think anyone would reasonably feel cheated by that.
Beyond the question of being cheated though is the question of how much we, as a society, should be comfortable with our creative, imaginative narrative space being taken over by machines that within a few years are going to be far smarter, far more capable of spewing stuff out in limitless quantities, and far more capable of shaping our collective culture than we mere humans are. The potential for our society to be destroyed by this stuff is staring us in the face and we really should be way more concerned about it than we seem to be.
HopeSpringsEternal
Just MORE efficient research, NO MORE $clickbait google search needed!
virusrex
That applies just the same if the author was using a pseudonym; feeling cheated by that is not exactly rational. We are not at the point where AI can write anything of even elementary quality, which is why people would expect a book that wins a contest to be written by someone with talent. The ability to coax AI into giving answers that fit the book, and to incorporate those answers into the novel without people noticing, is also part of that talent.
rainyday
Not at all. If you are a fan of XYZ then you are entitled to expect that a book purporting to be written by XYZ would actually be written by XYZ.
True, but we are likely to reach that point in the very near future, so it's worth discussing now.
I don't think most people's definition of "talent" in a writer includes the ability to hide from readers the fact that they are secretly getting AI to write parts of their work (any more than it would include the ability to hide the fact that they are having a ghostwriter do parts of their work, etc).
It's a bit like saying an athlete's ability to cheat on doping tests is a measure of their "talent" in their sport.
virusrex
That would apply only if the novel was promoted as not having a single detail written by AI; in this case it is undeniable that the author is still the person being recognized as such.
By using valid arguments that actually apply to the examples being discussed, not by trying to apply things that are by all means impossible right now (and maybe ever, looking at how the degradation of products by AI is already happening).
Why not? He managed to do that by making a good, entertaining novel that was recognized for its quality, not by making something so horribly bad that it would be at the same level as what AI can write right now.
Only if by "cheat" you mean using perfectly valid but not yet recognized ways to improve his performance without any extra negative effects on his health. The use of AI was not forbidden, it was not even considered during the evaluation process, and there is no evidence it gives any unfair advantage, so following your example it would be like an athlete "cheating" by maximizing the effect of his training with a perfect schedule made by AI.
rainyday
The presumption you are suggesting makes no sense as a rule. The governing presumption should be that works are written by their authors unless otherwise noted, not that works are written by AI unless otherwise noted.
It's debatable how much is actually possible right now. AI can't write a good novel entirely autonomously, but it's clear that it can contribute significant elements to one with an ever decreasing need for human involvement.
The specific example in this article involved the writer using AI to produce text that was going to be spoken by an AI character in her novel. This seems like a reasonable use of the technology to me, but I don't think it's a typical case of what we are going to see, even with the current technology.
And while there might be "degradation of products by AI" happening, I don't think it's even remotely realistic to think that development of this technology isn't going to advance much beyond its current level. I'm not an expert in the field, but the amounts being invested are massive and the paths forward seem relatively clear.
Yup. But her (not his) use of AI was for an extremely limited purpose and used in a way that made it easy to completely sequester and identify her contribution from AI's contribution. This is unlikely to be the case with other writers, even using current technology.
This makes no sense. The absence of a rule on something that didn't exist when the rules were made doesn't by itself render the use of that thing "perfectly valid" if it violates other rules. Looking around at what actual literary scholars are saying about the ethics of using AI to produce creative content, they certainly don't seem to share your opinion - passing off the work of others as your own has a long history of being against the rules in that field. None of them are comfortable with your position that we should just assume everything is written with AI unless specifically promised otherwise.
Whether it gives an "unfair" advantage or not is irrelevant (since anyone can use AI, it's not an issue of unequal access). If you let every athlete use steroids, none of them would necessarily be at an unfair advantage either. The question is whether one wants the rules of the game to allow it or not, and if so under what conditions. The rules on that are what is being debated right now. I don't know what those rules should be, but I do know that there are serious concerns, going way beyond mere unfair advantages being conferred on individual writers, that we need to be worried about in coming up with them.
virusrex
But that would be necessary for your example to work; there is no deceiving if there is no realistic expectation for something to apply. You are the one trying to argue that this expectation should be considered normal by default, not me. This work was written by the author, without doubt. It remains so even after his declarations.
It is not debatable that AI can't write a prize-winning novel; even short articles in the media are readily exposed as being of terrible quality, and it is not believable they can increase in quality to the degree of being worth recognition as good.
That it can contribute importantly is heavily dependent on the talent of the author, but that also applies to many other things; people have written fine works while constrained by topics, formats or even keywords, and that does not make the topic, format or keywords what made their works remarkable, but the talent in writing. This author's talent is evident in how he could use a limited input from AI in his novel without reducing its quality; it could be argued that even how he coaxed the AI into giving the answer is part of the talent.
Especially for artistic purposes this is a very strong limitation that has no solution in place. It is one thing for AI to get better at analyzing curated information that can only originate from humans, and a completely different one to produce artistic value when the pool of material it draws on is slowly but irreversibly coming to include things already produced by AI, a vicious circle of low-quality things being used to produce even lower-quality things.
Of course it does, precisely because to "violate the rules" something first needs to be defined as a violation of those rules. What makes no sense is to pretend the rules disqualify something that was not even considered in them in the first place. That would be like saying he cheated because he was wearing blue pants while writing and now you want to consider that against the rules. He did not violate any rules by his use of AI.
Which scholars have said the author cheated? I can't find anybody making this claim but you.
So which scholars have argued that you have to make authors obey rules you did not specify and only included after the prize was given?
Of course it is relevant, at least for your example. Anybody competing is expected to use anything that can give an advantage to win; that is the whole purpose of competing. What is not valid is to have unfair advantages. So if something is not unfair, by default it should be allowed.
That would apply if you were talking about things written AFTER a rule against the use of AI is written (which would still require arguing how AI is unfair first), but your argument is about an author who wrote a novel with this tool without any rule that would make it cheating, nor any argument where he is demonstrated to have an unfair advantage from using it. Once again, this is about your own example: by accepting that the rules allowed the tool by default you are also accepting that your example of cheating by doping is not valid; after all, there are countless drugs that offer no advantage and therefore can be used by athletes without any penalty.
rainyday
We seem to be talking in circles, but I'm going to push back on this because it mischaracterizes how rules are made in human society, which is my field.
Most rules (created by the legal system or informally) aren't defined so narrowly that they cannot accommodate technological, social and other changes that didn't exist at the time they were made. Formal rule-makers throughout history have known this to be a problem and thus prefer to draft rules as standards that can be adaptable rather than overly specific. Making new rules is difficult and requires lengthy processes. Interpreting existing rules to apply to new situations is much easier. When new technologies or social changes create new types of disputes, it's up to some arbiter (in legal disputes, the courts) to determine whether the existing rule applies to the new situation or whether a completely new rule is required.
AI in general is going to require a lot of completely new rules because so many of the issues it presents (not just with respect to its use by novelists) are going to be fundamentally incompatible with our existing rules that govern many things. But here it seems simple. The existing rule is:
"Authors must not pass off the work of others as their own." (or some similar variation).
You don't need a completely new rule to deal with that, you just need to know if "the work of others" includes the work of AI. That is a question of interpretation, not something that needs a change in the existing rules per se (i.e. the factual situation is capable of being fit within the existing framework).
So this isn't a question of retroactively trying to apply some new rule that someone has arbitrarily created post facto to behavior that was allowed at the time it occurred. Rather it's a question of whether the pre-existing rule applies to this new situation or not. There is nothing controversial about that; it's how most rules operate.
geronimo2006
On one hand, the integration of AI in the creative process raises questions about the role of human intuition, emotion, and experience in storytelling. Critics may argue that relying on algorithms and machine learning diminishes the essence of genuine human expression and the profound connection between authors and their readers. Additionally, concerns about the potential commodification of literature, where AI-generated content could be manipulated for commercial success, may emerge.
On the other hand, supporters of the author's AI use might see it as a groundbreaking experiment pushing the boundaries of literary innovation. AI has the capacity to analyze vast datasets, identify patterns, and generate unique narratives that could challenge conventional storytelling norms. This revelation could spark a broader conversation about the evolving nature of creativity and the ways in which technology can augment or redefine artistic processes.
Stephen Chin
Artificial Intelligence to WRITE a Novel?
I need a special bed, with a special mattress, pillow, Dutchwife and 6000 duvet to put my NATURAL intelligence to sleeeeeep.
Stephen Chin
6000 Canada Goose Duvet.......!
virusrex
Accommodating new things is perfectly fine, but applying the changes retroactively just because you feel like it is not. You are not talking about changing the rules for the next prize; you are accusing someone of cheating for not following a rule you would like to see implemented, and that is obviously not valid.
The worst part is that you are trying to skip completely the part where AI is demonstrated to be an unfair advantage in a competition and just impose it as if it were a fact, just to justify a rule (and a retroactive one) against it. You even tried to argue that it was irrelevant whether something is unfair for it to be addressed in the rules; that makes the flaw in your position very evident.
Which for anybody with common sense would be impossible; nobody can interpret AI as "others", especially with the clear precedent of AI not being recognized as an author. Forcing this as if it were an unaddressed problem, even with the many different articles explaining the legality, is trying to argue from ignorance about something well known.
Yes, it is. Trying to force AI into personhood is just a way to excuse doing it without having a real basis. AI is not someone you can steal things from. It is a tool and nothing more.
rainyday
You don't seem to be reading what I wrote; I already addressed this concern.
I'm not doing anything of the sort. I already said that I'm fine with what this specific author did, if you'd actually care to read what I've written before responding. What I am saying is that I don't know what the rule on this should be (unlike you, who are certain that you know). Nowhere did I say that this person was in breach of anything.
As I said, we are talking in circles here; you aren't actually addressing what I'm writing.
Sorry but this misconstrues the nature of the problem. About the legality aspect, YES, copying from AI is not illegal because AI is not capable of possessing copyright over the works created by it (which distinguishes it from copying from a human being's work). You are conflating two different things there however - copyright infringement and plagiarism. The two overlap but are not synonymous. Plagiarism by itself is not a legal concept, it's an ethical one that is applied in different professions (academia, journalism, the arts, etc) and generally one can't sue for it unless it's also a violation of copyright (or some other legal right such as in a contract).
One can however commit acts of plagiarism without violating copyright. Stealing ideas isn't a violation of copyright, for example, but is often considered plagiarism when it's done without attribution. It's an ethical violation rather than a legal one, and one which might breach the rules of given institutions (universities, publishing houses, etc) without violating copyright established by the legal system.
Universities, I should note, are applying this exact same rule against students who are submitting essays written wholly or in part by ChatGPT. Passing off work composed by AI as if it were your own is plagiarism, pure and simple. Look up any university policy on this and it's pretty uniform; some random examples:
Cambridge University:
https://www.plagiarism.admin.cam.ac.uk/what-academic-misconduct/artificial-intelligence
University of Edinburgh
https://www.ed.ac.uk/sites/default/files/atoms/files/universityguidanceforstudentsonworkingwithgenerativeai.pdf
All of these are based on the exact same rule: you cannot pass off the work of others as your own. It's not retroactive, it's just universities clarifying how conventional rules on plagiarism which have existed for generations are applied to this new technology.
Now there might be differences between writing novels and writing academic essays, particularly the latter, where discovery of the truth is the ultimate goal and the standards are probably higher. But I'm pretty sure that most publishing houses are going to frown on authors of novels submitting works created by AI with minimal input of their own.
This actually isn't just an ethical issue; we can circle back to the copyright issue I mentioned above, since it's a double-edged sword for authors. If AI-generated content cannot be copyrighted (which seems to be the direction most jurisdictions are heading in, and US courts have already ruled that this is the case), that doesn't just mean that authors who use AI aren't infringing copyright, it also means that the copyright status of their own works is going to be unclear at best, or possibly completely unprotected depending on how heavy their reliance is, something that publishing houses are going to be very concerned about.
In the example of the novel mentioned in the article, the portions of the text produced by AI - since they lack a human author - are likely not protected by copyright and thus any other author can freely copy and use them without being sued (they might be committing plagiarism in doing so, but not a copyright violation).
In other cases where the reliance on AI is heavier (say where the AI writes an entire novel by itself, then the human author merely edits it into something more readable, etc), the human author might not be able to claim any copyright at all over it. Publishing houses are not going to be happy with that, and I can't imagine any of them think your "anything goes, just assume it's all written by AI" approach makes any sense.
It should be obvious that I'm talking here about general concerns raised by this and not just the facts of this one case, so please read my comment in that light.
virusrex
Using an invalid excuse is not addressing an opposing argument.
I quoted your own comment, which tried to argue that something not being an unfair advantage is irrelevant to whether it is allowed or not. The article clearly explains that the use of AI was not even considered, and that completely solves the mystery of whether it was against the rules.
When you compared him with an athlete cheating by doping you did precisely that: you argued that he did something comparable, which obviously means being in breach of the rules.
Not at all; your whole argument is that this would be cheating based on the text being stolen from someone else, which is completely invalid the moment AI is not someone else from whom things can be stolen. Without this very important distinction your argument would be like saying an author using sentences randomly put together by throwing dice was in breach because he was using "the work of others".
Because the reason why there is no copyright nor plagiarism is the same: AI can't be considered a person, so according to your own argument the author can't be taking the work of anybody else. This is a perfectly valid argument that defeats yours. It is not that both things are the same; it is that both things become impossible when there is no person from whom the work is being stolen.
Your own sources contradict you: they call the use of AI academic misconduct, not plagiarism, and explicitly call the occurrence of plagiarism exclusively the "use of words and ideas from human authors without referencing them". AI is not a person from whom the text is being stolen; it is the tool used to steal the words or ideas.
This means that as long as you can't prove the author in this article copied the texts attributed to AI from other people, your whole argument falls flat. Using AI by itself is not plagiarism when there is nobody else from whom anything was stolen. There is no "others" in this case, only the author and the AI tool.
rainyday
As I've said about five times, I'm not trying to argue that the author of this article did anything wrong. Literally the very first thing I said in my very first comment on this article was:
I don't know how I could be any more clear about that. This is why I have said several times we are arguing in circles - you want to defend what this author did, but I'm not attacking what this author did so I don't see the point of you constantly framing your responses like that.
This isn't my argument at all. I have not said that, so please do not misrepresent what I am actually saying. Passing something that you did not create off as if it were your own is, I think, something that should not be allowed. This author did NOT do that; she has openly acknowledged that she used AI and identified the specific parts that she used it in. Her rationale for doing so makes perfect sense to me, and as I've said again and again I have no problem with it.
What I do have a concern about is the open question of how we define what is acceptable and what is not acceptable going forward. This is what I'm trying to discuss, to no avail.
You are framing this as a descriptive question of the rules (what the rules are) in this specific instance. The article doesn't actually tell us what the rules are (we can infer from the note that they will change the rules going forward that they didn't address this AI issue directly) but as I said I'm not making an argument about whether this specific person's entry constituted a violation of this specific contest's specific rules. Rather I'm trying to address the normative questions raised in the article about what the rules should be, not just in this one contest but for the use of AI in the arts in general.
And I was pretty clear about what I meant when I said giving an unfair advantage is "irrelevant". It's irrelevant because no unfair advantage exists - anyone can use AI, so I don't see there being any concerns that someone would be unfairly advantaged by it. If it did give someone an unfair advantage that would be a different story, but I don't see it. But the mere lack of an unfair advantage does not by itself suggest to me that AI poses no problems whatsoever. Rather I see those problems as manifesting in different forms unrelated to competitive advantage.
You are getting far too tangled up in semantics (specifically the meaning of "others") here, using a definition that I just typed in off the top of my head, as though it were the only authoritative definition in the world rather than just a rough approximation of a rule whose precise definition varies from institution to institution. Many use the term copying from another "source" which does not imply a human author, making your laser-like focus on that irrelevant.
Here for example is how Oxford University defines plagiarism:
Pretty crystal clear.
https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism#:~:text=Information%20about%20what%20plagiarism%20is,your%20work%20without%20full%20acknowledgement.
Cambridge, Harvard and others also make it very clear in their plagiarism policies that passing off the work composed by AI is not allowed:
https://extension.harvard.edu/registration-admissions/for-students/student-policies-conduct/academic-integrity/#:~:text=Plagiarism%20is%20the%20theft%20of,and%20are%20not%20properly%20cited.
https://www.hps.cam.ac.uk/students/plagiarism
Your point about the Cambridge source referring to it as "academic misconduct" is something of a red herring given that it defines plagiarism as a form of academic misconduct.
The point is that the purpose of rules on plagiarism, as part of larger policies on academic misconduct, is not just to protect the original (human) authors from having their works stolen; it's also to impose ethical obligations on authors themselves to honestly present only their own work as their own, as an end in itself. No university on the planet that I know of has policies that deviate from this.
virusrex
But you still used a clear example of cheating and going against the rules to explain the situation you think the author is in. Why else use that example for the use of AI while writing? Not to mention you are still trying to argue this is plagiarism even though your own sources contradicted that position.
It was, until it was proved invalid; you yourself wrote
As proved, there is no "other" from whom the work is being stolen, which means this is not against the rules as you claimed.
And therefore your whole criticism of the event in this article is invalid.
Because that is what I argued, and it is against that argument that you tried to use this invalid one. Contrary to what you claim now, you did not write your comment quoting anything in the article, only what I commented; that means you are also replying to this framing.
You repeatedly based your whole argument on the use of AI being plagiarism of the words or ideas of "others", even when from the very beginning there is no other. Why else repeatedly use this argument?
But it proves there is a need for this unfair advantage in order to justify making it against the rules, because the other reason (presenting the work of others) does not exist.
You completely misrepresented your sources as if they said the use of AI was plagiarism; it was easy to prove it is not. That is not semantics but proving your own sources do not support your point; they even contradict it. You just proved that AI is not to be considered someone you can steal ideas from.
Which again do not claim in any way that AI is someone from whom ideas can be stolen, which was your original point before you began to move the goalposts.
Not at all, because plagiarism is not the ONLY form of academic misconduct; that means your sources do not make the false equivalence, which comes completely from your misrepresentation. Collusion or fabrication is also a form of academic misconduct, and that of course does not mean that anything qualified as misconduct automatically becomes collusion or fabrication (nor plagiarism), as you tried to misrepresent.
rainyday
I honestly thought we could engage in a debate about what the boundaries of the use of AI in the arts should be, but you seem obsessed with treating this as a zero-sum, points-based scoring exercise, which seems unproductive.
It's true that the Cambridge source I cited does not specifically say "using AI is plagiarism", fair enough, but you are really splitting hairs here. It's a recognized form of academic misconduct that - yes - Cambridge's description is vague about in terms of which form of academic misconduct it falls under, but it's hard to infer what other category it would fit into (collusion and fabrication seem quite different), and the same site also refers specifically to AI on its page about plagiarism.
And the Oxford definition I cited doesn't suffer from any such ambiguity; it's crystal clear that the use of AI constitutes plagiarism for them. What is your response to that?
As I said repeatedly, I never claimed she did anything that violated any rules. My comments from start to finish are not about her specific case, and any suggestion otherwise is due to either poor phrasing on my part or misinterpretation on yours. You can accept that or reject that; if you think I'm lying to you about it there is no point in further discussion.
I’m not moving goal posts, I’m not trying to “win” this debate, I’m trying to engage in constructive dialogue about a topic of interest to me. You are correct, there is nothing to “steal” when an idea has no owner. Point acknowledged, I’ve incorporated the insight you provided into my world view and moved on. Thank you for that.
Moving on, and just to clarify, is it your position that copying from non-human sources like AI is ethically acceptable in all circumstances? Or are there limits? I'm clear about what your position on the use of AI in this one case mentioned in the article is, but I'm wondering what the overall principles are that you think should apply in general.
virusrex
I am only arguing against the misrepresentations and false arguments you are using; I cannot be made responsible for you willingly choosing to use them in order to discuss.
So it is not a red herring to clarify that your claim that it did was false; you are now contradicting your previous comment. This is not splitting hairs, you made a claim about your references that was false. You would be as correct as saying using AI is collusion, which obviously is not the case either.
Your previous references do not make that distinction, and since your argument depends on this being an automatic equivalence without exception, that means your argument is still defeated. My position only requires, to be proven, examples where the use of AI is declared against the rules for reasons other than it being a "person" from whom content is stolen, and you yourself provided those examples.
Yet you used a very clear, unambiguous example of someone cheating by breaking the rules to explain the situation; you repeatedly try to ignore this counterargument without refuting it, as if you wrote no such thing. You did.
When you make an argument, it gets disproved by your own sources and then you make a different one; that is precisely what moving the goalposts is called. The same goes for misrepresenting those sources as if they supported your claim when it was very easy to demonstrate that was not the case.
My position is that your argument - that copying from AI is taking words and ideas from another person, and therefore always the same as plagiarism - is mistaken.
rainyday
Yes, yes, I get that. But that wasn’t the main question I was asking. I’m not trying to be argumentative here, I’m just curious if you have an opinion on what principles should govern the use of AI in relation to writing.
Well, if the sports analogy is still bothering you, it was in response to your definition of "talent" in writing to include the talent of using AI seamlessly without being noticed. These seem to be separate skill sets to me, the same way that cheating on a doping test is a separate skill set from playing the actual sport. I did not mean that to be an analogy to the specific incident in the article, but rather to the fact that in general, if we accept the use of AI as part of writing talent as you suggest, it is also going to include cases that probably go well beyond what people would consider within the bounds of what is acceptable (which is likely different from what the rules might say, thus the problem). We thus need a discussion on where those boundaries lie, which is what I've been trying to engage in.
You can certainly pick that apart for contradictions, inaccuracies, goal post shifting or whatever, but this is the comment section on JT, not a peer-reviewed journal, and I'm making these comments on my iPad while multitasking a few other things.
virusrex
You asked what my position was, and that is it. That is the purpose and the meaning of my comments; it is not like anybody can force anybody else to discuss something specific instead of what they came to comment about.
Precisely because it is not against the rules and it completely fulfills the purpose of writing a novel, against which you used an invalid analogy where a clear, explicit breach of the rules was committed. Nobody can be blamed for taking that as you accusing the author of cheating, simply because he would still have written a prize-winning novel even while including answers from AI. The author in this article did not disobey any rule, did nothing that was explicitly or implicitly forbidden, and completely reached the goal of writing a book entertaining enough to win a prize thanks to his ability with the language.
Again, looking at the current situation and how the ability of AI to write things is already declining because of inevitable corruption, that is a huge leap of faith. There are already books written by AI: terrible, mistaken, nonsensical books that even as simple guides (not something to be recognized for artistic talent) fail completely in their purpose. Thinking that AI can magically solve the expected degradation and begin to write things of quality without huge help from a real writer with talent is not realistic. But even if this were magically solved, the solution would be simple: make it a rule (even if arbitrary in the absence of advantages) for writers not to use it (and hope that the magical solution does not also make it impossible to detect).
That is no excuse to misrepresent a source and pretend it said something contrary to what it actually said; it would be a perfectly valid reason, for example, not to reply to something, precisely because this is a comment section where people are free to write (as long as it is within the rules of the site) without any problem.
rainyday
No, that is not it, I also asked:
This was out of curiosity; if you don't want to answer, fine, move on, but don't pretend the question wasn't put to you.
Asking your opinion about something - which I should note is the central question raised by the article and not some random irrelevant thing I’m injecting into it - is not “forcing” you to do anything. If you don’t want to talk about what I have made clear that I want to talk about then kindly stop responding to my posts.
If I didn’t make might point clear the first time then yes, nobody is to blame but me for that. But I’ve been explicitly telling you what I meant for several posts now yet you keep coming back to tell me what I meant rather than listening to what I’m actually saying I meant, so the blame is on you for that.
This might be the case, I don’t actually know what is possible going forward (the fact that the problem exists today doesn’t mean it won’t be solved even if the solution is not apparent now. I understand the nature of the degradation problem, which is inherent in the way these things learn and produce content, but it doesn’t seem like something which is fundamentally never going to be solved). Also even if it’ll never be perfect at writing something as long as a novel, its already capable of writing shorter content that can be made “readable” with just editorial/proofreading work by humans, so the issue is not just hypothetical but one that seems to require an answer today in that context.
Yeah, this is the discussion I was trying to start. Is it really necessary to ban writers from using it? Or for example would allowing its use but requiring disclosure of it suffice? Or some other rule or set of rules? If there are benefits to using AI but only within certain limits then we need to make those limits clear - this is exactly the point being made in the article. Banning it will cost us that benefit, allowing a free for all on the other hand will create other problems. We need rules that lie somewhere in between and that is where intellectual effort needs to be directed.
Debating whether what this one author did is OK or not is at best only tangentially related to those broader and more important questions, hence my lack of interest in getting bogged down in that debate (despite your repeated insistence to the contrary).
Uh yes, point already made; you’ve explained yourself, I’ve explained myself, no need to keep belaboring the point. Passing off the work of AI as your own is not allowed. I mistakenly said that Cambridge defined it as plagiarism, when it in fact defines it as academic misconduct (which includes but is not limited to plagiarism). You can interpret this as either an honest mistake on my part in the context of a casual exchange on this forum, or a deliberate, calculated act of deception that requires you to constantly re-litigate the point long after I’ve conceded it.
Yes, feel free to do so.
virusrex
What I did was answer your question: you made up a claim and asked if that was my position. I could have simply answered no, but instead I clarified what it was; if you don't want to accept that answer, that is not something I can help with.
In the same way, proving that you misrepresented a source in no way forced you to do it despite not having the time to properly present what was said in those sources; yet you still claim you had to do it this way without anybody even pressuring you.
I am telling you that it is natural and logical to think you accused the author of cheating when you used an example of a cheater as the analogy. You can say that this was not what you meant, or that you were mistaken in using that analogy, but that does not make this interpretation any less valid.
Which still means that assuming it will be solved is as valid as assuming it will keep being a problem for literature as well; both depend completely on arbitrary assumptions.
More important to you, but not necessarily to everybody else. Someone may be more interested in clearing a discussion of invalid arguments than in trying to justify decisions that, for all anybody knows, are completely arbitrary or driven only by concerns of popularity or economic profit.
In the context of an educational institution that is mainly interested in evaluating the writing as a proxy for the capacity of a student to correctly understand and explain a subject. Something that is of course not the case for the article.
Which is again a much better option than misrepresenting a source; that is the whole point.
zibala
So you recognize you didn't stay on point.
zibala
Very well argued; I didn't see any of your points being contradicted.
rainyday
About half of what you’ve written has been misrepresenting what I’ve written, a far worse offence than a minor error about a source which I acknowledged as soon as you pointed it out.
There is no point in furthering this discussion, I’ve wasted way too much time on this already. Feel free to have the last word if you so desire.
virusrex
By answering the original question? That would be precisely staying on point.
That would not be correct; I clearly explained how it is not illogical to assume you think the author cheated when you used doping and cheating as an analogy for what he did. I clearly demonstrated your sources did not say what you claimed they said; what part of that is misrepresentation?
I keep wondering why you think anybody is being forced to discuss here. The whole point is doing it as long as you find it a productive use of your time, so the default position should be not to do it if you find it uninteresting or boring, or if you realized you made a mistake and no longer want to defend that mistake.
gcFd1
Agreed--well laid out argument.
virusrex
That argument was defeated the moment there was no justification to assume AI is a person from whom content can be stolen. There is no "work of others", as clearly proved by the references provided.
zibala
The novel by 33-year-old Rie Kudan, titled Tokyo-to Dojo-to ("Sympathy Tower Tokyo"), is set in the Japanese capital in the not-so-distant future when generative AI has become ubiquitous, and took the 170th Akutagawa Prize for up-and-coming authors in January.
Looking forward to an AI translation.
Good point. It is plagiarism.
virusrex
If the argument were correct then it would also be fabrication or collusion, which it obviously is not. Being part of a bigger category does not mean something is also part of every subcategory; that is clearly illogical.
zibala
Totally agreed. All on point.
virusrex
But at least the current example is not plagiarism then, there being nobody else from whom the text was stolen; clearing this up is a good first step toward considering the actual consequences of the actions.
gcFd1
This is agreed.
Good, solid analogy.
It's a classic case of plagiarism: passing off work that is not one's own as one's own.
AI was taught by humans.
virusrex
Agreeing with something without supporting it with an argument means you are agreeing with something that has been disproved.
Except on the part where the author of the analogy already accepted it is completely inappropriate, since the author of the novel did not cheat in any way nor disobey any rule, so the opposite of solid.
Yet the original argument was that plagiarism was taking the work of another, which in this case is still invalid since there is no other to take the work from.
rainyday
Is it really necessary to still be dredging this conversation up? I was hoping to move on, but it's irritating to see my words being misrepresented.
Read the section of text he is agreeing with. It was an opinion, not a statement of fact that is even capable of being “disproved”. You may have disproved other things, but not that. Accuracy is important, no?
Except that the author - me - never said it was “completely inappropriate”, he just said that he didn’t mean for it to apply to the specific case in the article. In other cases involving the use of AI by an author, depending on the circumstances, I think it could be a completely appropriate analogy.
Misrepresenting what sources say is bad, right? Please stop doing it.
Point taken. It would be more accurate to say that plagiarism is, as the previous comment suggested, passing off work that is not your own as if it were your own. Very poor wording on my part and yes, you did disprove the validity of the definition I originally used.
Now lets please move on.
virusrex
As necessary as anybody interested finds it, just as you found it necessary to leave after saying you had no more interest (only to come back).
Why would you not move on? It is not like anybody is forcing anybody to come and read comments, much less to write anything.
Therefore completely inappropriate for the specific situation it was used for. Saying it was a crime, for example, could apply to other, completely different situations, but it would also be inappropriate for this one. There is no misrepresentation here, unlike the sources you brought, which actually contradicted what you claimed they said.
As you yourself wrote before saying you had no more interest in the topic: "Yes, feel free to do so." I still can't understand participating in a conversation as if it were somehow forced, especially after already saying there was no point in doing it or that it was a waste of time. For those who still see a point, there is no problem in continuing.
rainyday
Because you keep misrepresenting what I said.
Not what I said, not what I meant. That is your view, not mine. Stop attributing things to me which I did not say.
That is no excuse for you continuing to misrepresent what I said. Please refrain from doing so.