Japan's AI draft guidelines ask for measures to address overreliance
TOKYO
The requested article has expired and is no longer available. Any related articles and user comments are shown below.
© KYODO
10 Comments
diagonalslip
All sounds rather theoretical... not to say fanciful...
indigo
An individual's consent is not protected at all. In Japan, anybody can claim that agreement was given orally, without any evidence!
Sven Asai
Of course the vast majority of datasets used for machine learning are based on real-world observations and are therefore somewhat biased; otherwise AI would be pointless, and we could all instead simply use averages, linear regression, or calculated theoretical statistics and distribution models to determine the expected (but wrong) standard outcomes. Obviously they don't know what they are talking about, or what they are making these new, senseless rules for.
factchecker
No doubt those guidelines will be circulated by fax machines.
dagon
Bias is a hard thing to suss out, as Google and other LLM developers have found.
In the case of Japan, will it filter out the hierarchical prejudices, the often wrong received wisdom sometimes called 'common sense' that is used in place of critical thinking?
The tech mavens of the LDP/Japan Inc. combine need to listen to the voices at the forefront of AI alignment.
JRO
I'm someone who is very likely to lose out a lot from the advancement of AI: from the start, every art-related work I had published on the internet has been sitting in active datasets without my approval. Very conveniently, all the websites where we publish art had very little in place against that type of use of our content. After a lot of hate they finally made some rules against machine-learning use, but they only count towards uploads made after the new rules took effect and won't apply to the decades of data that has already been stolen. As a final bonus they made it opt-out, so the 90% of users who aren't aware still silently approve of such use of their content.
Sadly I don't think there is much to be done. If progressive countries make unrealistic laws against AI, countries such as China will just do what we won't, and the money will end up in even worse people's pockets.
ushosh123
The regulators basically have no idea what they are talking about. It's also hard to imagine them becoming AI experts.
Just have it open-sourced (training data, methodology, etc.) and have some sort of liability framework to hold them accountable.
theFu
My main concern is that users need to be told, then told again and again and again, that they are dealing with AI. A watermark or some other extremely clear way to notify users they are using AI needs to be legally mandated. Also, a way to contact a real human needs to be legally mandated. Ever tried to contact a real human at Google? I have. There's no way to reach someone responsible. They all provide "feedback" ... which is fed into the machine but never responded to by a human. That needs to be illegal.
桜川雪
The measures to reduce reliance on AI are then written by AI.
GillislowTier
Too late. They are already raising a generation of users who turn to AI before thinking. My kid's friends always break out the iPad the school gave them to translate stuff without even a thought. It's one thing if it's a big, uncommon word, but I literally saw one of them use it for "good morning"... This is at 14 years old, by the way.