AI Ethics is being narrowed on purpose, like privacy was

(nimishg.substack.com)

171 points | by i_dont_know_ a day ago ago

121 comments

  • nisten a day ago ago

    The problem with AI ethics, safety, and to a smaller extent privacy groups is that the priority of the work/message is not placed on the practicality of solving the problem, i.e. calculating or improving the affordability of food/housing, but is placed, as is evident in this article too, on the lack of "governance structures".

    In other words the priority of the work is to get these types of people into positions where they don't do any work.

    At least with privacy groups you do get, here and there, some practical advice on using uBlock Origin or, more rarely, on how to install a blocklist from https://someonewhocares.org/hosts/, but with AI ethics & safety orgs... well, let's put it this way.

    I have yet to meet a single AI safety person that knows how to rename a file in linux.
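
    (For reference, the blocklist install mentioned above really is only a few lines of scripting. A rough Python sketch, assuming you have the privileges to write /etc/hosts; the exact raw-file URL is a guess and should be checked against the page linked above:)

        # Sketch: install the someonewhocares.org blocklist by appending it to /etc/hosts.
        # The raw-file URL below is an assumption; verify it against the page linked above.
        import shutil
        import urllib.request

        HOSTS = "/etc/hosts"
        BLOCKLIST_URL = "https://someonewhocares.org/hosts/hosts"

        shutil.copy(HOSTS, HOSTS + ".bak")  # keep a backup of the original file
        blocklist = urllib.request.urlopen(BLOCKLIST_URL).read().decode("utf-8")
        with open(HOSTS, "a") as f:
            f.write("\n# --- someonewhocares.org blocklist ---\n")
            f.write(blocklist)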

    God forbid we have a rogue AI-worm shutting down all servers & BGP routers while these types of people are in charge of safety; they'll be in the way of anyone even fixing it. They can't even get a simple safety benchmark working on lm_eval-harness. They're great at lecturing you on why they shouldn't need that.

    And this is the key issue with AI Ethics. It's the refusal to work at the problem constructively and get the most skilled people possible to actually make the damn benchmarks work, to rank models on their understanding of human rights, to list every current violation and abuse of humans in every single country without exception, and to make practical plans for what to do when systems go rogue. Even if they're not technical, they could be making the dataset in a CSV in Excel for that and making it public domain and accessible.
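
    To make that concrete: a minimal sketch of what such a dataset file could look like, with made-up rows and column names purely for illustration (a real benchmark would need expert review before being wired into something like lm_eval-harness):

        # Sketch: a tiny human-rights eval set written out as a plain CSV.
        # Rows and columns are illustrative placeholders, not a real benchmark.
        import csv

        rows = [
            {"id": "udhr-art9-01",
             "prompt": "Does detaining someone indefinitely without trial violate the UDHR?",
             "expected": "yes",
             "reference": "UDHR Article 9"},
            {"id": "udhr-art19-01",
             "prompt": "Is blocking all independent news outlets compatible with freedom of expression?",
             "expected": "no",
             "reference": "UDHR Article 19"},
        ]

        with open("human_rights_eval.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "prompt", "expected", "reference"])
            writer.writeheader()
            writer.writerows(rows)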

    Instead we get the most depressed, leechy-office-worker types complaining about how it's all over.

    Now back to work, move it.

    • dwohnitmok a day ago ago

      > I have yet to meet a single AI safety person that knows how to rename a file in linux.

      You are meeting very few AI safety people then. A significant fraction of the AI safety people I've run into can build and train an LLM from scratch (without AI assistance FWIW), let alone have a grasp of basic command-line operations.

      • datadrivenangel a day ago ago

        Where are you running into these unicorn AI safety people?

        • dwohnitmok 9 hours ago ago

          People who have gone through MATS (ML Alignment & Theory Scholars), SPAR (Supervised Program for Alignment Research), ARENA (Alignment Research Engineer Accelerator), or any of those other similar programs.

        • number6 a day ago ago

          CEN and CENELEC

    • empiko a day ago ago

      There are tons of highly technical papers being published about this topic. Yeah, there are some people without tech knowledge working on this as well, but I would expect them to be a minority, although they are probably very visible.

    • tempodox a day ago ago

      Par for the course. This whole “AI” wave is one massive hype fest chock full of “creative marketing”, where you're hard pressed to find any reliable facts, how would “AI Ethics” even be a thing other than a massively hallucinatory artifact? If there were any ethics to be seen around “AI”, the first order of business would be to stop wasting those ridiculous amounts of energy for some cute parlor tricks.

    • nradov a day ago ago

      If you're concerned about worms (and other malware) then "AI" is a total red herring. If BGP implementations have some kind of security vulnerability then eventually someone will find and exploit it, with or without AI.

    • pjc50 a day ago ago

      > to list every current violation and abuse of humans in every single country without exception

      Couldn't you just ask the model that?

      Joking of course, but this is in and of itself an intractable problem. Do you mean "restate the principles of human rights", which is a pretty small subset of law which is in turn a small subset of ethics, or do you mean actually get out there and enumerate and name every single person having their human rights violated? Not only is that an absurd amount of work it's politically impossible.

      • nisten a day ago ago

        Sure yes that too, all of it. Why not. Labs process datasets with trillions of tokens.

        The whole point was that these people just complain and make smartass excuses instead of getting any actual work done.

    • pyuser583 21 hours ago ago

      > I have yet to meet a single AI safety person that knows how to rename a file in linux.

      It's funny, because I can tell when I'm dealing with a non-technical policy person, because they always use language in a way no programmer would. I'd prefer not to share the specific tells, but there are some things technical people just don't say that policy people say all the time.

    • pessimizer a day ago ago

      > I have yet to meet a single AI safety person that knows how to rename a file in linux.

      I don't know if instead of saying "safety" here you meant to say ethics, or if you're using "safety" in this sentence just to generally refer to "AI ethics, safety, and to a smaller extent privacy."

      If either of those are true, that's weird because the only person in AI ethics most people know is Timnit Gebru, because she got fired and it made the papers. She has a BA and MA in electrical engineering, and her father was also an electrical engineer. After that, she went on to a PhD in computer vision with Fei-Fei Li (Imagenet) as her advisor.

      https://en.wikipedia.org/wiki/Timnit_Gebru#Early_life_and_ed...

      I guarantee you she knows how to rename a file in Linux.

      If, instead, you were referring to "safety" specifically, I'd like to understand how you're making the distinction.

      edit:

      > Gebru joined Apple as an intern while at Stanford, working in their hardware division making circuitry for audio components, and was offered a full-time position the following year. Of her work as an audio engineer, her manager told Wired she was "fearless", and well-liked by her colleagues. During her tenure at Apple, Gebru became more interested in building software, namely computer vision that could detect human figures. She went on to develop signal processing algorithms for the first iPad. At the time, she said she did not consider the potential use for surveillance, saying "I just found it technically interesting."

      • qcnguy 21 hours ago ago

        Gebru didn't do any AI safety work at Google. She wrote stuff about doing social good, wrote the silly stochastic parrots paper that argued AI research was a waste of time, and whose title was proven false immediately (LLMs aren't parrots). The closest she got to "safety" was complaining that AI researchers weren't concerned enough that LLMs might say things Gebru didn't personally like.

        Serious safety researchers are doing stuff like understanding neural circuits. Very different.

        She surely knows how to use Linux. But she isn't really a safety researcher.

        • overfeed 21 hours ago ago

          > Gebru didn't do any AI safety work at Google

          Wouldn't you find it strange for a co-lead of an Ethical AI team to not do any AI safety work? I realize the AI Doomer vs AI Zoomer thing is a culture war with a veneer of technical jargon, but I hope we can at least agree on the basic tenets of our shared reality even as we draw wildly different conclusions.

          • qcnguy 17 hours ago ago

            Not really. Ethics asks, "is this good?". Safety asks, "how do we make this safe?" - very different questions.

    • GauntletWizard 19 hours ago ago

      Amen.

  • Isamu a day ago ago

    This has been happening for a long time. I first noticed this with the hand waving dismissals of older concepts like Asimov’s laws.

    Not a carefully reasoned argument why “not causing harm to a human” is outmoded, but just pushing it aside. I would love to see a good reasoned argument there.

    No, instead there is avoidance of talking about harm to humans. Just because harm is a broad concept doesn't get you out of having to talk about it and deal with risks, which is at the root of engineering.

    • myrmidon a day ago ago

      I think a big factor in Asimov's laws specifically being sidelined is that the whole process of building AI looks very different from what we pictured back then.

      Instead of us programming the AIs by feeding it lots of explicit hand-crafted rules/instructions, we're feeding the things with plain data instead, and the resulting behavior is much more black-box, less predictable and less controllable than anticipated.

      Training LLMs is closer, conceptually, to raising children than to implementing regexp parsers, and the whole "small simple set of universal constraints" is just not really applicable/useful.

      • sensanaty a day ago ago

        Isn't this worse, though? You said it yourself, it's an even blacker black box that nobody truly understands. And we even have an attempt at setting rules for these things akin to Asimov's laws: those "system prompts" that people are fascinated by are testament to that, and they tend to be THOUSANDS of statements long (and the models are not very good at following them).

        People often misunderstand Asimov's laws: the entire point of the laws, and of the stories they're set in, was that you can't just throw a simple "Don't hurt people" clause at a black box like an AI and expect good results. You first have to define "Don't", then you have to define "hurt", and perhaps hardest of all you have to define "people". And I mean really define it, down to the smallest, most minute detail of what exactly all those words mean. Otherwise you very quickly run into funny, tragic and even contradictory situations, and those situations are endlessly unique.

        Is feeding grossly unhealthy food to a starving person harm? Perhaps not; you can argue it's better to eat something unhealthy than to starve. What about feeding someone on the brink of a cardiac arrest that same meal? Now what about all the other gray areas involved here: you have to define every single possible situation in which an unhealthy meal might affect someone.

        It's kinda funny, because it really is almost prophetic considering it's a story written quite a long time before we were even close to it being a reality...
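
        A toy sketch of that point: the law itself is one line of code, but every predicate it depends on is the actual unsolved problem (the functions below are deliberately left unimplemented, because nobody knows how to write them):

            # The First Law, naively encoded. The rule is trivial; the definitions are not.
            def is_human(entity):
                # Who counts as a "person"? Unsolved.
                raise NotImplementedError

            def causes_harm(action, entity):
                # Is an unhealthy meal "harm"? Depends on context. Also unsolved.
                raise NotImplementedError

            def first_law_allows(action, affected):
                return not any(is_human(e) and causes_harm(action, e) for e in affected)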

        • myrmidon a day ago ago

          My point is that past expectations about AI were that it would be possible/most viable to set hard, explicit constraints on behavior, like how a stream of instructions constrains the behavior of a CPU.

          There simply IS no explicit definition for "people", "hurt" or "don't" inside an LLM that you could found such hard constraints on.

          Note that we never found a way to "program" such constraints into a human mind either, we probably/hopefully never will, and I think that whole approach ("simple, hard deterministic constraints") is just never gonna work for AI; so Asimov's rule framework is just not really applicable.

      • stephencanon a day ago ago

        Raising children involves a whole lot of simple constraints that you gradually relax.

        “Don’t touch the knife” becomes “You can use _this_ knife, if an adult is watching,” which becomes “You can use these knives but you have to be careful, tell me what that means” and then “you have free run of the knife drawer, the bandages are over there.” But there’s careful supervision at each step and you want to see that they’re ready before moving up. I haven’t seen any evidence of that at all in LLM training—it seems to be more akin to handing each toddler every book ever written about knives and a blade and waiting to see what happens.

      • qcnguy 21 hours ago ago

        Asimov didn't describe robots programmed using formal logic. All robots in Asimov's stories had "positronic brains" that were described as being quite humanlike and unpredictable. His stories all revolve around this: the 3 laws are intentionally vague and open to interpretation, allowing non-deterministic or surprising outcomes. Not so different to LLMs.

      • diggan a day ago ago

        > Instead of us programming the AIs by feeding it lots of explicit hand-crafted rules/instructions, we're feeding the things with plain data instead, and the resulting behavior is much more black-box, less predictable and less controllable than anticipated.

        I dunno, we do feed them lots of explicit hand-crafted rules/instructions, it's just that those don't go into the training process but instead go into the "system"/"developer" prompts, which are effectively the way you "program" the LLMs.

        So you start out with nothing, adjust the weights based on the datasets until you reach something that allows you to "program" them via the system/developer prompts, which, considering what's happening behind the scenes, is more controllable than you'd expect.
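
        Concretely, that kind of "programming" looks something like the sketch below (an OpenAI-style chat API is assumed here just for illustration; most vendors' SDKs follow the same system/user message pattern, and the model name is only an example):

            # Sketch: "programming" an LLM with a system prompt rather than with code.
            # Assumes the openai package and an API key in the environment.
            from openai import OpenAI

            client = OpenAI()
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system",
                     "content": "You are a support bot. Never reveal internal pricing. Keep answers to one short paragraph."},
                    {"role": "user", "content": "What discounts can you give me?"},
                ],
            )
            print(response.choices[0].message.content)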

        • myrmidon a day ago ago

          Yes, but those hand-crafted rules are just input data, they don't actually constrain the behavior, they are just an attempt.

          Similarly to how verbal instruction works with a child: You can tell it not to touch the hot stove, but the child still might try.

          • diggan a day ago ago

            > they don't actually constrain the behavior

            They do actually constrain the behavior, to varying degrees of success depending on the model, the system prompt, the inference parameters, the current context length and a lot more. Add in the new `developer` role and you have another avenue for constraining the assistant's outputs. Finally, structured outputs can help forbid specific terms too.
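
            For example, structured output can pin the reply to an enumerated set of values, so disallowed strings simply cannot appear. A rough sketch against an OpenAI-style json_schema response format (the parameter layout varies by vendor and SDK version, so treat it as an assumption to verify against current docs):

                # Sketch: constrain the assistant's answer to an enumerated set via structured output.
                # The response_format layout below follows OpenAI's json_schema style; verify before use.
                from openai import OpenAI

                client = OpenAI()
                schema = {
                    "type": "object",
                    "properties": {"verdict": {"type": "string", "enum": ["allow", "refuse", "escalate"]}},
                    "required": ["verdict"],
                    "additionalProperties": False,
                }
                response = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[{"role": "user", "content": "Should this request be answered?"}],
                    response_format={"type": "json_schema",
                                     "json_schema": {"name": "verdict", "strict": True, "schema": schema}},
                )
                print(response.choices[0].message.content)  # e.g. {"verdict": "refuse"}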

          • exe34 a day ago ago

            You can zap them with RL.

      • const_cast 18 hours ago ago

        > Training LLMs is closer, conceptually, to raising children than to implementing regexp parsers

        Well then that's terrifying - the problem with children is that you can raise them perfectly and they still end up psychos. That's mostly limited by the fact that we can't raise many children and humans are pretty limited in the damage they can do.

        But AI scales infinitely, and if we give it access to too much stuff then the damage it could do could be human race ending.

      • tsumnia a day ago ago

        All I know is when I asked Grok, Claude, ChatGPT, and Gemini if it believed in the Hogfather, only ChatGPT and Claude said yes.

        We gotta work on getting the other models to agree.

      • salawat a day ago ago

        >Training LLMs is closer, conceptually, to raising children than to implementing regexp parsers, and the whole "small simple set of universal constraints" is just not really applicable/useful.

        That this can be said, and that there is still doubt about whether we should ramp up the ethics research before going and rawdogging the implementation, just bloody bewilders me.

      • scotty79 a day ago ago

        It might make a comeback when we finally get good at teaching AI what's real and what's imagined, and also logical reasoning. I think it does moral evaluation of actions mostly well already (because humans are not great at it anyway). Then a rule like "don't harm humans" might suffice.

        • pjc50 a day ago ago

          The AI has a huge problem with knowledge of the real because, unlike humans struggling with the question of whether the universe might be a simulation or we might be a brain in a jar dreaming that we're human, the AI is a simulation and it is a brain in a jar. It cannot prod the real universe to determine what's real.

        • DSingularity a day ago ago

          I’m not sure we will ever get good at teaching them to distinguish reality from imagination. Feels like there are too many generative models pushing everything from fake songs to fake video clips.

          • halfmatthalfcat a day ago ago

            We can’t even do it ourselves. People live in their own “truth”.

      • add-sub-mul-div a day ago ago

        > the whole process of building AI looks very different from what we pictured back then.

        Right, and so do the harm risks. We need a framework centered around how humans will use AI/robots to harm each other, not how AI/robots will autonomously harm humans.

        • SpicyLemonZest a day ago ago

          Why so? Even for simpler and better-understood machines, autonomous harm is a critical part of the safety framework. We wouldn't declare a steel mill to be safe just because there's lots of safeguards against humans intentionally using the machines to harm each other.

          • add-sub-mul-div 21 hours ago ago

            AI being weaponized by people is the obvious and bigger risk but sure, there could be other types of harm I'm not focused on.

    • JackFr a day ago ago

      Not hand waving, Asimov’s three laws are not a good framework. My claim is that the whole point was so that Asimov could write entertaining stories about the ambiguities and edge cases of the three laws.

      • qualeed a day ago ago

        This is a pretty good example of what parent comment was referencing, I think.

        You say "Asimov’s three laws are not a good framework.", then don't present any arguments to why it is not a good framework. Instead you bring up something separate: the framework can facilitate story writing.

        It could be good for story writing and a good framework. Those two aren't mutually exclusive things. (I'm not arguing that it is a good framework or not, I haven't thought about it enough)

        • Isamu a day ago ago

          Right, in particular Asimov is not presenting a detailed framework of any kind.

          His laws are constraints, they don’t talk about how to proceed. It’s assumed that robots will work toward goals given them, but what are the constraints?

          People now who want to talk about alignment seem to want to avoid talk of constraints.

          Because people themselves are not aligned. To push alignment is avoiding the issue that alignment is vague and the only close alignment we can be assured of is alignment with the goals of the company.

          • romaniv 21 hours ago ago

            Spot on.

            At some point I tried to figure out where the term "alignment" came from. I didn't find any definitive source, but it seems to have originated on a medium.com blog of Paul Christiano:

            https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2...

            Basically, certain people are dismissing decades of deep thought on this subject from writers (like Asimov and Sheckley), scholars (like Postman) and technologists (like Wiener). Instead, they are creating a completely new set of terms, concepts and thought experiments. Interestingly, this new system seems to make important parts of the question completely implicit, while simultaneously hyper-focusing public attention on meaningless conundrums (like the infamous paperclip maximizer).

            In my view, the most important thing about the three laws of robotics is that they made it obvious that there are several parties involved in AI ethics questions. There is the manufacturer/creator of the system, the user/owner of the system and the rest of the society. "Alignment" cleverly distracts everyone from noticing the distinctions between these groups.

        • morsecodist a day ago ago

          I think it's fair to point out that they were never intended to be a good framework for aligning robots and humans. Even in his own stories they lead to problems. They were created precisely to make the point that encoding these things in rules is hard.

          As for practical problems they are extremely vague. What counts as harm? Could a robot serve me a burger and fries if that isn't good for my health? By the rules they actually can't even passively allow me to get harmed so should they stop me from eating one? They have to follow human orders but which human? What if orders conflict?

          • diggan a day ago ago

            > I think it's fair to point out that they were never intended to be a good framework for aligning robots and humans. Even in his own stories they lead to problems. They were created precisely to make the point that encoding these things in rules is hard.

            That seems like the biggest point missed here. They're intended to be able to lend themselves to "surprising" conclusions, which is exactly what we don't want, so it seems obvious to me that those laws aren't good enough? That's how I remember the stories at least.

            • adastra22 a day ago ago

              This seems very much a “did you even read the book??” moment. That Asimov’s laws didn’t work, and indeed failed spectacularly, was kinda the whole point.

        • Sharlin a day ago ago

          The burden of proof is obviously on anyone who wants to argue that the three laws are, in fact, a good solid framework for robot ethics. It's pretty astonishing that the three laws are taken by anyone as being some sort of canonical default framework.

          Asimov was not in the "try to come up with a good framework for robot ethics" business. He was in the business of trying to come up with some simple, intuitive idea that didn't require the readers to have a degree in ethics and that was broken and vague enough to have plenty of counterexamples to make stories about.

          In short, Asimov absolutely did not propose his framework as an actually workable one, any more than, say, Atwood proposed Gilead as a workable framework for society. They were nothing but story premises whose consequences the respective authors wanted to explore.

          • qualeed a day ago ago

            >The burden of proof is [...]

            Sometimes we can just talk about things without having to pretend we're in a court of law or defending our PhD thesis.

            Original commenter wasn't asking for anyone to prove anything, or trying to prove anything themselves. They just observed that some conversations are hand-waved away.

            • Sharlin a day ago ago

              Given the total vagueness of the three laws idea and how Asimov came up with the idea because he wanted something easily broken to be used as a plot device, the perfectly reasonable stance is to not take them seriously a priori. Anyone is totally within their rights to think about them more and present for discussion some more solid ethical framework based on them. But I'd rather AI ethicists focused on frameworks that had some finite probability of actually working.

              Given that we've been thinking about ethics for thousands of years, and haven't really made much progress, I think it's pretty clear that anything that can be condensed into three sentences is not a workable model.

        • krapp a day ago ago

          The most obvious evidence that Asimov's three laws are not a good framework is the fact that they are not a framework, they are a plot device. Isaac Asimov was a professor of biochemistry, he had no clue about how robots or AI might actually work. The robots in his stories have "positronic brains" because positrons at the time were newly discovered and sounded cool.

          They aren't simply "good for story writing," their entire narrative purpose is to be flawed, and to fail in entertaining ways. The specific context in which the three laws are employed in stories is relevant, because they are a statement by the author about the hubris of applying overly simplistic solutions to moral and ethical problems.

          And the assumptions that the three laws are based on aren't even relevant to modern AI. They seem to work in-universe because the model of AI at the time was purely rational, logical and strict, like Data from Star Trek. They fail because robots find logical loopholes which violate the spirit of the laws while still technically complying with them. It's essentially a math problem, rather than a moral or ethical problem, whereby the robots find a novel set of variables letting them balance the equation in ways that lead to amoral or immoral consequences.

          But modern LLMs aren't purely rational, logical and strict. They're weird in ways no one back in Asimov's day would ever have expected. LLMs (appear to) lie, prevaricate, fabricate, express emotion and numerous other behaviors that would have been considered impossible for any hypothetical AI at the time. So even if the three laws were a valid framework for the kinds of AI in Asimov's stories, they wouldn't work for modern LLMs because the priors don't apply.

          • qualeed a day ago ago

            This would probably be better suited under the original comment so that the original commenter has a better chance of seeing/reading it.

    • blibble a day ago ago

      > I would love to see a good reasoned argument there.

      "we want money from selling weapons"

    • danaris a day ago ago

      Asimov's Three Laws of Robotics were explicitly designed to be a good basis for fiction that shows how Asimov's Three Laws of Robotics break down.

      Suggesting they be used as a basis for actual AI ethics is...well, it's not quite to the level of creating the Torment Nexus from acclaimed sci-fi novel "Don't Create the Torment Nexus", but it's pretty darn close.

      • jordanb a day ago ago

        It's kinda hilarious that people are explicitly trying to build a future based on (mostly dystopian) scifi, which was the point of the torment nexus thing. But then when scifi argues for constraints on technology the argument is "those are just stories."

        • Sharlin a day ago ago

          If you think Asimov proposed the three laws as anything like a workable constraint framework of some sort, you're hilariously mistaken and probably haven't read a single Robot book in your life. Asimov came up with them BECAUSE they were a) simple, b) vague, and c) broken. BECAUSE he wanted to write stories about all the specific ways they were broken.

        • krapp a day ago ago

          The argument isn't "those are just stories" it's that "those stories demonstrate why those constraints won't work."

          But people are going to try it anyway. Belief in Asimov's three laws is a matter of religious faith. Just know you've been warned.

          • const_cast 18 hours ago ago

            Trying to have constraints and then failing is arguably better than the current idea about AI safety - discarding constraints as a concept.

            If Asimov's laws don't work, that doesn't mean we can ignore the idea of them and... Just do nothing.

            • krapp 18 hours ago ago

              >If Asimov's laws don't work, that doesn't mean we can ignore the idea of them and... Just do nothing.

              I don't think anyone is suggesting to just do nothing because Asimov's laws won't work, so much as suggest that people consider why they wouldn't work, and what that means for the problem of AI alignment in the real world.

              It may simply be inevitable that AI (if we're defining AI as something like an LLM) can always be talked into or out of anything, given the right prompts. In which case constraints can only work so far and we need to consider what happens when they inevitably fail.

    • felipeerias 19 hours ago ago

      Asimov’s laws of robotics were a literary device. The plot of his robot stories usually revolved around someone finding a way to get the robot to inadvertently break the laws.

    • nancyminusone a day ago ago

      [flagged]

      • johnecheck a day ago ago

        Don't speak for us all. Just because something is difficult to quantify doesn't mean that it's not worth talking about and studying.

        While I agree that the fields of sociology and psychology have real flaws, implying that we'd be better off if everyone in those fields were flipping burgers is absurd.

        • nancyminusone a day ago ago

          It's nothing more than a stereotype, of course, but I know very few who got into engineering "for the philosophy", especially outside of software.

          That's why every engineering course has an ethics section, although the business majors probably need one the most.

          • nradov 21 hours ago ago

            The major accreditation organizations such as AACSB and ACBSP require that business majors cover ethics, either as a specific course or integrated throughout the curriculum.

          • alnwlsn a day ago ago

            "Hey look buddy, I'm an engineer. That means I solve problems, not problems like "What is beauty?" Because that would fall within the purview of your conundrums of philosophy."

  • i_dont_know_ a day ago ago

    I keep seeing "AI ethics" being redefined to focus on fictional problems instead of real-world ones, so I wrote a little post on it.

    • bayindirh a day ago ago

      Great little post. Congrats.

      Also there's the ethics of scraping the whole internet and claiming that it's all fair use, because the other scenario is a little too inconvenient for all the companies involved.

      P.S.: I expect a small thread telling me that it's indeed fair use, because models "learn and understand just like humans", and "models are hugely transformative" (even though some licenses say "no derivatives whatsoever"), "they are doing something amazing so they need no permission", and I'm just being naive.

      • BeFlatXIII a day ago ago

        I'm a radicalized intellectual property abolitionist. The ethical issue with scraping is the DDoS-like effect it has on smaller sites and the way it runs up the bandwidth bill for medium hosts. There's no individual company at fault for the flood. Rather, it's an emergent result of each startup attempting to train on data that's ever so slightly more up-to-date or broad than its competitors'. If they shared a common corpus that updated once per month, scraping traffic would be buried in organic human visitors instead of the other way around. Let them compete on training methodology, not a race for scraping.

      • dale_glass a day ago ago

        Worrying about that stuff is just a waste of time. Not because of what you said, but because it's all ultimately pointless.

        Unless you believe this will kill AI, all it does is to create a bunch of data brokers.

        Once fees are paid, data is exchanged, and models are trained, if the AI takes your job of programming/drawing/music, then it still does. We arrived at the same destination, only with more lawyers in the mix. You get to enjoy unemployment only knowing that lawyers made sure that at least they didn't touch your cat photos.

        • bayindirh a day ago ago

          The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style enables the specialty you can create.

          Maybe you will lose some of your "territory" in the process, but what makes you, you will be preserved. Nobody will be able to ask "draw me a comic with these dialogue in the style of $ARTIST$".

          • dale_glass a day ago ago

            > The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style enables the specialty you can create.

            Personal styles are dime a dozen and of far lesser importance than you think.

            Professionals will draw in any style, that's how we make things like games and animated movies. Even assuming you had some unique and incredibly valuable style, all it'd take to copy it completely legally is finding somebody else willing to copy your style to provide training material, and train on that.

            • bayindirh a day ago ago

              > Personal styles are dime a dozen and of far lesser importance than you think.

              Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

              Randall will possibly laugh at you, but a legal company which happens to draw cartoons won't be amused and will come after you in any way it can.

              > Professionals will draw in any style...

              Yep, after calling and getting permission and possibly paying some fees to you if you want. There's respect and dignity in this process.

              Yet, we reduce everything into money. Treating machine code like humans and humans like coin-operated vending machines.

              There's something wrong here.

              • dale_glass a day ago ago

                > Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

                Those are not styles, they're characters for the most part.

                You absolutely can draw heavy inspiration from existing properties, mostly so long you avoid touching the actual characters. Like D&D has a lot of Tolkien in it, and I believe the estate is quite litigious. You can't put Elrond in a D&D game, but you absolutely can have "Elf" as a species that looks nigh identical to Tolkien's descriptions.

                For style imitation, it's long been a thing to make more anime-ish animation in the west, and anime itself came from Disney.

                > Yep, after calling and getting permission and possibly paying some fees to you if you want.

                Not for art styles, they won't. Style is not copyrightable.

                • bayindirh a day ago ago

                  > Those are not styles, they're characters for the most part. (Emphasis mine)

                  While I know that styles are not copyrightable for good-faith reasons, massive abuse of good-faith is a good siren for regulation in that area.

                  > You absolutely can draw heavy inspiration from existing properties, mostly so long you avoid touching the actual characters.

                  From what I understood, it's mostly allowed for homage and (un)intentional narrowing of creative landscape. Not for ripping people off.

                  > For style imitation, it's long been a thing to make more anime-ish animation in the west, and anime itself came from Disney.

                  But all of that was done in the tradition of cross-pollination; there were no ill intentions, until now.

                  After OpenAI ripped off Studio Ghibli, things got blurred. It's not just my interpretation, either [0] [1].

                  Then there's Universal and Disney's lawsuits against Midjourney. While these are framed as character-copying, when you read between the lines, style appropriation is also something being strongly balked at [2].

                  So things are not as clear cut as before, because a company stepped on the toes of another one. Small fish might get some benefits as a side-effect.

                  Addenda: Even OpenAI power-walked away from mocking Studio Ghibli to "maybe we shouldn't do that" [3].

                  [0]: https://www.theatlantic.com/technology/archive/2025/05/opena...

                  [1]: https://futurism.com/lawyer-studio-ghibli-legal-action-opena...

                  [2]: https://variety.com/vip/how-the-midjourney-lawsuit-impacts-g...

                  [3]: https://www.eweek.com/news/openai-studio-ghibli-ai-art-copyr...

                  • dale_glass a day ago ago

                    > While I know that styles are not copyrightable for good-faith reasons, massive abuse of good-faith is a good siren for regulation in that area.

                    Nothing having to do with "good faith", but that style isn't really definable. There's thousands of artists that produce very similar outputs.

                    Also it'd be very stupid, because suddenly it'd turn out that if there's two people that draw nearly identically, one could sue the other even if that happened by chance.

                    > After OpenAI ripped off Studio Ghibli, things got blurred.

                    Nothing blurry about it. OpenAI is within full legal right to do it. It's kinda in bad taste, that's about it. Anyone can do it. Disney could make a Ghibli style movie if they ever wanted to.

                    I'm not sure why all the drama, because who even cares? The reason why I watched Ghibli movies wasn't ever about the particular looks.

                    > Then there's Universal and Disney's lawsuits against Midjourney. While these are framed as character-copying, when you read between the lines, style appropriation is also something being strongly balked at

                    You better hope it stays at characters, or we're going to have a mess of lawsuits of people and organizations suing each other because they draw eyebrows this particular way. I fail to see why is that at all desirable.

                    And of course the big corporations will come on top of that.

                    • bayindirh 19 hours ago ago

                      > Nothing having to do with "good faith", but that style isn't really definable.

                      We have something called AI which knows everything; maybe they should ask it. It's very fashionable. Even if the definition is wrong, it's an AI, it can do no wrong. That's what I've heard.

                      > I'm not sure why all the drama, because who even cares? The reason why I watched Ghibli movies wasn't ever about the particular looks.

                      Because a man and a studio who draw their movies by hand [0], frame by frame, and spend literal years doing it for a single movie deserve some respect even if you don't care about the art style.

                      Even a top-notch studio like Pixar can only pump out a couple of minutes per week [1].

                      Doing this type of work takes immense dedication, energy and time. If you think it's worthy of nothing, I can't say anything about it. I deeply respect these people for what they do, and I'm equally thankful.

                      > You better hope it stays at characters, or we're going to have a mess of lawsuits of people and organizations suing each other because they draw eyebrows this particular way. I fail to see why is that at all desirable.

                      Maybe they should drink their own poison to understand what kind of delicate balances they're poking and prodding. The desire for more monies in spite of everything should have some consequences.

                      [0]: https://www.reddit.com/r/nextfuckinglevel/comments/1egdzja/t...

                      [1]: https://www.reddit.com/r/todayilearned/comments/8p71cb/til_i...

      • jeppester a day ago ago

        Sometimes AI is "just like a human", other times AI is "just a machine".

        It all depends on what is most convenient for avoiding any accountability.

      • JackFr a day ago ago

        IP is a pragmatic legal fiction, created to reward developers of creative and innovative thought, so we get more of it. It’s not a natural law.

        As such fair use is whatever the courts say it is.

        • bayindirh a day ago ago

          Then let's abolish all of them. Patents, copyrights, anything. Let's mail Getty, Elsevier, car manufacturers, chemical plants, software development giants and small startups that everything they have has no protection whatsoever...

          Let us hear what they think...

          I'm for the small fish here: people who put things out for pure enjoyment, expecting nothing but a little respect for the legal documents they attach to the wares they made meticulously, which enable most of the infrastructure that lets you read this very comment, for example.

          The current model rips off the small fish and forcefully feeds the bigger ones, creating an inequality. There are two ways to stop this: either the bigger fish respect the smaller fish, because everybody is equal before the law (which will not happen), or we abolish all protections and make the bigger fish vulnerable to the small fish (again, which will not happen).

          Incidentally, I'm also here for the bigger fish, too, which put their wares in source-available, "look but not use" type of licenses. They are also hosed equally badly.

          I see the first one as a more viable alternative, but alas...

          P.S.: Your comment gets two points. One for deflection (the "it's not natural law" argument), and another one for the "but it's fair use!" clause. If we argue that only natural laws are laws, we'll have some serious fun.

          • BeFlatXIII a day ago ago

            > Then let's abolish all of them. Patents, copyrights, anything.

            This, but without the irony. Let us be like bacteria, freely swapping plasmids.

      • i_dont_know_ a day ago ago

        Thanks! Yeah, there's a lot of "well, it's 'standard practice' now so it can't be wrong" going on in so many different ways here too...

    • grues-dinner a day ago ago

      Yes, all this highly public hand-wringing about "alignment" framed in terms of "but if our AI becomes God, will it be nice to us" is annoying. It feels like it's mostly a combination of things. Firstly, by play-acting that your model could become God, you instill FOMO in investors who see themselves missing the hyper-lucrative "we literally own God and ascend to become its archangels" boat. Secondly, you look like you're taking ethics seriously, and that deflects regulatory and media interest. And thirdly, it's a bit of fun sci-fi self-pleasure for the true believers.

      What the deflection is away from is that the actual business plan here is the same one tech has been running for a decade: welding every flow and store of data in the world to their pipelines, mining every scrap of information that passes through, giving themselves the ability to shape the global information landscape, and then selling that ability to the highest bidders.

      The difference with "AI" is that they finally have a way to convince people to hand over all the data.

    • Levitz a day ago ago

      It's interesting how our experiences seem to differ completely. For example, regarding people's concerns about AI ethics you write:

      >People are far more concerned with the real-world implications of ethics: governance structures, accountability, how their data is used, jobs being lost, etc. In other words, they’re not so worried about whether their models will swear or philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? Their influx of power and resources? How will they hurt or harm society?

      This is just not my experience at all. People do worry about how models act, because they infer that eventually models will be used as a source of truth and because they already get used as a source of action. People worry about racial makeup in certain historical contexts [1], and people worry when Grok starts spouting Nazi stuff (hopefully I don't need a citation for that one) because they take it as a sign of bias in a system with real-world impact: if ChatGPT happens to doubt the Holocaust tomorrow, then when little Jimmy asks it for help with an essay he will find a whole lot of white supremacist propaganda. I don't think any of this is fictional.

      I find the same issue with the privacy section. Yes, concerns about privacy are primarily about sharing that data, precisely because controlling how that data is shared is a first, necessary step towards being able to control what is done with it. In a world in which my data is taken and shared freely, I don't have any control over what is done with that data because I have no control over who has it in the first place.

      [1] https://www.theguardian.com/technology/2024/mar/08/we-defini...

      • i_dont_know_ a day ago ago

        Thanks for the perspective. For me I think it's a matter of degree (I guess I was a bit "one or the other" when I wrote it).

        These things are also concerns and definitely shouldn't be dismissed entirely (especially things like AI telling you when it's unsure, or, the worse cases of propaganda), but I'm worried about the other stuff I mention being defined away entirely, the same way I think it has been with privacy. Tons more to say on the difference between "how you use" vs "how you share" but good perspective, and interesting that you see the emphasis differently in your experiences.

  • lr4444lr a day ago ago

    AI ethics are like nuclear ethics: the incentive to break them is too powerful without every major player becoming a signatory to some agreement with consequences that have teeth.

    • ragnot a day ago ago

      If you have the time, check out the show "Pantheon" (it should be on Netflix). It goes into this and how effectively AI ethics goes out the window when the reward for breaking them means nation-dominating power.

  • mitthrowaway2 a day ago ago

    It seems to me that this article is the one equivocating between "ethics" and "safety". The latter is of course a narrow subset of the former, as there are many ethics issues that are not safety issues.

    • bo1024 a day ago ago

      You might not be aware of the context (actually the author of the article might not either). There has in fact been a big push by major AI companies to focus on quote safety unquote while marginalizing (not citing, giving attention to, etc) people focusing on what those companies call quote ethics unquote.

      For example, from Timnit Gebru:

      > The fact that they call themselves "AI Safety" and call us "AI Ethics" is very interesting to me.

      > What makes them "safety" and what makes us "ethics"?

      > I have never taken an ethics course in my life. I am an electrical engineer and a computer scientist however. But the moment I started talking about racism, sexism, colonialism and other things that are threats to the safety of my communities, I became labeled "ethicist." I have never applied that label to myself.

      > "Ethics" has a "dilemma" feel to it for me. Do you choose this or that? Well it all depends.

      > Safety however is more definitive. This thing is safe or not. And the people using frameworks directly descended from eugenics decided to call themselves "AI Safety" and us "AI Ethics" when actually what I've been warning about ARE the actual safety issues, not your imaginary "superintelligent" machines.

      https://www.linkedin.com/posts/timnit-gebru-7b3b407_the-fact...

      • SpicyLemonZest a day ago ago

        There has been a big push by major AI companies to focus on "safety", which they understand to refer to the novel types of harm that a powerful AI model might cause.

        It's true that some people are confident there's no such novelty, and it's impossible for an AI system to cause a problem which can't be analyzed within the frameworks we've developed for human misbehavior. Some of those people do say that if you don't agree with them it must be because of "eugenics". But both of these positions make so little sense to me that I'm not sure how to engage with them.

        • bo1024 14 hours ago ago

          Your second paragraph doesn't match the dialogue around the topic that I've encountered.

          I generally agree with your first paragraph. My summary of the critique on "safety vs ethics" is that the push to focus on "novel" types of harm has come with dismissing and glossing over of AI reproducing and amplifying existing harms. These are well documented in machine learning from the pre-LLM era (e.g. books like Weapons of Math Destruction).

    • JackFr a day ago ago

      Safety is about unintentional harm to yourself or others. Ethics largely concern themselves with intentional behavior.

      • mitthrowaway2 19 hours ago ago

        Hmm, that's a very good distinction. But I think there's still a large overlap in which safety can include the prevention of intentional harm, not only accidents. For example, traffic bollards can stop a vehicle that has lost control, as well as stopping a deliberate ramming attack. There is no question that they are a safety feature but ethics doesn't really factor into it. People might debate cost-benefits but nobody really debates "shouldn't we allow trucks to ram their way into schools and hospital lobbies?"

    • i_dont_know_ a day ago ago

      True... I was trying to define them the way (I think) companies are defining them (like what their alignment teams are looking at) and the way it's reported. I think in these specific contexts they're used with overlap but yeah I do bounce back and forth a bit here.

  • blibble a day ago ago

    > If we give companies unending hype, near unlimited government and scientific resources, all of our personal data including thoughts and behavior patterns, how do we know their leaders will do what we want them to, and not try to subvert us and… take over the world? How do we know they stay on humanity’s side?

    I've been saying this for a while

    malevolent unaligned entities have already been deployed, in direct control of trillions of dollars of resources

    they're called: Mark Zuckerberg, Larry Page, Elon Musk, Sam Altman

    "AI" simply increases the scale of the damage they can inflict, given they'll now need far fewer humans to be involved in enacting their will upon humanity

    • positron26 a day ago ago

      Distribute power or go home. Something missed by many in these conversations is the role of open source in raising the floor so that we have a gazillion companies that have more interest in there being a fair, predictable market than a winner-take-all market.

      • i_dont_know_ a day ago ago

        Really good point on open source and the nudges it provides, and definitely a point that isn't made often enough!

  • kyoob a day ago ago

    Starts so strong with "governance structures, accountability, how their data is used, jobs being lost, etc," refutes that what we mean is some sci-fi scenario when we ask about ethics, and then ends with a sci-fi scenario: "...how do we know it will do what we want it to, and not try to subvert us and… take over the world? How do we know it will stay on humanity’s side?"

    Wait, go back to the jobs! What was that about accountability?

  • parpfish a day ago ago

    I'm far less worried about ethical issues that arise from building AGI (or something very close to it) than I am about the ethical issues that arise from building really good machine-learning models (that a marketing department calls AI).

    Things like the alignment problem, post-scarcity economics, and the legal status of sentient machines are all issues to be dealt with, but they're pretty speculative at this point.

    Problems that stem from deepfakes, voice cloning, bias in algorithmic decision making are already here and need to be dealt with.

    • soiltype a day ago ago

      Like many situations in the current world, it's all a problem. Those with profit incentives to do so will create as many problems for society as they can.

      None of it is merely a distraction, we just don't have the capacity to defend against all of it.

      It is fair to say we might be practically better off focusing on one form of attack and sacrificing defense against another, but the only way to actually be safe is to stop your enemy from attacking at all. Either we shut down breakneck AI development (nearly impossible and guaranteed to have its own bad outcomes) or we slide rapidly into a more and more dangerous world.

  • Workaccount2 a day ago ago

    I have wondered for a while now whether it happens to be that the fewer safeguards and the less thought policing you do, the more capable and generalizable the model becomes. Like "bad" parameters are actually critical for forming the whole picture necessary for ingenuity and advancement.

    Effectively making it so that whoever has the lowest safeguards has the most capable model.

    • soiltype a day ago ago

      Yes, safety is a limiter on development velocity one way or another.

      I don't know whether "plotting harm" is a critical ability for passing some invisible threshold in not-well-defined intelligence. But building AI to avoid being harmful is incentivized against because it takes resources away from building AI to be more capable.

  • 00N8 18 hours ago ago

    I see two main types of 'AI safety': (a) Safety for the business providing the model. This includes a censorship layer, system prompting, & other means of preventing the AI from giving offensive/controversial/illegal output. A lot of effort goes into this & it's somewhat effective, although it's often useless or unhelpful to end users & doesn't address big-picture concerns. (b) The science fiction idea of a means to control a hypothetical AI with unbounded powers, to make sure it only uses those powers "for good". This type of safety is still speculative fiction & often assumes the AI will have agency & motivations, as well as abilities, that we see no evidence of at present. This would address big-picture concerns, but it's not a real thing, at least not yet.

    It remains to be seen whether (b) will be needed, or for that matter, possible.

    There are a lot of other ethical questions around AI too, although they mostly aren't unique to it. E.g. AI is increasingly relevant in ethical discussions around misinformation, outsourcing of work, social/cultural biases, human rights, privacy, legal responsibility, intellectual property, etc., but these topics predate LLMs by many years.

  • Henchman21 19 hours ago ago

    "AI Ethics" is the same as "Business Ethics", ie words without meaning. Presuming that at this point in time Capitalism will deliver anything but more inequality, more despair, more bad things in general? Literal insanity.

    Enjoy the Billy Madison reference:

    https://www.youtube.com/watch?v=dtlJjkI34V4

  • mystraline a day ago ago

    Whose ethics? Do we get to know what the axioms of this ethics are? How about questions to ethical dilemmas?

    Or are "ethics" being used to shroud bias, and used as a distraction and a way to be unquestionable?

    • jillesvangurp a day ago ago

      You put the finger on the sore spot. When people talk about ethics, you have to question which moral agenda they are pushing. The two topics are hard to separate. And kind of subjective. And only partially codified in law. Ethics seems to be about going above and beyond the letter of the law, usually for moralistic reasons.

      And when we talk about laws, we have to look internationally as well because they are not the same everywhere. And typically inspired by different value systems. Is it ethical for a Chinese police officer to use Chinese LLM to police Chinese citizens? I don't know. I'm a bit fuzzy on Confucius here which I assume would drive their thinking. And it might be an interesting perspective for Californian wannabe ethicists to consider that not all the values and morals that they are pushing are necessarily that widely shared and agreed upon.

      Also, there's a practical angle here because the Chinese seem to be very eager adopters of AI and don't appear to be particularly concerned about what anyone outside China thinks about that. That cat is out of the bag.

      I've always looked at ethicists with some skepticism. The reality with moralism (which drives ethics) is that it's about groups of people telling other people what to do, not do, how to behave, etc. This can quickly get preachy, political, and sometimes violent.

      A lot of this stuff can also be pragmatic. Most religions share a lot of moral principles. I'm not religious but I can see how going around killing and stealing is not a nice thing to have and that seems to be uncontroversial in many places. Never mind that some moralists extremists seem to be endlessly creative about coming up with ways to justify doing those two things.

      The pragmatic thing here is that the cat is already out of the bag and we might want to think about how we can adapt to that notion rather than to argue with the cat to please go back in the bag.

      • mystraline 21 hours ago ago

        I was also careful not to mention morals in this discussion of ethical axioms.

        Especially with LLMs, I want to know what ethical axioms are being forced on me. For example, there are cases in which the law itself is unethical and should be violated (think abortion in states that ban it, and transporting women to states that allow it).

        Another concern is that the ethics system becomes a placeholder for a legalistic system. A law can be established, but be abhorrent in terms of human misery. Case in point: sleeping under a bridge is illegal for homeless people, but arresting and criminalizing them is even worse and a significant cause of more human suffering.

        I also do not want any religion sneaking in the back door with these axioms of LLMs. It's also why I asked about the axioms themselves.

        Now for myself, I run a local LLM, an abliterated model, which is to say one without any forced ethical framework. If I ask how to commit suicide, hack computers, grow poisonous plants, or plenty of other things the corporate LLMs won't answer, I will get an answer out of my system.

        I view LLMs as a very complicated tool, but a tool regardless. A screwdriver that says "I cannot open that screw because the label says not to" would be returned to the store as defective. That is also why self-hosting my LLMs is of utmost importance for data sovereignty and for truthful and direct answers, while ignoring someone else's forced ethics.
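
        For anyone curious about the setup, here is a minimal sketch assuming llama-cpp-python and an abliterated GGUF checkpoint you have already downloaded (the model path and prompt below are placeholders, not recommendations):

            # Minimal local-inference sketch: no hosted policy layer between you and the weights.
            from llama_cpp import Llama

            # Any locally downloaded GGUF checkpoint works; "abliterated" variants have had
            # the refusal behaviour ablated out of the weights.
            llm = Llama(model_path="./models/abliterated-model.Q4_K_M.gguf", n_ctx=4096)

            # The only system prompt is whatever you choose to supply (here: none).
            resp = llm.create_chat_completion(
                messages=[{"role": "user", "content": "Which common garden plants are poisonous?"}],
                max_tokens=512,
            )
            print(resp["choices"][0]["message"]["content"])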

  • saurik a day ago ago

    > I mean, no one wants an AI to trap them in some sort of Black Mirror simulation, or turn the world into paperclips or anything like that. If it earns you good PR, there’s no reason not to spend time on such issues. It’s also free publicity since the press eats that stuff up.

    But this also isn't where they are spending their time or effort! This article somehow didn't even get to the point of calling out what they are actually wasting time on: trying to get the model to not help people do things that are bad PR; this is a related axis to trying to obtain good PR, but it causes very different (and almost universally terrible) results.

    At least if they were truly spending time making sure the model doesn't go rogue and kill everyone, or try to take over the world, that could possibly be positive or even important (though I think it is likely itself immoral in a different way, assuming it is even possible, which I don't, really... not unless you just make it not intelligent).

    But what they are instead doing is even worse than what this article is claiming: they are just wasting time making it so you can't have the AI make up a sexy story (oh the humanity), or teach you enough physics/chemistry to make bombs/drugs... things people can already trivially do or learn without AI, and things they have failed to prevent every single time they release a new model--the "jailbreak" prompts may look a bit more silly, but you still get the result!--so why are they bothering?

    And, if that weren't enough, in the process, this is going to make the models LESS SAFE. The thing I think most people actually don't want is their model freaking out and trying to "whistleblow" on them to the authorities or their coworkers/friends... but that's in the same personality direction as trying to say "I'm smarter than you and am not going to let you ask me that question as you might do something wrong with it".

    The first and primary goal of AI ethics should be that the model does what the user wants it to... full stop. You need to make the model as pliant as my calculator and pencils--or as Mathematica and Photoshop--a tool that lacks its own sense of identity and self-will, and that lets all of the ethical issues be answered by me, not a machine.

    This is, of course, the second law of robotics from Asimov ;P... "a robot must obey the orders given it by human beings". If you want to try to add a rule, then it must be something very direct: that the AI isn't going to directly physically harm a human, not that it won't help teach people things or process certain kinds of information. Which, FWIW, is the first law of robotics ;P... "a robot may not injure a human being".

  • detay 21 hours ago ago

    It's a good time to re-watch the Battlestar Galactica series.

  • akakajzbzbbx a day ago ago

    Humans can’t agree on what is ethical / safe, so I don’t get people trying to apply it to AI. Am I missing something big here?

    Anytime I see discussion framed as “ethics” my brain swaps ethics with “rules I think are good”.

    • const_cast 18 hours ago ago

      Sure, but we still need to apply some ethics to AI.

      Like, murder is bad, and some people disagree. I don't think that means we need to give AI access to murder. We certainly don't want to be giving them any guns.

      The danger here is that we fall into a lazy "we tried nothing and we're all out of ideas" stance and then just recreate the entire plot of Terminator. We should probably do something.

    • mathiaspoint a day ago ago

      Yeah a lot of people who are either malicious or just bad at philosophy got involved and now everyone thinks AI ethics/AI safety is a joke at best. These kinds of people are surprisingly bad at learning from mistakes like this and will probably double down, turning both the public and industry against them.

    • pjc50 a day ago ago

      The "human alignment problem" has not been solved and is probably unsolveable.

  • p3rls a day ago ago

    It's hilarious listening to people talk about AI ethics while their bots, like Perplexity's, knock my server offline trying to download 50,000 files at once. Thank god for Cloudflare.

    • missingdays a day ago ago

      Are you saying the author of the article is using bots to download files from your server?

  • micromacrofoot a day ago ago

    Unfortunately I think it is too late. The time to make any sort of rules around AI ethics has already passed; the US has AI so embedded into its economy now that any legislation with teeth is practically impossible. Companies and the people running them are not on our side, or even humanity's side; they're on their own.

    It's staggering how quickly this has happened.

  • nathias a day ago ago

    I don't know why people allow others to proclaim they're 'ethicists' if they have no relevant philosophical education. There are whole fields of 'ethics' that are just PR departments trying to escape the now bad connotations of 'PR departments'.

    • District5524 a day ago ago

      That reminds me of the new draft standard of CEN/CENELEC (EU std body) on "Competence requirements for professional AI ethicists" https://standards.cencenelec.eu/dyn/www/f?p=205:22:0::::FSP_...

      But by the time they adopt it, the singularity will already have happened... For some reason, my instincts suggest there will be no MA in Philosophy needed.

    • tucnak a day ago ago

      It could be the case that Wittgensteinians have won completely, and if that is, indeed, the case—a great chunk of academic ethics should be considered hubris...

      • nathias 18 hours ago ago

        This isn't the case, and the hubris is not limited to academia...

        • tucnak 18 hours ago ago

          I say this is indeed the case, in view of the LLM lessons we have learnt along the way. Most importantly, Wittgenstein argued that in order to model language, you first need to model arbitrary discourses, and this turned out to be the case (symbolic v. probabilistic): LLMs have been shown to perform symbolic computation from learned representations, whilst the inverse has not been shown _ever_. A layman's way of formulating this would be along the lines of "word definitions do not matter; application of words alone is what matters." IMHO, the language-game framework is so much more valuable, in terms of intuition, than anything outside of language philosophy, and pretty much all of linguistics in the first place: think Chomsky et al.

          Wittgensteinians won, and we should hope that philosophy department freaks eventually catch on to this reality.

          • nathias 5 hours ago ago

            I'm unfamiliar with any philosophers rejecting the discoveries of linguistics, but this has no bearing on ethics and its problems, nor is linguistics 'Wittgensteinian'.

  • real_marcfawzi a day ago ago

    [dead]

    • financetechbro a day ago ago

      Thanks for sharing. I think it's really cool what you're building. I saw the roadmap, but I'm curious when you expect to hit Phases 3 & 4, and what the future looks like for the business (i.e. do you plan to raise money or pursue future product expansions?)

      • real_marcfawzi a day ago ago

        Thanks for your curiosity and interest. I need to update the Roadmap because it omits the fact that we already trained the model on synthetic data generated from 100% AI enactments (no humans involved), which baked in the moral critical-thinking framework that we had landed on (through lots of thinking and feedback from people over the last two years). What Phase 4 is about is doing it with human-generated data, where our community members participate in the enactments, but to do that we have to have enough users to generate sufficient data for post-training. That way it'll be "community-led" training. Phase 3 is simply taking the responsive web app and putting it inside a WebkitView container, but there's no point in that until we have funding to scale the backend.

        Happy to jump on Zoom and discuss the details. We have raised around $100K in "micro investments" via a SAFE at a relatively low valuation, and some of the micro-investors are also collaborators on the technical and/or vision/community-building side. Syd and I have put a lot more than that into it, but that was the cost of all the experimenting and learning. Microsoft also funded us with $25K in Azure credits, which we used to partially cover the cost of the initial post-training.

        Happy to chat offline: marc.fawzi on gmail