• 35 Posts
  • 2.86K Comments
Joined 1 year ago
Cake day: March 22, 2024


  • There is a nugget of ‘truth’ here:

    https://csl.noaa.gov/news/2023/390_1107.html

    I can’t find my good source on this, but there are very real proposals to seed the Arctic or Antarctic with aerosols to stem a runaway greenhouse effect.

    It’s horrific. It would basically rain down sulfuric acid onto the terrain; even worse than it sounds. But it would only cost billions, not the trillions of other geoengineering schemes I’ve seen.

    …And the worst part is it’s arctic/climate researchers proposing this. They intimately know exactly how awful it would be, which shows how desperate they are to even publish such a thing.

    But I can totally understand how a layman (maybe vaguely familiar with chemtrail conspiracies) would come across this and be appalled, and how conservative influencers pounce on it because they can’t help themselves.

    Thanks to people like MTG, geoengineering efforts will never even be considered. :(


    TL;DR Scientists really are proposing truly horrific geoengineering schemes, “injecting chemicals into the atmosphere” out of airplanes. But it’s because of how desperate they are to head off something apocalyptic, and it’s not even close to being implemented. They’re just theories and plans.











  • brucethemoose@lemmy.world to Science Memes@mander.xyz · Jupiter
    edited 3 days ago

    The JunoCam page has raw shots from the actual device: https://www.msss.com/all_projects/junocam.php

    Caption of another:

    Multiple images taken with the JunoCam instrument on three separate orbits were combined to show all areas in daylight, enhanced color, and stereographic projection.

    In other words, the images you see are heavily processed composites…

    Dare I say, “AI enhanced,” as they sometimes do use ML algorithms for astronomy. Though ones designed for scientific usefulness, of course, and mostly for pattern identification in bulk data AFAIK.






  • What about ‘edge enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training data? How big does the model get before it becomes ‘generative’?

    What about a deinterlacer network that’s been trained on other interlaced footage?

    My point is there is an infinitely fine gradient, through time, between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative, and most people probably don’t even know they’re working in the background.
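    For context, the non-generative end of that gradient can be sketched in a few lines. Classic bilinear upscaling only ever takes a weighted average of the four nearest input pixels, so it cannot “invent” detail the way a trained model can. This is a minimal illustrative sketch (the function name and grayscale list-of-lists representation are my own, not from any particular library):

    ```python
    # Minimal sketch of classic bilinear upscaling: every output pixel is a
    # weighted average of its four nearest input pixels -- pure interpolation,
    # no learned or "painted in" detail.
    def bilinear_upscale(img, new_w, new_h):
        h, w = len(img), len(img[0])
        out = []
        for y in range(new_h):
            # Map the output row back into input coordinates.
            fy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            y0 = int(fy)
            y1 = min(y0 + 1, h - 1)
            wy = fy - y0
            row = []
            for x in range(new_w):
                fx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
                x0 = int(fx)
                x1 = min(x0 + 1, w - 1)
                wx = fx - x0
                # Blend horizontally along the top and bottom rows,
                # then blend those two results vertically.
                top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
                bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
                row.append(top * (1 - wy) + bot * wy)
            out.append(row)
        return out
    ```

    Compare that with an ML upscaler: instead of a fixed averaging formula, the output is whatever the network’s weights produce, which is exactly where the “is this generative?” line gets blurry.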


  • brucethemoose@lemmy.world to 196@lemmy.blahaj.zone · Camp Rule
    edited 4 days ago

    that’s a weird hill to die on, to be honest.

    Welcome to Lemmy (and Reddit).

    Makes me wonder how many memes are “tainted” with old-school ML from before generative AI was common vernacular, like edge enhancement, translation and such.

    A lot? What’s the threshold before it’s considered bad?