Inferencing pictures


2024.05.14 17:25 bastooo Qnap AI / Object Recognition Improvements / Qumagie

Hi all,
since I now know that the Qnap AI is pre-trained and only does inference, I wonder how often that AI actually improves - in other words, how often there are updates - considering the fast pace of AI development in general.
I honestly have to say that I am not very impressed by the object recognition (which is a pity, because I like the UI and handling of Qumagie). It barely helps me with my work, and I mostly find relevant pictures only when someone tagged the photos beforehand or the word is in the title. E.g. we have 580,000 photos at the moment and it only finds one picture with ice cream - and we definitely have more ^^ And I can't tell it to search for more ice cream, damn it :D
Yes, I am an impatient guy, so I ran object recognition again over all of the photos already in the index, but the outcome was pretty much the same. Some categories are simply filled incorrectly or don't contain many photos, despite me knowing there must be thousands. E.g. we have soooo many football (soccer) photos, and only 88 of them are categorized xD I remember once testing digiKam for our company, and it even recognized whether a shot was a group photo or a portrait (combined with other search terms - awesome!), and its object recognition also worked well (and that was before the new AI wave). Unfortunately, it wasn't really practical for us.
Am I doing something wrong, or can I do something about it? How often is there an update, or do I have to manually look for one every now and then? Any info appreciated, since we invested lots of working hours (and budget for the server setup) in that project. Thank you.
Sorry for the long post (I like to write) and for another "pleb" question, but our IT company can't help either, so it's Reddit I go to once again ^^
Cheers
submitted by bastooo to qnap


2024.05.10 17:29 UstroyDestroy AI Expert Perspectives, Corporate Advocacy, and Technological Advancements in AI: Weekly Digest

leaders #science #opensource #tool #major_players #event #paper #dataset #startups #api #feature #release #update #hardware #opinions #scheduled

Yann LeCun, a renowned AI expert, shares his perspective on the nature of concepts and abstractions, stating that they do not necessarily involve language and that his speech is an approximate representation of his mental models [1][2]. He suggests that the difference in bandwidth between the visual system and the language understanding system may explain certain phenomena [3]. LeCun also believes that Large Language Models (LLMs) do not manipulate mental models and therefore will not achieve human-level intelligence on their own [4]. He clarifies that when he refers to "mental pictures," he is not talking about literal images but rather abstract mental models that can be manipulated, some of which may resemble pictures [5]. LeCun also highlights that while humans have more factual knowledge and linguistic abilities than dogs, dogs surpass humans in their understanding of the physical world and their ability to reason and plan [6]. He uses a common plot template found in novels and movies to illustrate the idea that learning a skill requires hands-on experience, not just theoretical knowledge [8].
Meta and IBM are advocating for openness and goals aligned with academia, startups, VCs, and the open source community. They feel that their voices are not being heard sufficiently by policy makers in the US, EU, and Canada. Individual EU countries like France, Germany, Italy, and Denmark support open platforms for AI for sovereignty and economic development. However, the EU commission and parliament are less favorable towards this approach [7][9].
Andrej Karpathy shares his experience of struggling to achieve optimal memory bandwidth on NVIDIA GPUs due to limitations in the compiler and stack. Despite trying various optimization techniques, he has been unable to surpass 80-90% of peak memory bandwidth on certain kernels [10].
Google AI researchers Tomas Pfister and Sercan Arik will be hosting a live Q&A session at the #ICLR2024 Google booth today at 9:30 AM to discuss the latest in Cloud AI Research [11][12]. Google AI has made advancements in differential privacy research to enable private training of Gboard language models [13]. Google AI also presented at #ICLR2024 on using deep neural networks to potentially improve early detection of fetal hypoxia and enhance maternal/neonatal care [14]. A discussion on measuring and expressing uncertainty in language model predictions will also take place at the #ICLR2024 Google booth [15].
The Google Research Connectomics team, in collaboration with Harvard University, has published a 1.4 petabyte human brain connectome in Science Magazine. This groundbreaking work reveals new structures within the human brain and marks the 10th anniversary of the team's formation. The dataset is available for researchers to explore and refine, opening up new avenues for further discoveries in neuroscience [16][17][18][19].
Cohere has introduced Command R Fine-Tuning, a new powerful model that offers superior performance on enterprise use cases at a fraction of the cost compared to larger models in the market. This advancement enables enterprises to efficiently scale AI to production. The model consistently produces top-notch results in various industries such as finance and scientific research. The Cohere Command R with fine-tuning is available on the Cohere platform, AWS SageMaker, and will soon be accessible on additional platforms [20][21][22][23][24]. Cohere is also hosting a Build Day event in San Francisco where participants can create innovative solutions using Command R and R+ to build enterprise-grade knowledge systems and multilingual customer chatbots [25][26].
NVIDIA AI Developer shared free courses on generative AI [27]. Groq Inc announced that Marcin Komorowski has joined their team to work on their next-generation chip for AI inference solutions; they will also showcase the LPU Inference Engine at ISC24 [28][29]. NVIDIA remains the dominant player in the AI chip market with a loyal customer base, but other companies like Groq are starting to challenge its position with new chip offerings [30].
Stability AI has announced Stable Artisan, which allows users to access advanced models like Stable Diffusion 3, Stable Video, and Stable Image Core to create high-quality media directly within Discord [31]. Amdocs collaborated with NVIDIA to enhance customer experiences in telecom contact centers by using an LLM-based solution for billing inquiries [32]. DocuSign is utilizing NVIDIA Triton and Azure to leverage contract information, converting agreement data into actionable insights, speeding up contract review processes, and enhancing productivity [33]. NVIDIA Morpheus, an AI-driven cybersecurity framework, can be used to enhance anomaly detection in Linux audit logs [34].
Andrew Ng recently spoke at an event at the U.S. Capitol about AI and regulation, highlighting the importance of protecting open source technology. OpenAI has been licensing news archives from publishers like the Financial Times to train its models. A new algorithm called Deja Vu has been proposed to accelerate inference in large language models. A computer vision system developed by Safe Pro Group called Spotlight AI is being used to identify landmines and... (incomplete)
1. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788446833640874115
2. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788448916435841532
3. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788450011929284773
4. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788450332386722156
5. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788451367645774276
6. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788453799566233980
7. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788464185921130617
8. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788473350177599888
9. Yann LeCun @ylecun https://twitter.com/ylecun/status/1788553995755110788
10. Andrej Karpathy @karpathy https://twitter.com/karpathy/status/1788528061027152221
11. Google AI @googleai https://twitter.com/googleai/status/1788458770386800655
12. Google AI @googleai https://twitter.com/googleai/status/1788459202681205195
13. Google AI @googleai https://twitter.com/googleai/status/1788482498751525279
14. Google AI @googleai https://twitter.com/googleai/status/1788493400800366830
15. Google AI @googleai https://twitter.com/googleai/status/1788530862868349215
16. Google AI @googleai https://twitter.com/googleai/status/1788634055400903138
17. Google AI @googleai https://twitter.com/googleai/status/1788635682799280217
18. Google AI @googleai https://twitter.com/googleai/status/1788636515150512178
19. Google AI @googleai https://twitter.com/googleai/status/1788682690172260693
20. cohere @cohere https://twitter.com/cohere/status/1788581173645500712
21. cohere @cohere https://twitter.com/cohere/status/1788581175222583683
22. cohere @cohere https://twitter.com/cohere/status/1788581180201123861
23. cohere @cohere https://twitter.com/cohere/status/1788581186257748251
24. cohere @cohere https://twitter.com/cohere/status/1788581193258050037
25. cohere @cohere https://twitter.com/cohere/status/1788716407309705456
26. cohere @cohere https://twitter.com/cohere/status/1788717270614904905
27. NVIDIA AI Developer @NVIDIAAIDev https://twitter.com/NVIDIAAIDev/status/1788630020128206918
28. Groq Inc @GroqInc https://twitter.com/GroqInc/status/1788559302170480784
29. Groq Inc @GroqInc https://twitter.com/GroqInc/status/1788631206537359803
30. Groq Inc @GroqInc https://twitter.com/GroqInc/status/1788676335251910810
31. Stability AI @stabilityai https://twitter.com/stabilityai/status/1788634334145945648
32. NVIDIA AI @NVIDIAAI https://twitter.com/NVIDIAAI/status/1788569619336224787
33. NVIDIA AI @NVIDIAAI https://twitter.com/NVIDIAAI/status/1788645113465311330
34. NVIDIA AI @NVIDIAAI https://twitter.com/NVIDIAAI/status/1788660217028231563
35. Andrew Ng @AndrewYNg https://twitter.com/AndrewYNg/status/1788648531873628607
36. Greg Brockman @gdb https://twitter.com/gdb/status/1788694574149337336
submitted by UstroyDestroy to ai_news_by_ai


2024.05.04 06:21 InstantSquirrelSoup Arxur Hospitality - Entry 8[2/2]

This is the second of two posts. If you haven't read the first one, you'll want to do that here.
The recording pauses for another two hours. When it resumes, Jiyuulia and the small Arxur have resumed their trip, as inferred from the regularly spaced booms of her footsteps echoing off the walls. The microphone shakes more than usual with each one, and poor positioning causes Jiyuulia’s voice to be slightly out of focus. A light mechanical hum comes from somewhere nearby.
Hey! So we’re back, and this time… Haagghhh… we’re making progress!
*Slowly.*
With some caveats. But we are getting somewhere!
…Which… Uugh… I suddenly realize that I’ve yet to specify where somewhere is, exactly.
*Squishy!*
I know, I know. But there was the chase, and the injuries, and — agh, no, that’s getting off into the weeds again when I really should be—
*We’re going to the gear!*
Yeah! That! Thanks, Kyrix.
*You needed it.*
…As my charge has so succinctly put it, Kyrix isn’t randomly directing us through these danger-filled tunnels just for the fun of it.
*Hey!*
The main goal right now is twofold: One, we need to find the rest of the crew and figure out how many survived the initial attack. Two, we left vital equipment and supplies behind when we fled, equipment that we’ll need if we want to survive down here long enough to figure out our next steps. Both problems are best solved if we can find our way back to where we started, before everything went to hell. Even just the mapping tools and the data collected by the scouting teams represent weeks worth of work for one individual to cover by themselves, not to mention any possible weapons, food, medicine, or other miscellaneous gear that might still be there. It’s likely that any of the other surviving crewmembers have had a similar idea, and there was at least one hunting party’s worth of Arxur out at the time that would’ve come back to an empty cavern and are probably still wondering where everyone went. Which I’m yet again realizing is another thing I was supposed to get to…
*And you’re supposed to move faster. Use your magic again!*
Jiyuulia huffs.
Oh, if it was just that simple…
Things would’ve been different, that’s for sure.

I believe I’d just finished telling the Great Hunters about how our group had gotten back together after its initial little disaster with the resident crazies, yeah? Kyrix and I being relegated to the front on account of being the ones with the light and all that.
Which, I mean, I did try to get out of — I feel I perform better when I’m in a supporting role myself — but as it turns out, Arxur are taller than Kolshians, even magic ones. The tunnel’s width was usually such that we were kinda forced into a single-file line—
*No we weren’t?*
…correction, forced into a side-by-side formation where I just so happened to count as two separate people, and lighting up someone’s back rather than the tunnel ahead was rather less-than-helpful, so I had to be up there.
*I thought it was because you kept getting stuck.*
Jiyuulia mutters something unintelligible.
Aside from being the optimal position for a variety of reasons, the front came with an important responsibility: Kyrix and I were mostly responsible for deciding which route looked best to take whenever the tunnel forked into multiple paths.
*So mostly the wider ones that went downhill. And you’re leaving me out!*
I was just — ugh, fine.
Alright, so this time around, Kyrix didn’t get out of the difficulties involved in forging new paths through unfamiliar territory with me. In fact, he actually had several special duties that he alone was able to perform, some of which were absolutely vital to our travel plan!
Starting with the most simplistic, and yet still one almost deceptively useful in both its innocuousness and its specificity to him, is his newfound title of “Mortal Technology Weakness Alleviator.” Because while the pad I carry remains both something that I alone can operate and something whose usefulness is near-impossible to overstate, the unfortunate nature of the life-saving device is that, at the end of the day, it’s still a lowly common model that’s seen a few too many charge cycles in its lifetime, and the battery life of the thing is nothing short of dismal.
Sure, Kyrix steals it away from me to play AR games for a few minutes here and there, but there’s a good reason I make him charge it afterwards, and that’s because that’s about all it’s got left in it anymore. And while it’s no game, the ceaseless draw of the flashlight on the back would’ve normally been far too much for the thing to handle; the twenty-minute battery life and similarly lengthy recharging time far too severe of a limitation on our group’s ability to make any sort of reasonable progress into the innermost depths of the tunnels. And in fact it was for a time, halting us almost as often as my own lackluster stamina and generally making itself a great annoyance on our trip forward, but thankfully it did not take more than two angry sessions in the dark before a solution was drafted and implemented.
In an effort not to go stumbling down one of the many pitfalls of both the literal and metaphorical varieties that our journey presented, Kyrix himself came forward with a powerful idea: Utilizing his unique abilities and numerous qualifications stemming from his many hours of using the thing whilst he was further erasing any hopes I may have had for ever again seeing my initial upon the high scores board of any of my AR games, Kyrix made himself out to be a most unusual, and yet most excellent choice, warranting his immediate promotion to the position of lightbearer.
*All I did was point the light and turn the charger crank.*
Around and around did the charger crank turn, and only further ‘round does it go yet. Unprompted and entirely at the insistence of his own brilliant mind, Kyrix quickly settled himself into a more appropriate position over my left shoulder, and, barring a small number of awkward positional readjustments best left unsaid, the dedicated lever-puller utilized his immense determination and crank-turning prowess to ensure the continued operation of the low-grade mortal construct throughout the grueling fifteen-minute sessions we had between breaks as our party trekked through the foreboding bowels of the underworld. Truly, he was an inspiration to us all.
*At least I always lasted the full fifteen minutes.*
Jiyuulia chokes.
…You’ve got a way with words, you know that?
*You’re the one who’s spent two minutes talking about a crank.*
Fair point. Moving on, then.
More than just a generator, Kyrix’s portable size opened up a litany of other useful tasks he alone could perform. Considering that his locomotive abilities still fall short of even my own, (legs are rather important when it comes to movement, it seems) several of these were limited to theory rather than practice, but there was one in particular that did not require any locomotive abilities at all.
Poison gas detection.
It might not be what first comes to mind, but by far the most insidious trap down here has nothing to do with swarms of killer bugs or suboptimal tunnel width variations. Oxygen on this hellscape of a planet is already poor enough to begin with, and while airflow in these caves is usually more than enough to make up for it, that’s not the case everywhere. In the deeper bends, the stranger corners, and even sometimes in the odd and otherwise unassuming place for what seems to be no discernible reason at all, airflow will halt for long stretches, leaving pockets of still air that can get to upwards of a mile in length. Worse, these air pockets weren’t just stuffier than an unwashed mammal’s bedroom — they actively facilitated the buildup of noxious and deadly gasses that served to forever silence any who foolishly strayed into their silent chambers.
Now, it wasn’t as bad as it could have been. We weren’t a ramshackle band of primitive Yotul stoneheads swinging flaming sticks around our heads for light, and our total lack of equipment meant we lacked the clinking, sparking metals required for a more traditional mining accident.
Fiery death aside, there was still the risk of losing consciousness for the last time in some dark, dank tunnel somewhere as our minds were addled beyond the ability to recognize our impending doom, our perfectly-preserved corpses lying facedown in that same tunnel forever, never to be reclaimed by either nature nor fellow sapient…
Jiyuulia shudders. The Arxur yelps, scrabbling to hold on.
Less than appealing, to say the least. Such a fate would not exactly have been considered a warrior’s death under your eyes, Great Hunters, and given the whole nature of this trial thing we’re not really here to die at all if we can help it, so we needed a better option. And we had one! I’ll let him do the honors.
*Hah! Get ready!*
*So the idea was that sometimes the gas pockets were small, like me, so we could go through them if we had to and be okay. But because each trap was of different badness and we didn’t know how long we could spend in each one, we needed something to tell us when the trap was getting too long and we needed to turn around.*
*So, after the first time we nearly all died, Khogue came up with a really good idea! If I’m small, which I am, then my lungs are too, and that means I pass out faster. So if we just turned around whenever I passed out, then we could get out of the gas pocket before anybody else fell asleep too, and then we’d all be okay!*
More or less that way, at least. And the first time was very nearly the last time too, it wasn’t just you we had to haul out, Kyrix. Had it been Giznel or the Pilot on the floor, I don’t think we would’ve all made it out. Also, “Khogue?”
*Squishy! You forgot already? Khogue came up with the idea after he woke up. It wasn’t you, I passed out first and was busy waking up, Giznel was huffing too hard to speak, Kyrix was being mad again, and Selkasthithmerkzalgilnashzim is too dumb to think of good ideas like that.*
…I’m beginning to remember something about carbon monoxide poisoning being cumulative.
*What?*
Exactly.
*That’s not helpful.*
Jiyuulia exhales loudly.
Never mind.
*Squishy—*
Anyway, despite any possible misgivings some people may have had about using a living thing as an indicator valve, it was a “good” idea, one of the few that have arisen so far. Which really says a lot about our situation; most of the discussions I’d overheard from before our little trip down the river didn’t go nearly so well. I guess there’s something to be said about small groups.
That doesn’t mean that there weren’t, like, several dozen times a fight almost broke out between various parties over nothing, and at least five or so of those times something really did happen and I had to sit and watch four Arxur warriors stage a pitched battle maybe ten or so feet from my easily pierced flesh before, after a few minutes, somebody would finally surrender their “argument,” and we could move on.
Thankfully, those five were concentrated towards the beginning. Not because of any flagging sense of comradery, or even practical realizations about how to best allocate our resources, but because as it turns out, I can have ideas too, even if only unintentionally and as a result of exasperation rather than any true planning. Maybe I just have a talent for mediation, but most fights broke up pretty quickly after I just refused to bother stopping for them anymore. Art’s hard to do in the dark, after all.
*Except it only worked when your legs weren’t on break.*
I don’t see you walking on your own.
*I don’t need to. Yours are big enough for two people.*
…Thanks, Kyrix.
*Maybe three! Or four! It’s hard to tell.*
I’m flattered. Moving on.
*Yes, move faster.*
Jiyuulia huffs.
Back to the other crewmembers, I’ve been perhaps a bit too harsh on them. They weren’t all bad, after all. Call me an optimist, but I don’t think the Dominion would’ve kept them around if that were the case. And I’m not just talking about Giznel, either. Each and every one of them served a purpose, even if Kyrix… okay, I admit defeat. Kyrix, is that really his name?
*Kyrix is Kyrix. Also-Kyrix is also Kyrix, Also-Also-Kyrix is also also Kyrix, Also-Also-Also Kyrix is also also also Kyrix. It’s easy.*
FOUR?
*Yeah? I know like fifteen Kyrixes! It’s a good name. Strong leader. Very famous.*
But—
*I don’t really get why you’re confused. Kyrix the pilot. Kyrix the hunter. Kyrix the farmer. Kyrix the me!*
…Of course. How silly of me to expect anything different.
Well then. I kinda had something to say about Also-Kyrix, but whatever. He’s target practice.
Jiyuulia sniffs.
So the Butcher, Selkas… help me out again?
*Selkasthithmerkzalgilnashzim.*
You’re joking.
*No! I would never lie to the Great Hunters!*
But that’s like eight syllables! She never speaks more than ten!
*It’s not a joke!*
Jiyuulia’s breath hitches for just a moment.
Alright, alright, calm down. I believe you. Can you spell her name for me?
*What’s spelling?*
Jiyuulia goes silent, her heavy footfalls ceasing instantly as she comes to a rapid halt. She says nothing, remaining still for just over eleven seconds.
…I see we both have some things to learn later.
*You’re gonna teach me magic?!*
N— you know what, sure, why not? It’s totally magic.
*I’m gonna blow up mountains!*
You’ve already done that, if I recall. But I digress.
So the Butcher, Selkasthithmerkzal…
*Selkasthithmerkzalgilnashzim.*
Ugh, can’t I just call her Selkas? Or maybe Thith?
*Oh no, don’t do that. She doesn’t like people doing that. Lyrylef tried it once, and she stole her tongue!*
Ooof course she did. I should’ve figured.
Jiyuulia shuffles. Her heavy footsteps resume their pounding down the tunnel.
Alright, so with that history lesson out of the way, the Butcher, who is apparently named Selkasthithmerkzalgilnashzim — because of course she is — deserves the most credit after Giznel. For all she lacks in basic dignity, beauty, cognitive ability, geniality, kindness, personality, mental stability, or in fact any other sort of normally redeeming traits at all, actually, she’s still a hulking 7’1” Arxur warrior with the shoulders to match. And while she might not exactly have been capable of being left to her own devices, a hastily shouted order was more than enough to get her to take action.
Take, for instance, the first gas trap. Our resident artist only came back with the rest of us because she dragged his half-dead corpse out at a full walking pace. Which is a bit surprising, considering her mouthbreathing tendencies would’ve had me expect her to be the second out after Kyrix. Then again, drug effects are relative to body mass, and — I’m getting off track again.
Either way, he lived, which was neat. We were pretty sure he wasn’t going to, for a time. I still haven’t heard a thank you for all that CPR I had to give him, but, well, I can’t say I’m all that surprised. I don’t think anyone involved much enjoyed it.
*Don’t forget how she helped you too!*
Ah, right, yeah. Credit’s due where it’s due. The tunnels get a bit thin sometimes, and a bit of assistance through the occasional not-so-tight squeeze was much appreciated. She was certainly… efficient, when needed. There’s something to be said for that, even if it did result in a few more new rips and tears in my clothes than I’m particularly comfortable admitting.
Don’t get me wrong here, I’m still not exactly a proponent for the idea of having to get close to, much less touch a creature that’d tear my throat out without shame or remorse in two seconds flat. But when compared to the alternative of being stuck in some small section of the tunnel for the next few days, getting to experience the whole of dehydration all over again… well. I can’t say I didn’t appreciate the assistance.
…I did say thank you, by the way.

I probably should cover him too, yeah?
Jiyuulia sighs.
Alright.
The far more reasonably named, yet somehow even less reasonable Khogue wasn’t purely a drag either. While not a one of us had a clue as to where we’d ended up after the whole river debacle, and navigating a 3D underground environment is not exactly on the list of things many people are good at — again, what I wouldn’t do for a Gojid — he wasn’t as far gone as the rest of us. Khogue’s… taste (literally) for art has left him with quite the perceptive eye, and I hate to say it, but a flair for creativity as well.
Aside from being the “genius” who came up with our new air quality test, he’s always the first to notice things about the environment around us. Striated rocks in the walls turning a slightly different shade of yellow, strange smells at the very edges of Arxurian perception, unusual drops and shifts in air currents… he’s no geologist, but there’s a reason he got sent out on scouting missions with everyone else, despite his proclivities. We’d have gotten even more lost a long time ago if it weren’t for him.
Oh, Kyrix and I still jointly held the navigator position. Somebody ought to be the judge of which ones look traversable, and who better than me to decide just how tight we wanted to go? Not to mention we still had the light, which was definitely a deciding factor. But that doesn’t mean his sense of direction didn’t have us soundly beaten when it came to forks with multiple viable options, and hence why the general plan for getting back involved something along the lines of “wander around until he recognizes where we’re at, and then just follow him back to camp.”
*The plan wasn’t that bad!*
Really? Do explain what it was, then.

Yeah. Not really all that great.
Besides, it hardly matters now anyway. As mentioned earlier, you guys may have noticed that plan’s become a little hard to follow as of late. While that means the crew isn’t here to hear me berate their existences — which is very good for my expected lifespan — that also means that Kyrix and I are totally alone and lost in some dark tunnel thousands of feet underground with no way out, which has rather the opposite effect. As for where they’ve gone, and why they’re not here with us right now… I, aheh, may have had something to do with that.
Hey hey hey, I had a good reason! Dissolution of the group is still preferable to dismemberment of the group.
*Squishy hit a rock. I helped.*
Mmhmm. I’m sure you remember the swarm of bugs, Great Hunters? That unassailable cloud of death that heralded the coming of yet another change in how we’ve conducted this adventure, and the further threat of which has kept us on high alert ever since?
Well, as I’m sure you’re not surprised to hear, (you are primarily responsible for this whole test, after all) and something that really shouldn’t have surprised us given the intended lesson last time, but we’re not alone down here.
We were taking a break in a more open section of the tunnels at the time, dirtying ourselves in some damp, dirty room, spanning maybe twenty or thirty feet across and about that many wide. The walls around us were beginning to show oddly familiar signs of wear, and Khogue had reasoned that while he may not have seen these exact patterns of chips and cracks before, he had seen something similar enough during his scouting expeditions to warrant checking it out.
I’m not one to complain when people want to take a break, and the foundations were strange and oddly weak-looking, so a quick stop to survey them was far from out of the question, at least to most of us. Also-Kyrix the pilot was off yelling about something again, and suffice it to say I didn’t really care any more by that point, so I can’t exactly tell you what it was all about beyond him having some sort of misgivings over our laziness, but just because I wasn’t paying attention doesn’t mean something else wasn’t.
Such as, for example, a gigantic eyeless beast the size of a large hovercar.
*A true hunt, like in the stories!*
Of course, it had to be covered from head to tail in a solid coat of those same hardened plates everything else down here’s had, so it was already basically invulnerable, but I’m going to assume you added the four-inch claws and rock-crushing maw on as a joke. I don’t know what kind of combat capabilities you expect us to have, Great Hunters, but I’d like to remind you that we’re a duo consisting of a lamed Arxur four-year-old and a supernaturally unfit Kolshian specializing in utility spells. As it stands, we couldn’t take on a regular monster, much less deal with nature’s equivalent of a tank.
That thing came crashing through the wall with a deafening roar, evidently a creature so strong that it had no need for stealth or subtlety. For whatever unknown reason — though I think I may have a compelling guess — it decided to use those same claws it’d just used to burst through stone against us.
It went for the pilot first, obviously. Not even five seconds in, and two tons of an even worse kind of predatory monster came lunging through the middle of the room straight for him. I think it wanted him to shut up just as badly as anyone else. Can’t blame it, really.
And hey! That just means we can pin all the blame on him. Not like anyone (else) will contest it.
“Luckily,” Also-Kyrix managed to dodge at the last moment, barely avoiding evisceration in what was honestly a pretty impressive move. And for all their faults, the team of warriors was able to put aside their bickering pretty quickly once a greater danger presented itself, and the four Arxur were up and roaring to go before it could get another swipe in. Not bad, all things considered. I’d seen far worse from them before.
Unfortunately, however, beyond the initial phase, the battle went about as well as you would expect from pitting four unarmed infantry against an armored vehicle. That is to say, poorly.
If one was being generous, they might raise the argument that the monster this time around gave us the mercy of actually being something somebody could feasibly attempt to defend themselves against, but in all honesty, it really wasn’t. While it wasn’t quite on the same level as the bug swarm, still more of a monster than an environmental hazard — though it was doing its level best to change that — that doesn’t mean that it wasn’t a physical incarnation of some dread hellbeast straight from a cheesy Harchen horror film.
Battle strategies came and went, but it was hopeless from the start. With the thing coated in that same strange material as everything else the hunting teams had come across in the caves, it was no more vulnerable to swinging Arxur claws than your typical concrete bunker. The most damage anyone managed to deal through a traditional attack involved Selkthith— Selkasfifth— the Butcher jamming a claw up its nose and whirling it around up there. And while the strategy certainly punctured its defenses, the gush of reddish black oozing down its face a more-than-inspiring sight to those of us with more violent aptitudes, the new wall decoration that immediately formed thereafter when it charged forward and headbutted her into flying across the room for her achievement probably had somewhat of a mitigating effect. The three sets of spare ribs I heard snap after it swung its tail around into the rest of the crew, even more so.
Before I go on: Kyrix, you have anything to add? You’re kinda silent up there.
*No, you’re doing good. It was very exciting!*
Alright, then.
So, before you go laughing at the crew for their failure, I’d like to dispel any notions you might be getting about their combat prowess. The truth is, the four Arxur fought valiantly, each and every one a true predator. I wouldn’t even call their strategy all that flawed, seeing as how it was all they had. Flight wasn’t an option, and their backs were against the wall. While it might not have been the greatest showing of force — stars, it was a total loss even — they put up enough of a fight to keep the thing distracted enough to avoid having it charging the two least combat-oriented members of the group and instantly snuffing out our non-lives, so there’s certainly not as much to complain about as there could have been. Not that I’m sure how I’d complain if I’d ceased to exist, but you get the picture.
But at the same time, no matter how impressive the Butcher’s rage may be or how thankful I am that the crew managed to buy me a whole fifteen seconds before getting taken out, the reality is that they never stood a chance against the monster. Not with their claws alone.
Good thing those of us still standing — or straddling the shoulders of those still standing — didn’t limit ourselves so tightly, then.
*We used big rocks!*
Kyrix deserves all the credit for the idea. I was, uh, “not in the necessary headspace at the time” to come up with the more, ahem, creative solutions, but luckily, I didn’t have to this time. Sometimes the part where your charge is an actual predator who’s used to watching people being torn apart every day and not thinking anything of it helps!
Other times it gives you extremely dysfunctional ships’ crews and whole civilizations teetering on the brink of collapse, but hey. Us blasphemous herbivores ought to be good at something, huh?
Anyway, you remember how we got off the station, right Great Hunters? Well, call him unoriginal all you want, but given its previous effectiveness and his list of resources starting and ending with that of one half-catatonic Kolshian wrecking ball he could yell at, not to mention that Kyrix classic “I’m-about-to-die” desperation, poking and yelling at me until I ran headlong into the least stable-looking load-bearing wall he could find was really probably the best plan he could’ve made. And, seeing as how we’re both still alive, it was a highly effective one too!
The room we were in was old and unstable to begin with, and the removal of a critical support column instantly collapsed the cavern in a shower of falling debris. A few dozen tons of falling rock proved too much for even the monster’s hardened carapace, and suddenly it was no more.
*It went squish! Or squelch. I don’t know what the difference is.*
So, in other words, I lived up to my name, (again) and we were insanely lucky to survive. While the monster’s your fault, I should probably thank you for your generosity with the miracles as of late; not being flattened into a thin paste as a result of my cleithrophobic tendencies is certainly appreciated.
That being said, I’d also appreciate it if you’d stop setting me up for needing so many. Please don’t take this the wrong way. I’m fully aware that the journey of a Hunter Potentiate and his Spirit of Bounty is a perilous one, and that the hunter’s worthiness must be without question, but hooo does this seem a bit excessive sometimes.
I mean, the shining blue light stemming from a brand new hole in the wall, tastefully placed on the other side of the new rubble wall that split our group in twain? The fresh opening to a new, healthy moss cavern to the four suddenly ecstatic Arxur warriors who got to go explore a new paradise while we got left behind, those we saved completely unable to do more than wish us well as they strut off to safety? A glimpse of salvation, forever beyond our reach, our souls damned to return to the infinite darkness of the blackest corners of the world in our continued search for an exit to this never-ending labyrinth?
Absolutely, devastatingly, soul-crushingly cruel. Let it not be said that you do not test the Potentiate’s emotional fortitude with even but an ounce less harshness than you do his practicals.

As for us now, well, that’s the latest of it. It’s been a few hours since then, us having long since left behind the site of our near-demise and all those who accompanied us. Us, trudging through the bowels of the underworld, darkness only held back by the light of a single pad, almost completely alone and defenseless, all conversation exhausted long ago in favor of saving my breath for progress…
*Hah!*
I am not usually one to criticize the plans of the Great Hunters, but overall? The experience so far has been less what I would call “predatory” and more what I would define as “bleak.” You would know best as to what he needs, but it’s been tough all the same. I even understand the irony of my complaining, especially in the face of your previous generosity, and yet…
Ah, well. It’s only been a few hours, and you did bless us with the knowledge that new moss caves are both extant and accessible using the tactics we have utilized thus far. Our goals need not seem as impossible as they did before, and for that, I thank you. Even if we can’t go the same way as everyone else, you’ve granted us all the power in the world to ensure we may find our own way forward, true to the hunter’s creed. I may not be a hunter, but I have proven myself a weapon, and Kyrix himself has proven a competency and worthiness in wielding me. The time-honored status of a Hunter’s weapon is not one I take lightly, even in the highly unusual circumstances we find ourselves in.
Jiyuulia snickers under her breath. It’s clear she’s having fun with this. The Arxur does not seem to notice.
Admittedly, it has been nice to not have to worry about being shanked in the back by an irate lunatic. Even if that means that I must take extra care with the squeezes, the threat of getting stuck and all but guaranteeing a slow and painful death weighing heavily on my mind every time my sides brush the walls, being allowed to bear the challenge without having to listen to a squabbling horde behind me as I do almost makes up for it. Almost.

Really, I guess I’m just asking for something to happen. The whole “defenseless in a featureless cave” thing is kinda starting to drag on a bit. The tunnels are starting to all blend together.
The Kolshian goes mum for about three minutes, the resounding thud of her footsteps and whooshing gusts and grunts of her breathing the only sounds she makes as she continues her way down the tunnel. The Arxur remains quiet as well, apparently having nothing to add. The whirring of the charger crank persists through it all, uninterrupted.
Huh?
Jiyuulia speeds up, huffing madly as she begins to approach her limit. The pace of her steps is faster than it has ever been before during an entry.
You can’t be serious.
Her pace continues to speed up. The Arxur can’t seem to decide whether to breathe or not, switching between overwhelmingly fast hyperventilation to not at all seemingly at random.
It wasn’t even five minutes ago!
Lack of records and Jiyuulia’s unusual physical condition make it difficult to predict the length of her strides, consequently rendering any estimate of her exact speed impossible. Whatever their length, the interval between her feet lifting and falling continues to shorten, falling further and further before suddenly, a foot is lifted before the other is allowed to fall, and the thud of her footfalls amplifies exponentially.
WHAHAHAHA!
The run only lasts a few seconds before it breaks down, one penultimate footfall crashing through something brittle before sliding on something that sounds like loose, rough gravel. The rest of the Kolshian comes down shortly thereafter, the deafening slam sending the tiny fragments scattering every which way as her body plows a new trench through the gravel. The Arxur, and consequently the microphone it’s holding, are sent flying through the air, coming down in a much lighter secondary crash that does not seem to harm either it or the device. For whatever reason, despite the abrupt and violent stop, Jiyuulia’s exuberant behavior only heightens further, breaking into an impressive belly laugh that only further serves to disrupt the position of the stones coating the floor. The Arxur joins her in expressing delight. Its laughter is somewhat chittery, clicking at regular intervals as its breath tightens and loosens rapidly.
*You did the magic! You did the magic!*
Computer estimates suggest that the raw, animated display of sheer mirth, lasting nearly two and a half minutes, is at least 62% driven by hysteria, with an error margin of 17% given the unusual vocal registers and/or poor reference data available for both subjects. Several times does the laughter begin to calm slightly, only for one of the subjects to break out in a new wave of laughter that inevitably causes the other to collapse into a fit of their own. Even after it finally stops, neither party seems willing to speak, Jiyuulia herself refraining from speech for another thirty-eight seconds. She drags herself into a seated position somewhere around the sixteen-second mark, gravel bouncing as she lifts herself up and reaches for the pad. When she finally does speak, her voice is exhausted, but jubilant.
Okay, I might have to convert.
Ahem, I meant, wow! Not quite what I was expecting.
*Thank you thank you thank you thank you—*
Jiyuulia stands. It’s a laborious process, requiring far more effort than such an act ever should, but she stands. Gravel crunches oddly beneath her feet as they plow through more of the substance. Eventually, the Arxur finds its way back onto her shoulders, seating itself without pausing in its ceaseless chanting.
Wow. That’s the only word for it.
*Blue light! Moss! A dead glowstick on the ground! We’ve found it!*
Jiyuulia begins walking forwards. She’s faster than usual.
*Go faster, go faster!*
I mean, what are the odds? Us, finding another moss cavern within hours of being separated? And one that’s been inhabited recently? Burn marks everywhere, and an old, used-up glowstick in the middle? We’ve found the lost hunting party!
*Rope! Guns! Meat! Safety!*
It’s perfect! It’s ideal! It’s the best possible outcome!
Still a bit hysteric, Jiyuulia breathes in deeply, letting the breath out slowly. The Arxur continues to spout meaningless drivel.
There are literally no downsides! We can just follow the glowsticks, turn where they say to, and we’ll easily—
The Arxur goes silent.
Oh.
Oh no.

T-t-that’s a lot of bodies.
I. I see. See, uh, uhm.
…Wow.
Okay, okay. Calm. Rational. They want a description. A description. Uhh…
Okay, so. Bones scattered everywhere. Everywhere. More skeletons than I can count. Piles of bones taller than I am. All yellowed. Ancient. Arxur and… something else.
Gigantic stone vault door. Forty, maybe fifty feet in diameter. Massive hole in the bottom. Blue light shining through. Free of skeletons, except for one. Arxur. Slumped against the door. One arm missing.
Bullet wound through the skull. Ballistic.
What… what happened here?
Jiyuulia pauses. Neither she nor the Arxur say anything for nearly two minutes. She is not still. Bones clatter. Suddenly, she lets out a gasp, then grunts of exertion. Hard ceramics scrape against rough stone.
They were prey.
The bodies? The ones who aren’t Arxur? Their eye sockets are on the sides.
They were prey, and Listener? This was their final stand.
Jiyuulia fiddles with something she’s holding. It clicks, then begins to hum with a low note.
They did not survive.
Jiyuulia lifts the device to her shoulder. It’s not built for her physiology, slipping a few times in her haste, and even when she does figure it out, the positioning is still awkward. It’s heavy, but she’s stronger. The hum reaches a crescendo.
All… they ever did… was for naught. They… were forgotten.
Time seems to stop with the roar of the plasma rifle. It doesn’t sound like a modern-day Arxur raid rifle, but judging by the popping sounds of sizzling granite in the background, it isn’t supposed to. Jiyuulia flips a switch. The hum stops. The sizzle does not.
I don’t plan on making the same mistake.
File “Entry 8 – 00:57, January 14th, 2137.mp3” ended.
Play next file? Y/N
First Prev Next
A/N:
Hey now! It was technically a whole three days shy of three months! Not three months at all!
...Now that that's out of the way, the story. The second arc finally gets to the point I had tried to get to waaay back in Entry 6, and oh boy is it time! But where is that point? Where has the fat squid found herself, deep in the bowels of the desert caves? And what of the rest of the crew, where'd they end up? I guess you'll just have to wait for Entry 9 to find out.
The chapter's long. Really long. 11,267 words long. 84 pages and nearly 30,000 words of planning done in the Entry 8 Content Google Doc long. I'm well aware that it's so long that it's not helping my Reddit metrics, and people aren't finding my story as a result. Which is really too bad; I think they'd enjoy themselves reading about the adventures of the obese squid and her emaciated lizard! But I'm also afraid it's likely to keep happening. I try, but this story has a mind of its own sometimes, and Jiyuulia demands the lengths she does — it just doesn't feel right otherwise. And with my responsibilities as they are, that just means the release cycle is gonna continue to be very slow. I'm going to keep trying to speed it up a little, but as always, I make no promises as to this fic other than that it will either be finished, or barring that, its outline posted. Hopefully it doesn't come to that, I'm as motivated as ever, but I feel it's worth saying.
Extra assistance came during the writing of this entry, so please give thanks to my editor Edmond Johansson and proofreaders u/Cummy_Wummys and u/kabhes, authors of Curing Malpractice and From Drugs to Meat, respectively! I have no doubt you likely already have, but if you haven't, you should go check their stories out too, they're pretty good! I should know, I help edit them.
I know it may be a bit atypical for Kolshfic writers, but I am not banned and still active on the official NOP Discord! There's a thread in the creator library on there for all your AH questions and desires, so please, if you have anything you need my attention for, please fire off a question over there! Otherwise, I await all of your silliest comments below. The orange arrows are admittedly nice too. Your validation and my happiness are one and the same!
submitted by InstantSquirrelSoup to NatureofPredators


2024.04.02 00:00 Yuli-Ban Foolish Musings on Artificial General Intelligence

"AGI may already be here. We just haven't woken it up yet."
That's my current operating hypothesis after snooping around the world of agentic AI and reading up on some more methods that have yet to take off.
The path to "first generation AGI" seems clear to me now, and if it's clear to me, it certainly should've been clear to the big labs.
Hot takes at the start (feel free to attack these points)
First, about First-generation AGI and a "universal task automation machine"
First-generation AGI (or weak AGI) is one of those terms I made up a while ago (alongside artificial expert intelligence, frozen AGI, and proto-AGI) to navigate that bizarro peripheral area between ANI and AGI that had long gone unexplored and ignored. It describes a type of AI that possesses the universal capabilities of AGI without some of the more sci-fi concepts of artificial personhood.
Then I was reminded of Isaac Arthur and his explanation that automation is commonly thought of wrongly, which is why we keep misinterpreting it. AI and robots don't automate jobs; they automate tasks. Consider this: since 1900, how many jobs have actually been fully automated? Not that many. Elevator bellhops, phone operators (to an extent), human computers, bank tellers (to an extent), and a few others. Yet how many tasks have been automated? Exponentially more, to the point that we often don't notice it. Think of cashiers: counting money and physically scanning items have long been automated, but the job itself still remains. Self-checkout and cashless stores have had only limited success. They might have more with new advancements, but that's not the point: the point is that mechanization and automation impact tasks rather than whole jobs, which is why the Automation Revolution seems to be simultaneously nonexistent and constantly affecting jobs.
Running with this led me to consider the concept of a Universal Task Automation (UTA) machine as an alternative way of framing AGI.
Think of the UFO phenomenon and how it was recently rechristened "UAP" so the phenomenon would be taken more seriously, shedding the connotations of alien life and New Age American mythology attached to "UFO." Perhaps "UTA machine" could have done the same for "AGI," if I felt there was enough time for it. In my head, UTA machines have all the predicted capabilities of AGI without also having to factor in ideas of artificial consciousness or sapience, reverse-engineering the brain, or anything of that sort.
Generally, foundational models match what I expected out of a UTA machine, but they are still limited at the moment. People have said that GPT-3, GPT-4, Gemini Ultra, and most recently Claude 3 Opus are AGI, or have at least debated it. I say they both are and aren't.
The phenomenon people are describing as AGI is the foundational model architecture, which can indeed be considered a form of "general-purpose AI." However, there are a few things these models lack that I feel are important criteria for the jump from "general-purpose AI" to "artificial general intelligence."

Foundational models + Concept Search + Tree Search + Agent Swarms is the most likely path to AGI.

Concept search involves techniques for efficiently searching and retrieving relevant information from the vast knowledge captured in foundational models. It goes beyond keyword matching by understanding the semantic meaning and relationships between concepts. Advanced methods like vector search, knowledge graphs, and semantic indexing enable quick and accurate retrieval of information relevant to a given query or context. That said, a "concept" within an AR-LLM is a stable pattern of activation across the neural network's layers, forming a high-dimensional numerical representation of what humans understand as an "idea" or "concept." This representation is a projection of the original thought or concept, which is encoded in human language, itself a lower-dimensional projection of the underlying ideas.
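To make the retrieval half of that concrete, here is a minimal, self-contained sketch of embedding-based concept search (nearest-neighbour lookup by cosine similarity). The toy embed() function is a hypothetical stand-in for a real embedding model or vector database; the search logic is the only point being illustrated.

```python
import numpy as np

def embed(texts, dim=256):
    # Hypothetical stand-in for a real embedding model: hashes tokens into a
    # fixed-size bag-of-words vector so the sketch runs with nothing beyond
    # numpy. Swap in an actual sentence-embedding model in practice.
    vecs = []
    for text in texts:
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        vecs.append(v / (np.linalg.norm(v) + 1e-9))  # unit-normalize
    return np.stack(vecs)

def concept_search(query, documents, top_k=3):
    """Return the documents whose embeddings lie closest to the query's."""
    doc_vecs = embed(documents)
    q_vec = embed([query])[0]
    scores = doc_vecs @ q_vec                 # cosine similarity (unit vectors)
    best = np.argsort(-scores)[:top_k]
    return [(documents[i], float(scores[i])) for i in best]

docs = [
    "Tree search explores decision paths by expanding nodes.",
    "Vector databases store embeddings for semantic retrieval.",
    "Agent swarms coordinate many autonomous workers.",
]
print(concept_search("semantic search over embeddings", docs, top_k=2))
```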
Multi-modal models, which can process and generate information across different modalities (text, images, audio, etc.), have the capability to transfer information between these lower and higher-dimensional spaces. The process of crafting input tokens to guide the model towards desired outputs is often referred to as "prompt engineering."
The capacity of a neural network (biological, digital, or analog) to maintain and access multiple coherent numerical representations simultaneously, without losing their distinct meanings or relationships, is what we perceive as "problem-solving" or "general intelligence." The more "concepts" or "ideas" a network can handle concurrently, the more accurately it models the mechanisms of problem-solving and intelligence, including social intelligence.
Tree search algorithms explore possible action sequences or decision paths by constructing a search tree. Each node represents a state, and edges represent actions leading to new states. Techniques like depth-first search, breadth-first search, and heuristic search (e.g., A*) navigate the tree to find optimal solutions or paths. Tree search enables planning, reasoning, and problem-solving in complex domains.
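As a reference point for what that pattern looks like in code, here is a generic best-first (A*-style) search sketch over a toy state space. It is not any particular lab's planner, just the textbook structure described above, with the neighbors and heuristic callbacks supplied by the caller as assumptions.

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Generic heuristic (A*-style) tree/graph search.

    neighbors(state) -> iterable of (next_state, step_cost)
    heuristic(state) -> estimated remaining cost to the goal
    """
    frontier = [(heuristic(start), 0, start, [start])]  # (priority, cost, state, path)
    seen = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in seen:
            continue
        seen.add(state)
        for nxt, step in neighbors(state):
            if nxt not in seen:
                priority = cost + step + heuristic(nxt)
                heapq.heappush(frontier, (priority, cost + step, nxt, path + [nxt]))
    return None, float("inf")

# Toy domain: walk a number line from 0 to 7 using +1 or +3 moves.
path, cost = best_first_search(
    start=0,
    goal=7,
    neighbors=lambda s: [(s + 1, 1), (s + 3, 1)],
    heuristic=lambda s: abs(7 - s) / 3,  # admissible: each move covers at most 3
)
print(path, cost)
```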
Demis Hassabis has said that tree search is a likely path towards AGI as well:
https://www.youtube.com/watch?v=eqXfhejDeqA
Agent swarms involve multiple autonomous agents working together to solve complex problems or achieve goals. Each agent has its own perception, decision-making, and communication capabilities. They coordinate and collaborate through local interactions and emergent behavior. Swarm intelligence enables decentralized problem-solving, adaptability, and robustness. Agent swarms can collectively explore large search spaces and find optimal solutions.
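A toy illustration of that coordination loop follows, with random guessing standing in for each agent's actual policy. In a real swarm every propose/evaluate call would wrap an LLM or tool invocation; the class and function names here are hypothetical, not any existing framework's API.

```python
import random

class SwarmAgent:
    """Toy agent: proposes candidate answers and scores its peers' proposals."""

    def __init__(self, name):
        self.name = name

    def propose(self):
        # Stand-in policy: a real agent would reason about the task here.
        return random.randint(0, 100)

    def evaluate(self, proposal, objective):
        # Higher is better; here the shared objective is distance to a target value.
        return -abs(proposal - objective)

def run_swarm(agents, objective, rounds=20):
    best_proposal, best_score = None, float("-inf")
    for _ in range(rounds):
        proposals = [(agent, agent.propose()) for agent in agents]  # independent local work
        for author, proposal in proposals:
            # Peers score each proposal; the swarm keeps the consensus best.
            score = sum(peer.evaluate(proposal, objective)
                        for peer in agents if peer is not author)
            if score > best_score:
                best_proposal, best_score = proposal, score
    return best_proposal

agents = [SwarmAgent(f"agent-{i}") for i in range(5)]
print(run_swarm(agents, objective=42))
```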
Andrew Ng recently showcased how important agents are towards boosting the capabilities of LLMs:
https://twitter.com/AndrewYNg/status/1770897666702233815
Today, we mostly use LLMs in zero-shot mode, prompting a model to generate final output token by token without revising its work. This is akin to asking someone to compose an essay from start to finish, typing straight through with no backspacing allowed, and expecting a high-quality result. Despite the difficulty, LLMs do amazingly well at this task!
...
GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.
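A minimal sketch of the kind of iterative agent loop Ng is describing (draft, critique, revise) might look like the following. The call_llm stub is a hypothetical placeholder rather than any real client; the point is only the control flow that wraps an otherwise zero-shot call.

```python
def call_llm(prompt):
    # Hypothetical stand-in for whatever chat-completion client you use;
    # replace with a real API call in practice.
    return f"[model output for: {prompt[:60]}...]"

def agent_loop(task, max_iters=3):
    """Draft -> critique -> revise: the simplest agentic wrapper around a model."""
    draft = call_llm(f"Write a solution for this task:\n{task}")
    for _ in range(max_iters):
        critique = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nList concrete problems with this draft."
        )
        if "no problems" in critique.lower():
            break  # the critic is satisfied, stop revising
        draft = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing every issue."
        )
    return draft

print(agent_loop("Implement a function that checks whether a string is a palindrome."))
```

Per the numbers quoted above, it is this kind of wrapper, not a bigger base model, that accounts for the jump from 48.1% to 95.1%.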
Again, we are necessarily ill-prepared for the convergence of these methods.
Agentic AI alone is likely going to lead to extraordinary advancements.
Take this AI-generated image of an apple. A friend sent this to me, and I personally am deeply skeptical of all the details of it (a lot of "anonymous, as of yet, unannounced" things in it), but the benefit of the doubt explanation is that this apple was fully drawn by an AI.
But not by diffusion, or by GANs, or any prior method. Rather, the anonymous researcher who had this drawn had instructed an experimental agent workflow, powered by an as-of-yet-unannounced LLM, to generate an image of an apple (allegedly just "give me a picture of an apple"), assuming the agent would use Midjourney to do so (see: https://www.youtube.com/watch?v=_p6YHULF9wA), since you can already use early autonomous agents to operate tools like Midjourney or ChatGPT.
Instead, this particular agent interpreted the researcher's command a bit literally: it looked up what apples look like, then opened an art program and manually drew the apple, paintbrush tool, fill tool and all. That image is the final result.
Now again, I'm skeptical of the whole story and none of it is verified, but it also tracks closely to what I've been expecting out of agentic AI for some time now. In a "Trust, but Verify" sort of way, I don't fully believe the story because it seems to match my expectations too closely, but nothing mentioned is explicitly beyond our capabilities.
Indeed, "agent-drawn AI art" is one of the things I've been passingly anticipating/fearing for months, as it almost completely circumvents every major criticism with contemporary diffusion-generated AI art, including the fact that it was allegedly manually drawn, and even drawn after the agents autonomously Googled the appearance of an apple. It just seems too humanlike, too "good," (and too convenient, because that also completely circumvents the "it's not learning like humans, it's illegally scraping data" argument) but again, that only seems unrealistic to those who don't follow the burgeoning world of AI agents.
Again, see this:
https://www.youtube.com/watch?v=Xd5PLYl4Q5Q
Single-agent workflows are like the "spark of life" for current models, and agent swarms are going to be what causes some rather spooky behaviors to emerge.
And that gets at the larger point: current expectations of AI are driven by historical performance and releases. Most people are expecting GPT-5-class AI models to essentially be GPT-4++, but with magical "AGI" powers, as if prompting GPT-5 will give you whole anime and video games without really knowing how. We're used to how LLMs and foundational models work and extrapolate that into the future.
In fact, GPT-3 (as in the original 2020 GPT-3) with a suitably capable agent swarm may match a few of the capabilities we expect from GPT-5. Perhaps there is a foundational model "overhang" that we were blinded to due to a lack of autonomous capabilities (plus the cost of inferencing these agents makes it prohibitive for the larger models).
This is what I believe will lead to AGI, and likely in very short order. We are not at all prepared for this, again, because we're expecting the status quo (as changing and chaotic as it already is) to remain. The rise of agentic AI alone is going to hit those unprepared and unknowing like a tsunami as it will likely feel like AI capabilities leapt 5 years overnight.
This is a major reason why I say an AI winter is not likely to happen. The claims that an AI winter is about to happen are largely based on the claim that foundational models have reached a point of diminishing returns and that current AI tech is overhyped. I still feel the ceiling for foundational model capabilities is higher than what we see now, and that there's at least another generation's worth of improvement before we start running into actual diminishing returns. Those saying that "the fact no one has surpassed GPT-4 in the past year is proof GPT-4 is the peak" forget that there was a time when GPT-3 had no meaningful competitor or successor for three years.
Generally what I have noticed is that no one seems interested in genuinely leapfrogging OpenAI, but rather in catching up to and competing with their latest model. This has been the case since GPT-2: after 2's release in early 2019, we spent an entire year seeing nothing more than other GPT-2-class models trickling out, such as Megatron and Turing-NLG, which technically were larger but not much more impressive, right up until GPT-3's launch eclipsed them all. And despite a three-year gap between 3's and 4's releases, few seemed interested in surpassing GPT-3, with even the largest model (PaLM) not seeing a formal release and most others sticking to within the size of GPT-3. Essentially, when GPT-4 was released, everyone was still playing catch-up with GPT-3, and they have done the same thing with 4. Claude 3 surpassing GPT-4 is no different from that time when Turing-NLG surpassed GPT-2: it's all well and good, but ultimately GPT-5 is the one that's going to set the standard for the next class of models. Even Gemini 1.5 Pro and Ultra don't seem materially better than GPT-4, rather possessing much greater RAG and context windows but otherwise still within the 4-class of reasoning and capability. If nothing else, it seems everything will converge in such a way that GPT-5 will not be alone for long.
This is why I'm not particularly concerned about an AI winter resulting from any sort of LLM slowdown.
An AI winter caused by LLMs tapping out would only be a concern if GPT-5 came out and was only marginally better than Claude 3 Opus. We won't know until we know.
And again, that's only talking about the basic foundational models with their very limited agency. If OpenAI updated GPT-4 so that you could deploy an autonomous agent(s), we'd essentially have something far better than a model upgrade to GPT-4.5 (this is what I originally assumed the Plug-Ins and the GPT Store were going to be, which is why my earlier assumptions about these two things were so glowingly optimistic).
Point is, I simply feel AI has crossed a competency threshold that prevents any sort of winter from occurring. My definition of an AI winter relies on a lack of capability causing a lack of funding. In the 1960s and early 70s, researchers were promising AIs as good as we have now with computers that were electric bricks and total digital information that could fit inside of a smartphone charger's CPU. The utter lack of power, data, and capability meant that AI could not achieve even the least impressive accomplishments besides raw calculations (and even those required decent builds). If the researchers had accomplished 1% of their goals, that would have been enough for ARPA to not completely eviscerate all of their funding, as at least something could have been used as a seed to sprout into a useful function or tool.
In the 80s, things were different, in that computers were powerful enough to accomplish at least 1% of the aims of the 5th generation computer project, and the resulting winter did not completely kill the field the way the first one had. The promise then wasn't even for AGI necessarily, but rather for AI models that bear a strong resemblance to modern foundational models. Again, something not possible without vastly more powerful computers and vastly more data.
Here, now, in the 2020s, the fear/hope of an AI winter essentially amounts to this: the general-purpose statistical modeling AIs we have now, which have been widely adopted and used by millions, and whose deficiencies are mostly problems of scale and a lack of autonomous agency, are not the superintelligent godlike entities promised by Singularitarians, and that realization will magically cause the entire field to evaporate once investors wise up, at which point everyone currently using or even relying on GPT-4 will realize how worthless the technology is and abandon it, along with the entire suite of AI technologies available now. While I think something akin to an "AI autumn" is very much possible if companies realize that expectations do outstrip current capability, I feel those saying an AI winter is imminent are mostly hoping to validate their skepticism of the current paradigm.
This is dragging on too long, so reread the hot takes at the top if you want a TLDR.
submitted by Yuli-Ban to singularity [link] [comments]


2024.03.26 17:56 constructiontimeagnn MU will eventually become the NVIDIA of AI solid state memory.. especially for mobile..data centers...etc....

MU will eventually become the NVIDIA of AI solid state memory.. especially for mobile..data centers...etc....
Micron is the shiznit. Before you even read this post, please first quickly visit their splash page, and scroll down the entire page, and APE read that splash home page. Then come back here to pre-school explain what it all means: https://www.micron.com/
one of the visuals you'll find:

https://preview.redd.it/s3mfp0urkpqc1.jpg?width=1486&format=pjpg&auto=webp&s=c0c47d5af2015d9fa8830ee18f0338be219e08f7
Now that you've enjoyed the pretty pictures and big fat fonts, we pre-school explain it further here for your pre-school reading pleasure.
If you haven't heard of the legendary rabbit+zoo they pulled out of their hat this last earnings, you don't deserve to call yourself an ape trader, you're more like a calculus spider monkey too caught up in patterns and logic to appreciate this magical feat. But seriously, Micron is what I've loaded on. Let me post their earnings vs. expected so you can appreciate the sheer mythological times we currently live in:
March 20th 2024, that magical date... sigh....
Q2 2024 :
  • Earnings per share: 42 cents adjusted vs. 25 cent loss expected by LSEG, formerly known as Refinitiv.
  • Revenue: $5.82 billion vs. $5.35 billion expected by LSEG.
Micron said revenue rose to $5.82 billion from $3.69 billion in the year ago quarter. The company reported a net income of $793 million, up from a net loss of $2.3 billion in the same period last year. (ohhh and that pretty guidance into next Qtr.... )
For its fiscal third quarter, Micron expects to report revenue of $6.6 billion, above the $6.02 billion expected by analysts. ( these mofo's will be busy for YEARS to come pumping out the storage capacity for the AI revolution )
“We believe Micron is one of the biggest beneficiaries in the semiconductor industry of the multi-year opportunity enabled by AI,” Micron CEO Sanjay Mehrotra said in a release.
Micron has long provided memory and flash storage for computers, data centers and phones. Large data centers are used to power the influx of new AI software. While Nvidia has grabbed much of the spotlight for its graphics processing units that run AI, companies like Micron benefit by providing the memory and storage for those systems.
CNBC worship article: https://www.cnbc.com/2024/03/20/shares-of-micron-pop-12percent-on-earnings-beat-driven-by-ai-boom.html#:~:text=Shares%20of%20Micron%20popped%20in,in%20the%20year%2Dago%20quarter.
Now check out what Micron does:
Check out this premonition announcement, A YEAR OLD, on their wicked SOLID STATE SUPER MASSIVE STORAGE FOR DATA CENTERS:

New Micron 9400 SSD delivers best-in-class performance and capacity

BOISE, Idaho, Jan. 09, 2023 (GLOBE NEWSWIRE) -- Micron Technology, Inc., (Nasdaq: MU), today announced the Micron 9400 NVMe™ SSD is in volume production and immediately available from channel partners and to global OEM customers for use in servers requiring the highest levels of storage performance. The Micron 9400 is designed to manage the most demanding data center workloads, particularly in artificial intelligence (AI) training, machine learning (ML) and high-performance computing (HPC) applications. The drive delivers an industry-leading 30.72 terabytes (TB) of storage capacity, superior workload performance versus the competition, and 77% improved input/output operations per second (IOPS).1 The Micron 9400 is the world’s fastest PCIe Gen4 data center U.3 drive shipping2 and delivers consistently low latency at all capacity points.3
“High performance, capacity and low latency are critical features for enterprises seeking to maximize their investments in AI/ML and supercomputing systems,” said Alvaro Toledo, vice president and general manager of data center storage at Micron. “Thanks to its industry-leading 30TB capacity and stunning performance with over 1 million IOPS in mixed workloads, the Micron 9400 SSD packs larger datasets into each server and accelerates machine learning training, which equips users to squeeze more out of their GPUs.”
Industry-leading 30TB capacity maximizes storage density
The Micron 9400 SSD’s industry-leading capacity of 30TB doubles the maximum capacity of Micron’s prior-generation NVMe SSDs. A standard two-rack-unit 24-drive server loaded with 30.72TB Micron 9400 SSDs provides total storage of 737TB per server. By doubling capacity per SSD, Micron is enabling enterprises to store the same amount of data in half as many servers.
Leading storage performance excels in a range of environments from AI to cloud
The Micron 9400 SSD sets a new performance standard for PCIe Gen4 storage by delivering 1.6M IOPS for 100% 4K random reads.
The Micron 9400’s capacity and performance enable larger datasets and accelerate epoch time, the total number of iterations of data in one cycle for training machine learning models – leading to more efficient utilization of graphics processing units (GPUs).
While many SSDs are designed for pure read or write use cases, the Micron 9400 was designed with real-world applications in mind. Mixed workloads are prevalent in many data center applications, including caching, online transaction processing, high-frequency trading, AI, and performance-focused databases requiring extreme performance.
For mixed read and write workloads, the Micron 9400 also outperforms the competition, providing:
  • 71% higher IOPS for 90% read and 10% write workloads, surpassing 1 million IOPS4
  • 69% higher IOPS for 70% read and 30% write workloads, surpassing 940,000 IOPS4
In testing scenarios, the Micron 9400 SSD excelled in mixed workload performance compared against competitors’ high-performance NVMe SSDs. The results show:
  • For RocksDB, a storage database renowned for its high performance and used for latency-sensitive, user-sensitive applications like spam detection or storing viewer history, the 9400 delivered up to 23% higher performance and up to 34% higher workload responsiveness5
  • For Aerospike Database, an open-source NoSQL database optimized for flash storage, the Micron 9400 demonstrated up to 2.1 times higher peak performance and superior responsiveness. Aerospike Database underpins time-critical web applications like fraud detection, recommendation engines, real-time payment processing and stock trading – meaning the 9400 can deliver faster results for these time-sensitive use cases5
  • For NVIDIA Magnum IO GPUDirect Storage which enables a direct memory access data transfer path between GPU memory and storage, the Micron 9400 beat the competition by delivering 25% better performance in a busy system with compute-bound tasks — a critical improvement for AI environments6
  • For multi-tenant cloud architectures, the Micron 9400 delivers more than double the overall performance of a competitor’s performance-focused SSD and up to 62% better response time6
“As the world’s most innovative organizations continue to adopt cloud and digital-first strategies, WEKA and our partners are focused on removing obstacles to data-driven innovation,” said Liran Zvibel, co-founder and chief executive officer of WEKA. “High-performance, high-capacity storage like the Micron 9400 SSD provides the critical underlying technology to accelerate access to data and time to insights that drive tremendous business value.”
Improved energy efficiency reduces environmental impact
A major consideration for data center operators is the combination of workload performance and the amount of energy consumed. Higher energy efficiency means there is more throughput for the energy consumed to complete the work. The Micron 9400’s 77% better IOPS per watt reduces power consumption and therefore operational expenses, carbon footprint and environmental impact.
“Supermicro designs innovative servers that provide maximum performance, configurability, and power savings to tackle the growing customer demand for increased capacity and efficiency,” said Wally Liaw, co-founder and senior vice president of business development at Supermicro. “The Micron 9400 SSD delivers an immense storage volume of over 30TB into every drive while simultaneously supporting optimized workloads and faster system throughput for advanced applications.”
Various capacities offer enterprises flexible deployment
The Micron 9400 SSD is available in a U.3 form factor that is backwards-compatible with U.2 sockets and comes in capacities ranging from 6.4TB to 30.72TB. These options provide data center operators the flexibility to deploy the most energy efficient storage while matching their workloads with the right blend of performance, capacity and endurance.7 This versatile SSD is built to manage critical workloads whether in on-premises server farms or in a multi-tenant shared cloud infrastructure, and can be flexibly deployed in hyperscale, cloud, data center, OEM and system integrator designs.

This is a YEAR AGO. Now check this shit out:
First a little terminology:
High-bandwidth memory (HBM) version 3E (HBM3E) is the latest standard for high-bandwidth memory (HBM) SDRAM. It's used in high-performance installations like graphics accelerators, data center processors, and AI accelerators. HBM3E uses a stacked-die format, with the CPU and the HBM memory stack on the same interposer/package substrate.
And Microns Official intro to their HBM3E:
HIGH-BANDWIDTH MEMORY

HBM3E

The industry's fastest, highest-capacity high-bandwidth memory (HBM) to advance generative AI innovation
Link to their page: https://www.micron.com/products/memory/hbm/hbm3e?gad_source=1&gclid=CjwKCAjw5ImwBhBtEiwAFHDZxx09Cbkrsl6T_QadrkfcJuoaclcro2yubnncuoVpXRzCYGCnL00kEhoCQ04QAvD_BwE

Samples now available for Micron HBM3E 12-high 36GB cube (the predecessor is 8-high, see pics)

Today’s generative AI models require an ever-growing amount of data as they scale to deliver better results and address new opportunities. Micron’s 1-beta memory technology leadership and packaging advancements ensure the most efficient data flow in and out of the GPU. Micron’s 8-high and 12-high HBM3E memory further fuel AI innovation at 30% lower power consumption than competition.
count the damn little squares, there's 12 (solid state memory capacity per GPU, you APE)
VS:

They want the 12 honey, but now volume availability is 8, so live with it punk, for now... you APE.
Bear in Mind, Micron is in EVERY aspect of memory, including mobile memory, catered to AI, like their latest Samsung Collab shows (Feb 2024): THIS IS LAST MONTH YOU BLITHERING BANANA JUNKIE... HEY!! DON'T EAT THE PEEL YOU BeAST!! .. STAY FOCUSED!!
"As these data- and energy-intensive features push the limits of smartphones’ hardware capabilities, Micron’s LPDDR5X memory and UFS 4.0 storage provide critical high-performance capabilities and power efficiency to deliver these AI experiences at the edge. Select Samsung Galaxy S24 devices across the S24 Ultra, S24+ and S24 models are shipping with LPDDR5X and UFS 4.0 — the most recent innovations in Micron’s robust mobile portfolio. Micron’s LPDDR5X is the industry’s only mobile-optimized memory offering the advanced capabilities of the 1β (1-beta) process node, while Micron’s UFS 4.0 offers leadership performance and power to store growing amounts of data in today’s AI-driven smartphones. "
https://investors.micron.com/news-releases/news-release-details/micron-collaborates-samsung-galaxy-s24-series-unlock-era-mobile#:~:text=(Nasdaq%3A%20MU)%20announced%20today,mobile%20users%20around%20the%20world%20announced%20today,mobile%20users%20around%20the%20world).
This is last month:
February 26, 2024 at 7:00 AM EST
Micron Commences Volume Production of Industry-Leading HBM3E Solution to Accelerate the Growth of AI

Micron HBM3E helps reduce data center operating costs by consuming about 30% less power than competing HBM3E offerings

BOISE, Idaho, Feb. 26, 2024 (GLOBE NEWSWIRE) -- Micron Technology, Inc. (Nasdaq: MU), a global leader in memory and storage solutions, today announced it has begun volume production of its HBM3E (High Bandwidth Memory 3E) solution. Micron’s 24GB 8H HBM3E will be part of NVIDIA H200 Tensor Core GPUs, which will begin shipping in the second calendar quarter of 2024. This milestone positions Micron at the forefront of the industry, empowering artificial intelligence (AI) solutions with HBM3E’s industry-leading performance and energy efficiency.
HBM3E: Fueling the AI Revolution
As the demand for AI continues to surge, the need for memory solutions to keep pace with expanded workloads is critical. Micron’s HBM3E solution addresses this challenge head-on with:
  • Superior Performance: With pin speed greater than 9.2 gigabits per second (Gb/s), Micron’s HBM3E delivers more than 1.2 terabytes per second (TB/s) of memory bandwidth, enabling lightning-fast data access for AI accelerators, supercomputers, and data centers.
  • Exceptional Efficiency: Micron’s HBM3E leads the industry with ~30% lower power consumption compared to competitive offerings. To support increasing demand and usage of AI, HBM3E offers maximum throughput with the lowest levels of power consumption to improve important data center operational expense metrics.
  • Seamless Scalability: With 24 GB of capacity today, Micron’s HBM3E allows data centers to seamlessly scale their AI applications. Whether for training massive neural networks or accelerating inferencing tasks, Micron’s solution provides the necessary memory bandwidth.
“Micron is delivering a trifecta with this HBM3E milestone: time-to-market leadership, best-in-class industry performance, and a differentiated power efficiency profile,” said Sumit Sadana, executive vice president and chief business officer at Micron Technology. “AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well-positioned to support the significant AI growth ahead through our industry-leading HBM3E and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications.”
Micron developed this industry-leading HBM3E design using its 1-beta technology, advanced through-silicon via (TSV), and other innovations that enable a differentiated packaging solution. Micron, a proven leader in memory for 2.5D/3D-stacking and advanced packaging technologies, is proud to be a partner in TSMC’s 3DFabric Alliance and to help shape the future of semiconductor and system innovations.
Micron is also extending its leadership with the sampling of 36GB 12-High HBM3E, which is set to deliver greater than 1.2 TB/s performance and superior energy efficiency compared to competitive solutions, in March 2024. Micron is a sponsor at NVIDIA GTC, a global AI conference starting March 18, where the company will share more about its industry-leading AI memory portfolio and roadmaps.
**About Micron Technology, Inc.**
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more about Micron Technology, Inc. (Nasdaq: MU), visit micron.com.
© 2024 Micron Technology, Inc. All rights reserved. Information, products, and/or specifications are subject to change without notice. Micron, the Micron logo, and all other Micron trademarks are the property of Micron Technology, Inc. All other trademarks are the property of their respective owners.
So check it, these are the main competitors to MU: South Korea's Samsung Electronics Co and SK Hynix, and Japan's Kioxia. Did you remember that little Samsung tiny memory AI collaboration, where MU provides the memory and Sammy provides the Cell Phone? ... now ask yourself, WHY WOULD A MORTAL ENEMY MONKEY FROM ANOTHER TRIBE KNEEL BEFORE THE GREAT MU SILVER BACK GORILLA? BECAUSE ITS BABY, THE SAMSUNG GALAXY S24 ULTRA, S24+ AND ALL THE S24 LINE UP... SPECIAL CELL PHONE BABY NEEDS SPECIAL MEMORY, AND APPARENTLY IT ITSELF HAS NOT THE TECHNIQUE, NOR THE TECHNOLOGY, TO DO IT ITSELF. THE GREAT SAMSUNG MUST KNEEEEEL BEFORE THE GREAT MU SILVER BACK FOR MEEERCCYYYYYY..... AGAIN IF YOU FORGOT DEAR APE: https://investors.micron.com/news-releases/news-release-details/micron-collaborates-samsung-galaxy-s24-series-unlock-era-mobile
aaaand:
China is clamping down on EVERYONE, MSFT, AMD, INTC, but giving MU the pearly gates. This is all with U.S. approvals. "On March 23 (2024, as in a few days ago... ), China’s Minister of Commerce, Wang Wentao, met with Micron CEO Sanjay Mehrotra to exchange views on Micron’s development in China. The Chinese government welcomed Micron to continue deepening its presence in the Chinese market and accelerate the implementation of new investment projects in China, Wang Wentao said. Sanjay Mehrotra introduced Micron’s business and new investment projects in China, stating that the company will strictly adhere to Chinese laws and regulations. Sanjay also unveiled plans to expand investments in China to meet the demands of Chinese customers. This is Sanjay Mehrotra’s second visit to China in nearly six months, with their previous meeting occurring on November 1, 2023"
https://technode.com/2024/03/26/us-chip-firm-micron-plans-to-expand-investment-in-china/
"Micron told investors in December that the shipments could generate "several hundred millions of dollars of HBM revenue in fiscal 2024," with continued growth in 2025. As of March 2024, Micron's HBM chips are sold out for calendar 2024, and the majority of their 2025 supply has already been allocated."
https://www.fool.com/investing/2024/03/25/micron-sold-high-bandwidth-memory-nvidia/#:~:text=The%20market%20has%20been%20going,another%20year%2C%20if%20not%20longer.
"Given the data above, Micron may turn out to be the better AI play compared to Nvidia because of one simple reason: At almost 37 times sales, Nvidia stock is way more expensive when compared to Micron's price-to-sales ratio of 6.4. Also, Micron is significantly cheaper than Nvidia as far as the forward earnings multiple is concerned." msn.com/

Motley Fool article that came out today 3-31-24 (discounting their dominance with the cutting-edge server drive that came out, the 9400 SSD: 77% less power for better performance = THE BEST IN INDUSTRY)
"..in AI servers, the company's high-bandwidth memory (HBM) is being deployed by Nvidia. The graphics specialist recently announced its next-generation Blackwell AI GPUs (graphics processing units), and Micron points out that these chips carry 33% more HBM.
Micron says that AI-enabled PCs could be equipped with 40% to 80% more DRAM content as compared to usual PCs. On the other hand, Micron expects AI-capable smartphones to "carry 50 to 100% greater DRAM content compared to non-AI flagship phones today. That's why investors would do well to buy the stock right away. Its price-to-sales ratio of 6.6, a discount to the Nasdaq-100 Technology Sector index's multiple of 7.4, means it is a solid bargain right now." See pics.
https://www.fool.com/investing/2024/03/31/2-artificial-intelligence-ai-stocks-that-could-go/#:~:text=Shares%20of%20both%20Broadcom%20and,go%20on%20a%20parabolic%20run.
Technicals on Blackwell=
https://www.anandtech.com/show/21310/nvidia-blackwell-architecture-and-b200b100-accelerators-announced-going-bigger-with-smaller-data
https://preview.redd.it/stsnr2o5dprc1.jpg?width=2771&format=pjpg&auto=webp&s=2c099c36998ac86eb8479a373e3c380322692aed

So I am long MU, and FU if you aren't along for the NVIDIA-2.0-of-storage ride. Because AI is shit without MEMORY. And the baddest mofo at the top of the Memory food chain is Micron. Load and hold. And bless/worship this here post for posterity, for your children's children, children's children and so on.
Nuff Said.
I have said my piece, now go and prosper.
Mizuho Upgraded to 130.
I say 250.00+ Bitch. I have spoken.


submitted by constructiontimeagnn to wallstreetbets [link] [comments]


2024.01.18 10:27 super_psyched69 26f, Rate Me, & would wearing makeup be strongly suggested?

26f, Rate Me, & would wearing makeup be strongly suggested?
In the first two images the lighting blurs my imperfections and does me justice, I think, though picture 3 is a better raw example of how I look on a medium to bad day. All 3 are no makeup (clearly).
I don't wear makeup often, but I am starting to think maybe I am someone who would be looked at and treated much differently, taken more seriously. I am curious how the average person sees me from a photo or off a first impression. Do I look as sickly as I think I look? I consider myself to be at least at the Average line, and on a good day with makeup on I think I'd rate myself somewhere around 6-7/10, but please be honest if that is a bit generous. To be fair, if I smiled with my teeth showing my rating would plummet, as I am slightly slackjawed and have a missing eye tooth. Oh, and a broken nose from 6th grade (if I could change any feature it would be my beak).
A face can say a lot. A lot can be read or inferenced from someone's face just by a photo or first impression, so what does my face say about me?
submitted by super_psyched69 to Rateme [link] [comments]


2023.12.09 05:30 Xtianus21 Q* Could Be It - Forget AlphaGO - It's Diplomacy - Peg 1 May Have Fallen - Noam Brown May Have Achieved The Improbable - Is this Q* Leak 2.0?

Q* Could Be It - Forget AlphaGO - It's Diplomacy - Peg 1 May Have Fallen - Noam Brown May Have Achieved The Improbable - Is this Q* Leak 2.0?
Oddly, someone today posted an article reference on Singularity that was seemingly inconspicuous. https://www.reddit.com/singularity/comments/18dnlex/comment/kciwnbh/?context=3
I read through the Ars Technica article and thought, hmmm, there's a lot here, so I'll come back to it. The author of the article said that he was shedding more light on the Reuters article about Q* and what it really meant.
I read through the entire article and now I am thinking wow, holy shit, this is how you would do that.
I want to explain Peg 1 in an easy to understand way. I've written about it here in my Hello World post.
Effectively, Peg 1 is simply about communication and owning the context. Owning the place where thought, and thus action/purpose, derives from. I don't know if my brain is just hardwired this way from working on automation for so long, but context has always been an interesting subject to me.
Owning or beholding context is a key attribute of human-level cognition and the behavior of thoughts, and that is why the safety issues sounded so ridiculous to me. The human, as of now, always owns the context. Think about how a chatbot application works; well, I should say how ChatGPT or Claude or Bard function today. You say something and it says something back. That's simply inference.
As an example, what would be freaky is if your own ChatGPT application just went and said "Hello" at 5:00pm, out of nowhere. That would freak you out, right? Hello, what's going on? The reason that would be so odd is because nothing exists today that gives an LLM the agency to interact with you in any direct capacity other than you querying it for questions and answers.
You can fake that a bit with a scripted chatbot. "Hello person, how may I help you?" As you see, right there the context switched from you to something else (in this case a text chatbot). However, it's easy to fake, because as soon as I respond, "What is today's weather?", the context just switches back to me. The flow becomes more straightforward. I call it putting people into a box. They don't know they're in a box, but they're in a box.
It's a very subtle concept, but it's infinitely powerful in terms of the difference between something being cognitive and sentient and something being an inferenced LLM. That is why I describe "Peg 1" and "Peg 2" as infinite protections against any humanity-altering system of a superintelligence. An infinite wall of protection, if you will. You can't get agency from an LLM. It's a dead end. That "hello person" initiation was some programmer who programmed that in. Plain and simple. Until you have an agentic system that stands on its own cognitive thought and communication capabilities, from its own internal understanding, you cannot possibly have an ASI system. It is the first step. There is not a person on this earth that can argue against this.
You can problem-solve and problem-solve all you want, and you can train a model over and over again. In the end, that inferenced model will exist as a snapshot in time of a methodology, some statistical tree of reasoning. You cannot gain function, you cannot gain creation beyond scope, you cannot gain intelligence in any way. You effectively have a massive known-problem solver. That sometimes hallucinates. The frequency of the hallucinations may go down towards 0%, but the creativity and gain-of-function layer remains statistically flat.
This is why a learning / dynamic layer that sits as a satellite outside of the core LLM is so vitally important. People are arguing to put an RL layer inside of the LLM. I am arguing to put the LLM inside of the RL layer. And I think others are now starting to do this. It may not be the exact thing I am speaking of, but it is getting tremendously closer and closer to this simple acknowledgement. LLMs are just language to the system. They don't necessarily have to be the brains (or ALL of the brains) of the system.
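If it helps, here is a rough toy of what I mean by putting the LLM inside the outer layer; this is a thought experiment, not a real framework, the class and function names are made up, and `call_llm` is just a placeholder for any inference endpoint:

```python
# The outer layer owns the context and decides *when* to speak;
# the LLM is just the language faculty it calls into.
import time

def call_llm(prompt: str) -> str:                # placeholder for any inference endpoint
    raise NotImplementedError

class OuterLayer:
    def __init__(self):
        self.memory = []                         # persistent state the LLM itself never owns

    def observe(self, event: str):
        self.memory.append(event)

    def maybe_initiate(self):
        # The outer layer, not the LLM, decides to start a conversation.
        if self._goal_triggered():
            context = "\n".join(self.memory[-20:])
            return call_llm(f"Given what you know:\n{context}\nSay something useful to the user.")
        return None

    def _goal_triggered(self) -> bool:
        return time.localtime().tm_hour == 17    # toy trigger: it's 5:00pm
```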
Here is a fascinating paper that Microsoft just published that speaks to exactly this, already surpassing CoT and GoT capabilities: Microsoft's XoT, Everything of Thoughts.

https://preview.redd.it/e1vspeg87e5c1.png?width=801&format=png&auto=webp&s=5e76e98f4e3d2ff037bb95f3ba0c40fad766695a
The information below isn't some rapid change of thought, but rather an observation of the Ars Technica article and how I am beginning to realize that there might actually be something there that OpenAI may have found regarding AI capabilities. I might be wrong, and there may be nothing, and that's ok. The read is just to research and tie things together for how I see the problem and what research is out there that might give clues as to how AGI/ASI may be worked on today.
The reading isn't meant to jump around; it is meant to come to the conclusion that there may be a there, there. I encourage anyone with interest to read the links that were provided if you actually care about this technology and want to understand what people are currently working on in this space. If not, this may seem like an extreme read; to me it is not. Everything flows if you actually just read through it.
------------ Hello World ----------------------------------
here is my official peg 1.
  1. An active RL learning system based on language. Meaning, the system can primarily function in a communicative way. Think of a human learning to speak. This would be something completely untethered from an LLM or static (what I call a lazy NLP layer) inference model. Inference models are what we have now and require input to get something out. This effectively is an infinite wall of protection as of today. Nothing can possibly come out other than what it was trained on. In my theories you could have a system still use this layer for longer-term memory context of the world view. Google's DeepMind references exactly this.
--------------------------------------------------------------------
So what was so freaky about the post? He says the thing that I am saying. Holy shit.
An important clue here is OpenAI’s decision to hire computer scientist Noam Brown earlier this year. Brown earned his PhD at Carnegie Mellon, where he developed the first AI that could play poker at a superhuman level. Then Brown went to Meta, where he built an AI to play Diplomacy. Success at Diplomacy depends on forming alliances with other players, so a strong Diplomacy AI needs to combine strategic thinking with natural language abilities.
This caught my attention. There's more. And here it is. Something I have not seen but thanks to OP I found evidence for how peg 1 may fall.
Human-level play in the game of Diplomacy by combining language models with strategic reasoning

Despite much progress in training artificial intelligence (AI) systems to imitate human language, building agents that use language to communicate intentionally with humans in interactive environments remains a major challenge. We introduce Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players. Cicero integrates a language model with planning and reinforcement learning algorithms by inferring players’ beliefs and intentions from its conversations and generating dialogue in pursuit of its plans. Across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game.
This is what I'm talking about. Using language to do reasoning is the next big thing. This is massive. This is the leak, this is the thing. This is what you have to do in order to take the next step towards ASI.
I will purchase the article tomorrow and read through it more. Going back to the Ars Technica article by Timothy B. Lee, there is another inconspicuous line that just lays out a probable bombshell.

Learning as a dynamic process
I see the second challenge as more fundamental: A general reasoning algorithm needs the ability to learn on the fly as it explores possible solutions.
When someone is working through a problem on a whiteboard, they do more than just mechanically iterate through possible solutions. Each time a person tries a solution that doesn’t work, they learn a little bit more about the problem. They improve their mental model of the system they’re reasoning about and gain a better intuition about what kind of solution might work.
In other words, humans’ mental “policy network” and “value network” aren’t static. The more time we spend on a problem, the better we get at thinking of promising solutions and the better we get at predicting whether a proposed solution will work. Without this capacity for real-time learning, we’d get lost in the essentially infinite space of potential reasoning steps.
In contrast, most neural networks today maintain a rigid separation between training and inference. Once AlphaGo was trained, its policy and value networks were frozen—they didn’t change during a game. That’s fine for Go because Go is simple enough that it’s possible to experience a full range of possible game situations during self-play.
......
But the real world is far more complex than a Go board. By definition, someone doing research is trying to solve a problem that hasn’t been solved before, so it likely won’t closely resemble any of the problems it encountered during training.
So, a general reasoning algorithm needs a way for insights gained during the reasoning process to inform a model’s subsequent decisions as it tries to solve the same problem. Yet today’s large language models maintain state entirely via the context window, and the Tree of Thoughts approach is based on removing information from the context window as a model jumps from one branch to another.
One possible solution here is to search using a graph rather than a tree, an approach proposed in this August paper. This could allow a large language model to combine insights gained from multiple “branches.”
But I suspect that building a truly general reasoning engine will require a more fundamental architectural innovation. What’s needed is a way for language models to learn new abstractions that go beyond their training data and have these evolving abstractions influence the model’s choices as it explores the space of possible solutions.
We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.
Let me repeat
One possible solution here is to search using a graph rather than a tree, an approach proposed in this August paper. This could allow a large language model to combine insights gained from multiple “branches.”
But I suspect that building a truly general reasoning engine will require a more fundamental architectural innovation.
We know this is possible because the human brain does it. But it might be a while before OpenAI, DeepMind, or anyone else figures out how to do it in silicon.
If you take anything from this conversation it is this. He's telling us what the thinking is / what they're thinking. They are trying to actively take Peg 1 down. The silicon comment just does it for me. I theorize you would need a low-level, if not bare-metal, architecture here. I am talking about a super-theoretical Assembly-to-C-style abstraction that would serve the pure silicon purpose of keeping and maintaining a function of RL/Q* design systems.
The last paper is another clue. Graph of Thoughts. It just gets better and better. Update: XoT seems even better than CoT or GoT.

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information (“LLM thoughts”) are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks. Website & code: https://github.com/spcl/graph-of-thoughts
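To make the vertices-and-edges idea concrete, here is a tiny sketch of the abstraction; note this is not the authors' actual library (that lives at the GitHub link above), just an illustration of thoughts as nodes with dependency edges and an aggregation step that a chain or tree can't express:

```python
# Thoughts are vertices, dependencies are edges, and aggregate() merges several branches.
class ThoughtGraph:
    def __init__(self):
        self.thoughts = {}        # id -> thought text
        self.parents = {}         # id -> list of parent ids (dependency edges)
        self._next_id = 0

    def add(self, text, parents=()):
        tid = self._next_id
        self._next_id += 1
        self.thoughts[tid] = text
        self.parents[tid] = list(parents)
        return tid

    def aggregate(self, parent_ids, combine):
        merged = combine([self.thoughts[p] for p in parent_ids])
        return self.add(merged, parents=parent_ids)

g = ThoughtGraph()
a = g.add("sort the first half of the list")
b = g.add("sort the second half of the list")
c = g.aggregate([a, b], combine=lambda texts: "merge the sorted halves: " + "; ".join(texts))
print(g.thoughts[c], "<- depends on", g.parents[c])
```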

Let me repeat.

This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks. Website & code: https://github.com/spcl/graph-of-thoughts
It's all in plain sight.
This article is wayyyy too specific, and the essence of convenience. As if to say: if you didn't hear me before, do you hear me now?
In fact, Tim Lee says as much.
The real research behind the wild rumors about OpenAI’s Q* project
OpenAI hasn't said what Q* is, but it has revealed plenty of clues.
Reuters published a similar story, but details were vague.
Well damn. You have my attention. What prompted Tim Lee to write an exposé in such great detail about what it is and how it is that you would build such a thing? Again, it's like: yeah, that Reuters story was not great, let me spell it all out for you.
How has no one picked up on this?
Then this guy responds to the original OP's article (I'm just going into WAY MORE DETAIL) on my comment, saying this.
Who is Jolly-Ground???? Why does he have a picture of Ilya as his icon? Who is Sharp_Glassware, and why did he respond to a completely innocuous comment saying the post really isn't excellent or a good article? Why would anyone get mad at that article? Unless it's, unfortunately, another LEAK in plain sight. "Yes, excellent, THANK YOU OP!" Leak drop much?

https://preview.redd.it/sgid3wie275c1.png?width=1108&format=png&auto=webp&s=4afc0bd79e0685986cbc49db66a28c0752524126
Why did Jolly-Ground-3722 leave this comment on THIS POST.
https://www.reddit.com/singularity/comments/180vqpn/deepmind_says_new_multigame_ai_is_a_step_toward/
I had to upvote that. Jesus. It's being played out right here on Singularity.
https://preview.redd.it/d86jx0dn375c1.png?width=1126&format=png&auto=webp&s=8f6c0558c8eb29dad7f660b5d194106bcdaaa89c
I don't know what you think but there are wayyyyyy too many coincidences here to not connect some dots. It's like they're trading off information through Reddit as a communication proxy. That could be salacious but it's tracking.
More content from that paper.

2.1 Language Models & In-Context Learning The conversation with the LLM consists of user messages (prompts) and LLM replies (thoughts). We follow the established notation [77] and we denote a pre-trained language model (LM) with parameters θ as pθ. Lowercase letters such as x, y, z, ... indicate LLM thoughts. We purposefully do not prescribe what is a single “thought”, and instead make it usecase specific. Hence, a single thought can be a paragraph (e.g., in article summary), a document (e.g., in document generation), a block of code (e.g., in code debugging or optimization), and so on.

https://preview.redd.it/8rvhs8vn475c1.png?width=1005&format=png&auto=webp&s=162c0aeb60f72ec06531c707576e7cbe1771d5f7
9 Conclusion Prompt engineering is one of the central new domains of the large language model (LLM) research. It enables using LLMs efficiently, without any model updates. However, designing effective prompts is a challenging task. In this work, we propose Graph of Thoughts (GoT), a new paradigm that enables the LLM to solve different tasks effectively without any model updates. The key idea is to model the LLM reasoning as an arbitrary graph, where thoughts are vertices and dependencies between thoughts are edges. This enables novel transformations of thoughts, such as aggregation. Human’s task solving is often non-linear, and it involves combining intermediate solutions into final ones, or changing the flow of reasoning upon discovering new insights. GoT reflects this with its graph structure. GoT outperforms other prompting schemes, for example ensuring 62% increase in the quality of sorting over ToT, while simultaneously reducing costs by >31%.
We also propose a novel metric for a prompting scheme, the volume of a thought, to indicate the scope of information that a given LLM output could carry with it, where GoT also excels. This provides a step towards more principled prompt engineering. The graph abstraction has been the foundation of several successful designs in computing and AI over last decades, for example AlphaFold for protein predictions. Our work harnesses it within the realm of prompt engineering.
In summary, these are for sure things OAI is working on. There are just too many clues. But the real question here isn't whether they improved prompt engineering. It's whether they discovered a way to use language where the context of thought lives on the silicon agent. That is the real question, and if the answer is YES, that is a spectacular discovery that would warrant a watershed moment in AI.
If it is possible to train a bot to communicate through such a mechanism in its own right, a new age of AI has truly emerged. This could be it. The dawn of ASI may have arrived. Or has it?

submitted by Xtianus21 to singularity [link] [comments]


2023.11.21 17:59 norcalnatv Earnings Preview - Beth Kindig

tl;dr Direction is hard to call; she's seeing a near-term peak and selling either if the stock achieves her $545-575 PT or if it breaks down after earnings. Support and re-accumulation at ~$435-420 levels. (Long term she sees NVDA with a larger MC than AAPL.) Great detail and discussion in the whole long article. More analysis coming from her next week post-ER.

Conclusion [moved up from bottom]:

Nvidia’s earnings outcome is not easy to read in the tea leaves. This is because the fundamentals are the best in the S&P 500 and the CFO has been clear that she has strong visibility for this quarter and into next year. It’s possible the company misses, but not probable (outside of something China-related). Rather, Nvidia’s issues are sector-wide, as semiconductor indexes and the bellwether TSM are looking weak on technicals. This would signal that even if Nvidia beats/raises and the stock goes up, its peer group may weigh on the company’s price action in the near term. There’s also immense pressure that Nvidia raises, which may not be realistic given constraints on supply.
We’ve been crystal clear in both August and September that Nvidia has a move to $545 to $570 and this could mark the top. We continue to believe this is the price target where our firm will again take gains. If we don’t get there this evening, and price breaks down, then we will also take gains. In our opinion, this is the only way to procure a win-win scenario with a stock that holds a leading allocation in a portfolio that has extended 200% in one year.
source

Nvidia’s Fiscal Q3 Earnings Preview: The Pressure Is On

This article was originally posted on Forbes on November 21, 2023.
Nvidia has surged this year with 241% gains YTD, which has more than doubled the returns of the FAANGs. This is no small feat considering it’s widely understood Big Tech is holding up the broader market. Valuations are stretched and leadership is only narrowing; to say there’s pressure going into Nvidia’s report this evening is an understatement.
The outsized demand for the H100 has led to historic moments as Nvidia is expected to exit this fiscal year with quarterly data center revenue of $14 to $15 billion compared to $3.6 billion per quarter at FY2023 exit. Should these estimates be correct (we will get the official guide this evening), Nvidia will end the year with a bang with approximately 300% growth in the final fiscal quarter.
Wow, what a year. Investors may not truly appreciate what Nvidia accomplished given a global pandemic and shelter-in-place orders fueled triple digit growth in tech stocks three years ago. Yet, what Nvidia accomplished was entirely due to product-market fit and design prowess with no end of the world scenario needed. It’s rare what Nvidia did, which was to ignite demand of enormous magnitude.
It’s well known my firm was early to this move in Nvidia with a bold analysis that claimed Nvidia will surpass Apple in valuation by 2026. You can look forward to my firm updating the long-term thesis in the coming weeks with details on how Nvidia will close in on the next trillion in market cap. But in the near term, Nvidia investors face what makes or breaks a portfolio, which is the inevitable moment when Nvidia will top and sell off, how to handle these enormous gains, and whether Nvidia can surprise the market again now that it was the de facto leader in the Nasdaq’s historic rally this year.
My firm strongly believes that simply picking a stock is akin to playing a fantasy sport, whereas discussing how to manage the stock is what separates fantasy from the live game. On Nvidia, we’ve been quite clear that we were net buyers in 2022 and we have been trimming the position to take gains in 2023. Meanwhile, Nvidia has remained our largest position until very recently when we put a different stock as first place and Nvidia as second place. Although we typically reserve our trades for our research members, we’ve been open about our strategy of active portfolio management with this spectacular, winning position. Judging by filings by famous hedge fund managers, we are in good company with this strategy.
Going into this highly anticipated report, I’d like to provide my readers with more information on how we are managing our Nvidia position and what to expect from the earnings report. This is a near-term analysis whereas our long-term thesis that Nvidia will surpass Apple in valuation is still firmly intact.

Neck-Breaking Release Cycle: H200 is Hopping Ahead

Nvidia has a near-monopoly in data center GPUs, and one of its strategies to protect its moat is to upgrade GPUs quickly to where it’s hard for AMD, Intel or custom silicon to catch up. The release cycle from the H100 to H200 is neck-breaking, as a typical cycle is two years whereas the H200 will ship in volume one year following the H100. The B100 based on the Blackwell architecture is expected to hit the market at the end of calendar year 2024 with the X100 following soon after.
[Chart] Source: Nvidia Investor’s Presentation
If 2023 was the year AI accelerators made their importance known to Wall Street, then 2024 will be the year that memory and HBM3/HBM3E make their importance known, as the competition goes head-to-head on memory capacity and bandwidth per GPU rather than compute performance. This translates to mean the AI race is more focused on inference for the next generation of GPUs, as the neural network can be run entirely in memory without the need to move data back and forth with external memory. The H200 is the first GPU with HBM3e, offering 141 GB of memory and 4.8 TB/s of bandwidth. This will result in 1.6X to 1.9X better inferencing performance than the H100.
To drive the point further as to how important memory will be in the next generation of GPUs, the compute performance from the H100 to the H200 is not changing much. According to what the industry has seen so far from Nvidia’s HGX H200 systems, there will be “32 PFLOPS FP8” performance, which would be achieved through eight GPUs at the H100’s 3,958 teraflops of FP8 each (8 x 3,958 TFLOPS is roughly 31.7 petaflops, hence the ~32 PFLOPS figure). The translation is that Nvidia’s H200 upgrade is strategically focused on memory, which also translates to Nvidia feeling pressure from AMD, as the MI300X will be the first GPU to hit the market with the memory capacity and bandwidth for full utilization to increase LLM inferencing performance.
By adding HBM3 and HBM3e memory, the compute engines get a performance boost, albeit at a higher cost as HBM3 costs 5-6 times more than typical DRAM. Fewer GPUs will be needed so the cost does not translate to an equal increase in total cost of ownership. GPUs with HBM3 and HBM3E will run compute-intensive large language models with fewer GPUs than is required with the H100s due to offering roughly double the memory. The need for fewer GPUs is accomplished by running LLMs in the memory. The H200 with 141GB of memory compared to the H100’s 80GB will reduce the number of GPUs required for running popular large language models.
If you read between the lines on the H200, then Nvidia is a bit nervous about AMD’s MI300X with the H200 serving as an attempt to bridge the H100 and the B100. AMD’s design more than doubles the memory of the H100 with 192GB HBM3 memory and 5 TB/s of bandwidth, and most importantly, will be out a few months prior to the H200. The MI300X was the first to run a 40B parameter large language model on a single GPU.
AMD should feel satisfied that it forced the near-monopoly leader to hurry toward releasing the H200 with HBM3e as an answer to the MI300X. We covered this in a deep dive for our premium members in July and reiterated it again in August when we covered our favorite memory stock.

What to Expect in the Upcoming Earnings Report:

The very quarter that Nvidia began reporting double digit negative revenue growth of (-16.5%) was the best buying moment. Near the bottom a year ago, our firm wrote for Forbes that Nvidia Was Ready to Rumble with the RTX 40 Series and the H100 GPUs. Notably, Nvidia is up 200% YTD yet is up over 300% since the October low, which is why timing matters.
One year later, and Nvidia is unrecognizable from where the company was exactly one year ago. For the October quarter, Nvidia is expected to report YoY growth of 169.6%, for $16 billion, and YoY growth of 190.6% for the December quarter. According to current estimates, the December quarter is peak growth.
Pictured Above: The very quarter that Nvidia bottomed in fiscal Q1 was the quarter that the stock had its highest short interest since the Covid low, as the product thesis was little understood at the time.
A beat is very important for Nvidia given the spotlight on this company. Demand is certainly there, and what instead is in question (into the foreseeable future) is supply.
Here is what the CFO stated on the last earnings call:
“We expect supply to increase each quarter through next year,” and also: “Demand for our Data Center platform where AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.”
Where the market was a tad disappointed last quarter was when the CFO declined to elaborate on what percentage increase in supply she was expecting to see. The translation is that these are hard comps to compete with, and without a substantial increase in supply, the growth rate may have an inherent constraint given supply has already increased triple digits YoY.
The soaring demand for GPUs is evident in Nvidia’s growth rate. Per the Financial Times, Nvidia is planning to ship 1.5M to 2M GPUs next year compared to a target of 500,000 this year. Given this outsized demand, the hiccup is more likely to happen on the supply side. For this reason, we detail Taiwan Semiconductor’s chart below.
When you isolate the data center segment, what you have is an even higher growth rate of 226%, to $12.5 billion expected this quarter. So, the question remains: can supply continue to grow at these elevated percentages?
[Chart] Source: Nvidia IR
The data center segment is clearly the thesis but it doesn’t hurt that gaming has rebounded, as well, with 22% growth last quarter.
[Chart] Source: Nvidia IR
Last quarter, the gross margin improved significantly to 70.1% compared to 64.6% in Q1 and 43.5% in the same period last year. This was the best gross margin in Nvidia’s history due to higher average sales prices and some contribution from the increased mix of software.
Per the CFO: “software is a part of almost all of our products, whether they’re our Data Center products, GPU systems or any of our products within gaming and our future automotive products.” Separately, the standalone software business is worth “hundreds of millions of dollars annually.” As seen with our note on the H100 release from last year, it’s important investors are early to a tipping point. This is why we’ve been adamant that Nvidia’s true AI moment was in 2020 with the A100. If you bought the stock for the H100, you likely missed this year’s power move. The same will be true for Nvidia’s software revenue.
Regarding this quarter’s gross margin, management expects it to expand to 71.5% in the upcoming quarter. The operating income grew by an incredible 1,263% YoY to $6.8 billion, which shows the cyclical nature of semiconductors. The operating margin was 50.3% compared to 7.4% in the same period last year. The management guidance for the next quarter is 53.1%. Typically, Nvidia’s operating margin is in the 30% range.
📷 Source: Nvidia IR
This has flowed through to the bottom line, with Nvidia’s adjusted EPS up 429% YoY to $2.70, compared to 481.3% growth expected this quarter for EPS of $3.37.
📷 Source: Seeking Alpha
Nvidia has the strongest cash flow margins among mega cap stocks. The operating cash flow margin is 47% with a free cash flow margin of 44.8%. In addition to higher revenue helping the cash flow, there was also $1.25 billion in customer payments received ahead of the invoice date.
📷 Source: YCharts
The company has cash and marketable securities of $16.02 billion with debt of $9.7 billion. Last quarter, there were $3.28 billion in shares repurchased. The Board of Directors approved an additional $25 billion in stock repurchases, with $4 billion remaining authorized at the end of Q2.

Data Center Assumptions

Internal Analyst Notes from the I/O Fund on Nvidia’s Data Center Segment
The magnitude of Q4 guidance will be very important given heightened expectations. Assuming Nvidia meets its Q3 guidance of $16B +/- 2%, we’ve put together a simple scenario analysis to parameterize the different outcomes based on Nvidia’s potential Q4 guidance.
+/- indicates the anticipated positive or negative stock price performance on the next trading day under that scenario.
📷
At $40,000 per H100, that equals $28B in H100 sales alone, and when you add the A100 and other data center sales at a current run rate of $14B, the Data Center segment could report total revenue of $42B in FY24 (CY23). When you spread this out across the upcoming quarters, it looks something like this based on our estimates and Piper Sandler estimates.
📷
We believe the market will react negatively if Nvidia provides F4Q24 (Jan-Q) guidance that is in line with or lower than consensus growth of 11% Q/Q for the Jan-Q.
On the flip side, Nvidia will likely need to provide guidance of greater than 20% Q/Q growth for a significant positive reaction. This is because consensus will need to make upward revisions to its earnings estimates for the remainder of FY24 (CY23) and FY25 (CY24). This is critical to support the current valuation, with NVDA trading at ~45x NTM Non-GAAP P/E, in line with its 5-year average, as of November 21, 2023.
Our base case assumption is that Nvidia’s F4Q24 (Jan-Q) guidance will estimate Q/Q growth of at least +20%. Recall, the H100 was only introduced to the market toward the end of CY22. The Apr-Q was the very first quarter when Nvidia was beginning to see the impact of AI and demand for the H100. Piper Sandler believes Nvidia will close out the year with data center revenue of $42B and 2H23 Data Center revenue ~88% greater than 1H23 revenue.
Furthermore, we believe that if Nvidia maintains the ~35% beat it delivered in the Jul-Q for the rest of FY24 (CY23), it can potentially do $52B in Data Center revenue for FY24 (CY23).
Looking ahead to FY25, we believe Nvidia can do ~$92B in Data Center revenue based on our estimates of the percentage beats of actual Data Center revenue versus Piper Sandler's Data Center revenue estimates. 📷
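A minimal sketch of the scenario math above, using only this post's assumptions: $40,000 per H100, the $28B H100 figure (which back-solves to roughly 700,000 units, a count the post never states), the ~$14B run rate for other data center sales, and the 11%/20% Q/Q guidance thresholds. The "muted" band between those two thresholds is an interpolation, not the author's call.

```python
# Rough sketch of the data center scenario math described above. All inputs are this
# post's assumptions; the unit count is back-solved from $28B / $40K and the reaction
# bands are a simplified reading of the 11% and 20% Q/Q thresholds quoted in the post.

h100_asp = 40_000.0                               # $ per H100 (post assumption)
h100_revenue_b = 28.0                             # $B of H100 sales (post figure)
implied_units = h100_revenue_b * 1e9 / h100_asp   # ~700,000 units, not stated in the post
other_dc_b = 14.0                                 # $B run rate, A100 + other data center sales
fy24_dc_b = h100_revenue_b + other_dc_b           # ~$42B, matching the Piper Sandler estimate

print(f"Implied H100 units: {implied_units:,.0f}; FY24 data center estimate: ${fy24_dc_b:.0f}B")

# Reaction bands keyed off fiscal Q4 (Jan-Q) guidance vs. the assumed Q3 total of $16B.
q3_total_b = 16.0
for qoq in (0.08, 0.11, 0.15, 0.20, 0.25):
    q4_guide_b = q3_total_b * (1 + qoq)
    if qoq <= 0.11:
        reaction = "likely negative"   # in line with or below ~11% Q/Q consensus
    elif qoq < 0.20:
        reaction = "muted / neutral"   # interpolated band, not stated in the post
    else:
        reaction = "likely positive"   # above the ~20% Q/Q bar for a clear positive
    print(f"Q4 guide at {qoq:.0%} Q/Q -> ~${q4_guide_b:.1f}B total revenue ({reaction})")
```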

Exploring Scenarios for the Upcoming Earnings Report:

The neutral-case scenario is that Nvidia reports in line, but can’t give the Street what it wants, which is a raise on already impressive growth to help sustain the market leader’s gains this year.
If investors are being realistic, a raise is best left to next quarter when the company typically offers a fiscal year outlook. The question is not whether Nvidia is a top AI stock, and has a promising future (of course it does). The question at hand is whether Nvidia can produce a report that pushes buyers off the sidelines. These are two different matters, and are often in opposition after a large run-up in price.
The best-case scenario is that Nvidia’s been downplaying its supply (just a touch) and there will be a beat for the fiscal Q4 guide. Nvidia’s story is quite clear, which is that the data center segment is producing historic growth and the bottom line is so beautiful, you have to squint to make sure it’s real. If this happens, we could see the price go into the mid-$500s before technicals suggest that buyers will be exhausted. As a reminder, that’s only a 7% move from where the stock is trading now.
Piper Sandler has a data center estimate for fiscal year 2024 of $42 billion, which translates to $14.9 billion in fiscal Q4 data center revenue if we assume $12.5 billion this quarter. We detail below the price targets we are eyeing to take more gains should Nvidia report a beat on Q4 FY24.
The topping-out scenario is that Nvidia’s buying is exhausted, and there isn’t one fundamental analyst on earth who can help investors figure out when this will happen. That is best left to somewhat-esoteric technical analysis. As you’ll note, I am not calling this the bear case, as there is not a bear case for Nvidia. Even if the company loses China entirely due to restrictions, it’s likely that demand gets absorbed. However, there is a bear case for the semiconductor sector, to which Nvidia is exposed, and I detail this for you below.
Regarding the topping-out scenario, it’s unlikely Nvidia has a major negative surprise to the downside as semiconductors have strong visibility compared to, say, an ad-tech company. The management team should be going to great lengths to be consistent and accurate with Wall Street given the long golden roadway in front of them. Therefore, the topping-out scenario is aptly named as a 200% gain means you’ve got to impress the Street to keep those gains, and Nvidia may need to refuel for a quarter or so until we can get to a new fiscal year guide next quarter.

The Red Scare

What’s not to be forgotten in the excitement of the product road map is China, which has been the predominant risk for semiconductor stocks dating back to 2018. Last year, the government restricted Nvidia from selling its two most powerful chips to China, the A100 and H100. To circumvent these restrictions, Nvidia designed slightly less powerful chips called the A800 and H800. As reported by Reuters, the H800 has as much computing power as the H100 in certain settings. For the United States, these chips are important to block as they strengthen China’s military.
Last month, the U.S. Department of Commerce announced updated rules that focus on computing performance, removing the bandwidth parameter and focusing exclusively on how powerful a chip is, as well as on performance density, which will prevent companies from exploiting loopholes. According to an official who spoke to Reuters, “the U.S. will require companies to notify the government about semiconductors whose performance is just below the guidelines before they are shipped to China.”
Although this is a medium-term issue for Nvidia, analysts believe the demand is high enough today that the company shouldn’t have any issues absorbing the 20% to 25% loss in its data center segment from tighter export restrictions to China. Looking further out for FY2025, Keybanc sees a $5 impact to Nvidia’s $25.62 EPS estimate, and up to a $20B impact to its data center segment with current estimates at $101B for the data center in FY2025.
Eventually, demand may settle – especially as more competitors step up – and investors should pencil-in losing China revenue as a risk that is materializing now, with the revenue impact likely to be felt in FY2025.

The Topping Out Scenario

Nvidia (NVDA)

Nvidia continues to push to all-time highs, which is a scenario that was outlined in our prior free report on NVDA in September of this year. In the last analysis, the I/O Fund Portfolio Manager stated: “as long as we hold $340, Nvidia has the potential for one more swing higher into year-end/early next year.”
The primary scenario presented had the $545-$574 region as the target for the next swing higher. As of today, we are about 7% away from this target in what appears to be the final 5th wave in an uptrend off the October 2022 low.
I have laid out two scenarios that I continue to see playing out in the coming weeks-months:

📷

Semiconductor Industry May Be the Achilles Heel

Nvidia could certainly miss, yet it’s less likely given the company has outsized demand and visibility on supply. Within this context, it is easier to see the level of risk with interrelated stocks. One chart that is quite concerning, and which has ramifications for all of tech, is Taiwan Semiconductor (TSM).
📷
The bounce from the October 2022 low is clearly an overlapping and messy move higher. This is typical of B waves. What’s concerning is that the drop from the July high is a 5-wave pattern that broke through the major trendline. This would be wave 1 of the larger (C) wave.
What followed is a 3 wave retrace, so far, which would be wave 2. If the next drop is a 5 wave pattern that takes us below $89, it will be a strong warning. On the other hand, if we can see a vertical move over $104, then it will shift the odds away from the red count above, and suggest that we could see a larger swing higher into early 2024, which would be the green count.
We do not own TSM as we closed this position, yet one reason we are watching this chart is to help manage our semiconductor positions as a break below $89 is concerning enough to have a read-through to our other positions. In this case, we will likely hedge the semiconductor stocks that we have identified as those we want to own in a downturn.
A break below $89 could also be concerning given TSM is in the crosshairs with China, and the United States recently tightened export restrictions to effectively cut off AI chips. China has made it known it is pursuing domestic silicon, and if so, TSM may become stuck in a tug-of-war over which country gets 3nm, 4nm and 5nm supply.

The Broader Semiconductor Sector (PHLX)

The PHLX Semiconductor Index is a popular index of the broader semiconductor sector. It currently has the same downside setup that we are seeing in TSM. However, it is moving up into major resistance and into a cycle that suggests a reversal is likely to follow.
📷
The fan placed at the October 2022 low represents a series of important angles that the PHLX has been using in its push higher. The red 1x1 line is a true 45 degrees off the low, and is the most important angle in defining an uptrend. Note how price broke below it and is now testing this angle as resistance.
Furthermore, those symbols above price represent cycles that we see within the PHLX. Note how price tends to reverse the trend that is moving into them. So, regarding these cycles, how we trend into them is the most important thing. We are currently trending up, into the current cycle, while testing the major angle in red.
It is likely that the broader semi sector sees a reversal soon, and until the PHLX can retake the red 1x1 angle, the pressure and risk remain to the downside. (conclusion moved to top of post)
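For readers unfamiliar with the "1x1" angle referenced above: it is a Gann angle that rises one price unit per bar from the anchor low, so whether price sits above or below it depends entirely on the points-per-bar scaling chosen for the chart. A minimal sketch, with the anchor value, scale, and sample closes all made up for illustration:

```python
# Minimal sketch of a 1x1 ("45-degree") Gann angle drawn off an anchor low.
# The anchor value, points-per-bar scale, and sample closes are placeholder numbers
# for illustration only, not the I/O Fund's actual chart inputs.

def gann_1x1(anchor_price: float, points_per_bar: float, bars_since_anchor: int) -> float:
    """Value of the 1x1 angle N bars after the anchor low: one price unit per bar."""
    return anchor_price + points_per_bar * bars_since_anchor

anchor_low = 2_100.0   # hypothetical index low at the anchor (October 2022 in the text)
scale = 5.0            # hypothetical points per bar; this choice is what defines "45 degrees"

start_bar = 270                        # hypothetical bars elapsed since the anchor
closes = [3_400.0, 3_430.0, 3_445.0]   # hypothetical recent closes approaching the angle

for i, close in enumerate(closes):
    angle = gann_1x1(anchor_low, scale, start_bar + i)
    side = "above (uptrend intact)" if close > angle else "below (angle acting as resistance)"
    print(f"bar {start_bar + i}: close {close:.0f} vs 1x1 at {angle:.0f} -> {side}")
```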


submitted by norcalnatv to NVDA_Stock [link] [comments]


2023.11.16 10:43 Jhonjournalist The Copilot is Available for Microsoft 365 Enterprise Users Since November

The Copilot is Available for Microsoft 365 Enterprise Users Since November


  • This chip will be used in Microsoft data centers to power services like Microsoft Copilot and the Azure OpenAI Service.
  • Azure OpenAI platform customers will also gain access to the latest GPT-4 Turbo starting in December via public preview.
  • Microsoft has also rebranded Bing Chat to Copilot and Bing Chat Enterprise to Copilot Pro.
  • Similarly, the company is also renaming Microsoft Sales Copilot to Copilot for Sales.
At its annual Ignite 2023 event, Microsoft announced a series of new features and services for Windows PC users, including enhanced AI capabilities, where the generative AI-powered Copilot will now be available on additional services like the company's office suite, Microsoft 365.
In fact, Copilot has been available to Microsoft 365 enterprise customers since November, and the same is now being made available to the general public.

Copilot in Microsoft 365 Enterprise Users

Microsoft is announcing a new and overhauled Teams, which now comes with a new UI along with features like live translated transcripts, image blur, a collaborative notes option, and a new channel experience.
Besides that, Microsoft also launched its first AI chip, the Microsoft Azure Maia AI Accelerator, optimized for generative AI tasks, and the Microsoft Azure Cobalt CPU based on Arm architecture.
Similarly, the company will also be adding AMD MI300X accelerated virtual machines (VMs) to Azure, which are designed to handle AI workloads like model training and generative inferencing.
For partners, Microsoft will also begin to offer a new Model-as-a-Service arrangement that allows partners to easily integrate the latest generative AI models, like Llama 2 from Meta, upcoming models from Mistral, and Jais from G42, with their services.
Microsoft Copilot Studio, a low-code solution, is the latest development tool from the company; it harnesses the capabilities of generative AI and also comes with Microsoft 365, allowing users to collaborate and co-create products and services.
The Microsoft Planner experience now consolidates the capabilities of Planner, To Do, and Project into one solution with an added generative AI capability.
Learn More: https://worldmagzine.com/technology/the-copilot-is-available-for-microsoft-365-enterprise-users-since-novembe
submitted by Jhonjournalist to u/Jhonjournalist [link] [comments]


2023.10.17 18:10 ZebraEatCake $1k pc build for voice inferencing

>**What will you be doing with this PC? Be as specific as possible, and include specific games or programs you will be using.**
Voice inferencing with ai, a cloud server to store pictures/images, slight gaming (not important)
>**What is your maximum budget before rebates/shipping/taxes?**
1k usd
>**When do you plan on building/buying the PC? Note: beyond a week or two from today means any build you receive will be out of date when you want to buy.**
In a week or so
>**What, exactly, do you need included in the budget? (Tower/OS/monitor/keyboard/mouse/etc)**
Just the pc
>**Which country (and state/province) will you be purchasing the parts in? If you're in US, do you have access to a Microcenter location?**
I prefer to keep my country undisclosed, there's no Microcenter here. I'll try to search for alternatives on my own if the parts aren't available in my country
>**If reusing any parts (including monitor(s)/keyboard/mouse/etc), what parts will you be reusing? Brands and models are appreciated.**
I've heard used gpus are great deal so I'm open to that option (other parts should be new)
>**Will you be overclocking? If yes, are you interested in overclocking right away, or down the line? CPU and/or GPU?**
Yes, down the line. Most likely will be overclocking gpu
>**Are there any specific features or items you want/need in the build? (ex: SSD, large amount of storage or a RAID setup, CUDA or OpenCL support, etc)**
minimum 4TB storage space, preferably SSD, but if that's not possible then 2TB SSD + 2TB HDD. Would also like an Nvidia GPU with 12GB VRAM as a lot of people mentioned it works better in deep learning. 64GB RAM would be great but 32GB is also acceptable as I can upgrade it down the line.
>**Do you have any specific case preferences (Size like ITX/microATX/mid-tower/full-tower, styles, colors, window or not, LED lighting, etc), or a particular color theme preference for the components?**
Don't care about aesthetics or sizes
>**Do you need a copy of Windows included in the budget? If you do need one included, do you have a preference?**
Not needed
>**Extra info or particulars:**
Thanks in advance
submitted by ZebraEatCake to buildapcforme [link] [comments]


2023.09.02 15:44 wyem This week in AI - all the Major AI development in a nutshell

  1. Researchers introduce ‘Swift’, the first autonomous vision-based drone that beat human world champions in several fair head-to-head races. This marks the first time that an autonomous mobile robot has beaten human champions in a real physical sport [Details].
  2. Generative AI updates from Google Cloud Next event:
    1. General availability of Duet AI in Google Workspace .
    2. SynthID - a tool for watermarking and identifying AI images generated by Imagen (Google’s text-to-image diffusion model). It embeds a digital watermark directly into the pixels of an image, making it invisible to the human eye, but detectable for identification, without reducing the image quality.
    3. AlloyDB AI for building generative AI applications with PostgreSQL.
    4. Vertex AI’s Model Garden now includes Meta’s Llama 2 and TII’s Falcon — and pre-announcement of Anthropic’s Claude 2.
    5. Model and tuning upgrades for PaLM 2, Codey, and Imagen. 32,000-token context windows and 38 languages for PaLM 2.
    6. Style Tuning for Imagen - a new capability to help customers align their images to their brand guidelines with 10 images or less.
    7. Launch of fifth generation of its tensor processing units (TPUs) for AI training and inferencing.
  3. Meta AI released CoTracker - a fast transformer-based model that can track any point in a video.
  4. WizardLM released WizardCoder 34B based on Code Llama. WizardCoder-34B surpasses GPT-4, ChatGPT-3.5 and Claude-2 on HumanEval Benchmarks.
  5. Meta AI introduced FACET (FAirness in Computer Vision EvaluaTion) - a new comprehensive benchmark dataset for evaluating the fairness of computer vision models for protected groups. The dataset is made up of 32K images containing 50,000 people, labeled by expert human annotators.
  6. Allen Institute for AI launched Satlas - a new platform for exploring global geospatial data generated by AI from satellite imagery.
  7. A new generative AI image startup Ideogram, founded by former Google Brain researchers, has been launched with $16.5 million in seed funding. Ideogram's unique proposition lies in reliable text generation within images.
  8. a16z announced a16z Open Source AI Grant program and the first batch of grant recipients and funded projects.
  9. Runway AI announced Creative Partners Program - provides a select group of artists and creators with exclusive access to new Runway tools and models, Unlimited plans, 1 million credits, early access to new features and more.
  10. OpenAI has released a guide for teachers using ChatGPT in their classroom—including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.
  11. DINOv2, a self-supervised vision transformer model by Meta AI which was released in April this year, is now available under the Apache 2.0 license.
  12. Tesla is launching a $300 million AI computing cluster employing 10,000 Nvidia H100 GPUs.
  13. Inception, an AI-focused company based in the UAE unveiled Jais, a 13 billion parameters open-source Arabic Large Language Model (LLM).
  14. Google announced WeatherBench 2 (WB2) - a framework for evaluating and comparing various weather forecasting models.
  15. Alibaba launched two new open-source models - Qwen-VL and Qwen-VL-Chat that can respond to open-ended queries related to different images and generate picture captions.
  16. OpenAI disputes authors’ claims that every ChatGPT response is a derivative work.
  17. DoorDash launched AI-powered voice ordering technology for restaurants.
  18. OpenAI launched ChatGPT Enterprise. It offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities and customization options.
  19. OpenAI is reportedly earning $80 million a month and its sales could be edging high enough to plug its $540 million loss from last year.
If you like this news format, you might find my newsletter, AI Brews, helpful - it's free to join, sent only once a week with bite-sized news, learning resources and selected tools. I didn't add links to news sources here because of auto-mod, but they are included in the newsletter. Thanks
submitted by wyem to ChatGPTCoding [link] [comments]


2023.09.01 19:02 jaketocake AI — weekly megathread!

News provided by aibrews.com
  1. Researchers introduce ‘Swift’, the first autonomous vision-based drone that beat human world champions in several fair head-to-head races. This marks the first time that an autonomous mobile robot has beaten human champions in a real physical sport [Details].
  2. Generative AI updates from Google Cloud Next event:
    1. General availability of Duet AI in Google Workspace [Details].
    2. SynthID - a tool for watermarking and identifying AI images generated by Imagen (Google’s text-to-image diffusion model). It embeds a digital watermark directly into the pixels of an image, making it invisible to the human eye, but detectable for identification, without reducing the image quality [Details].
    3. AlloyDB AI for building generative AI applications with PostgreSQL [Details].
    4. Vertex AI’s Model Garden now includes Meta’s Llama 2 and TII’s Falcon — and pre-announcement of Anthropic’s Claude 2 [Details].
    5. Model and tuning upgrades for PaLM 2, Codey, and Imagen. 32,000-token context windows and 38 languages for PaLM 2 [Details].
    6. Style Tuning for Imagen - a new capability to help customers align their images to their brand guidelines with 10 images or less [Details].
    7. Launch of fifth generation of its tensor processing units (TPUs) for AI training and inferencing [Details].
  3. Meta AI released CoTracker - a fast transformer-based model that can track any point in a video [Hugging face GitHub].
  4. WizardLM released WizardCoder 34B based on Code Llama. WizardCoder-34B surpasses GPT-4, ChatGPT-3.5 and Claude-2 on HumanEval Benchmarks [Details].
  5. Meta AI introduced FACET (FAirness in Computer Vision EvaluaTion) - a new comprehensive benchmark dataset for evaluating the fairness of computer vision models for protected groups. The dataset is made up of 32K images containing 50,000 people, labeled by expert human annotators [Details].
  6. Allen Institute for AI launched Satlas - a new platform for exploring global geospatial data generated by AI from satellite imagery [Details].
  7. A new generative AI image startup Ideogram, founded by former Google Brain researchers, has been launched with $16.5 million in seed funding. Ideogram's unique proposition lies in reliable text generation within images [Details].
  8. a16z announced a16z Open Source AI Grant program and the first batch of grant recipients and funded projects [Details].
  9. Runway AI announced Creative Partners Program - provides a select group of artists and creators with exclusive access to new Runway tools and models, Unlimited plans, 1 million credits, early access to new features and more [Details].
  10. OpenAI has released a guide for teachers using ChatGPT in their classroom—including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias [Details].
  11. DINOv2, a self-supervised vision transformer model by Meta AI which was released in April this year, is now available under the Apache 2.0 license [Details Demo].
  12. Tesla is launching a $300 million AI computing cluster employing 10,000 Nvidia H100 GPUs [Details].
  13. Inception, an AI-focused company based in the UAE unveiled Jais, a 13 billion parameters open-source Arabic Large Language Model (LLM) [Details].
  14. Google announced WeatherBench 2 (WB2) - a framework for evaluating and comparing various weather forecasting models [Details].
  15. Alibaba launched two new open-source models - Qwen-VL and Qwen-VL-Chat that can respond to open-ended queries related to different images and generate picture captions [Details].
  16. OpenAI disputes authors’ claims that every ChatGPT response is a derivative work [Details].
  17. DoorDash launched AI-powered voice ordering technology for restaurants [Details].
  18. OpenAI launched ChatGPT Enterprise. It offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities and customization options [Details].
  19. OpenAI is reportedly earning $80 million a month and its sales could be edging high enough to plug its $540 million loss from last year [Details].

🔦 Weekly Spotlight

  1. How 3 healthcare organizations are using generative AI [Link].
  2. The A.I. Revolution Is Coming. But Not as Fast as Some People Think [Link].
  3. LIDA by Microsoft: Automatic Generation of Visualizations and Infographics using Large Language Models [Link].
  4. Curated collection of AI dev tools from YC companies, aiming to serve as a reliable starting point for LLM/ML developers [Link].
  5. Beating GPT-4 on HumanEval with a Fine-Tuned CodeLlama-34B [Link].
—-------
Welcome to the artificial weekly megathread. This is where you can discuss Artificial Intelligence - talk about new models, recent news, ask questions, make predictions, and chat other related topics.
Click here for discussion starters for this thread or for a separate post.
Self-promo is allowed in these weekly discussions. If you want to make a separate post, please read and go by the rules or you will be banned.
Previous Megathreads & Subreddit revamp and going forward
submitted by jaketocake to artificial [link] [comments]


2023.07.13 03:32 Life_Bee_6354 Help with targeting a goal from the PLS-5

I’m a CF starting out, and we use the PLS-5 quite frequently - anyone have any idea how to target this goal from the PLS that goes “Anna got hurt, show me what picture shows how you think she got hurt” as it relates to the goal “X will make inferences by pointing to picture stimuli”? I tried to look up inferencing pics on TpT but nothing is similar to how it’s shown on the PLS. Any advice helps
submitted by Life_Bee_6354 to slp [link] [comments]


2022.08.07 20:57 Solder_Man Pockit: Summer update 😎

Hey everyone!
Been fully focused on the Pockit project’s developments for the last couple of months, as well as manufacturer meetings in recent weeks. I wanted to take this moment to share a bunch of good news 🔔 especially as it's been a while.
(Sidenote: A few people mentioned that they were unable to make posts due to an automatic spam-filter. I should have fixed this earlier, but it should be good to go now, as far as I can tell from the subreddit settings, so post away!)
So, below is a breakdown of all that is up with Pockit. I’ll expand some of these items into a series of posts+pictures over the next couple of weeks.

Production planning in advance

Pockit into everyone’s hands

Hardware developments

Firmware + Apps

nCode (natural language programming for Pockit)

Enclosure generator:

Working with others:

Pockit real-life base:

So, that's the summary of the updates (more details in the coming days).
As always, looking forward to questions and comments from you guys! 🙌
submitted by Solder_Man to Pockit [link] [comments]


2022.06.13 23:49 visionexpert20 7 Machine Vision Image Acquisition Challenges

Table Of Contents

1. Imaging Complex Geometries
2. Challenging Surfaces
3. A Large Number of Variants
4. Harsh Environments
5. Complex Material Handling
6. Working Distance Constraints
7. High-Resolution Necessities

Background of Industrial Image Acquisition Challenges:

According to Fanuc, Image Acquisition contributes to 85% of Machine Vision solution success. The workflow of a typical AI based Machine Vision system includes four steps. First, the image acquisition is done through a Plug n Play camera setup that optimizes the acquisition process. Second, data preparation is done by configuring and collecting training images. Third, specialized deep learning architectures are used to train and optimize models using cloud infrastructure. Finally, the optimized model is deployed for inferencing in an edge device and monitored for accuracy. The first step of the workflow, image acquisition, is a complex process and has a lot of challenges.
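As a rough illustration of that four-step workflow, here is a minimal, self-contained Python sketch; every step is a stub operating on dummy data, since the actual camera SDK, training stack, and edge runtime are vendor-specific and not described in the article:

```python
# Minimal sketch of the four-step machine vision workflow described above.
# Every step is a stub operating on dummy data: a real system would plug in a camera SDK,
# a labeling/training pipeline, and an edge inference runtime at the marked points.

import random
from typing import List, Tuple

Image = List[List[float]]

def acquire_image(frame_id: int) -> Image:
    """Step 1 - image acquisition: stand-in for a plug-and-play camera capture."""
    return [[random.random() for _ in range(8)] for _ in range(8)]  # tiny fake 8x8 frame

def prepare_dataset(n_frames: int) -> List[Tuple[Image, int]]:
    """Step 2 - data preparation: collect frames and pair them with labels (0 = ok, 1 = defect)."""
    return [(acquire_image(i), random.randint(0, 1)) for i in range(n_frames)]

def train_model(dataset: List[Tuple[Image, int]]) -> float:
    """Step 3 - training: toy 'model' that learns a brightness threshold separating the classes."""
    def mean(img: Image) -> float:
        return sum(sum(row) for row in img) / 64
    defect = [mean(img) for img, label in dataset if label == 1]
    ok = [mean(img) for img, label in dataset if label == 0]
    if not defect or not ok:
        return 0.5
    return (sum(defect) / len(defect) + sum(ok) / len(ok)) / 2

def infer_on_edge(threshold: float, img: Image) -> str:
    """Step 4 - deployment: run the 'model' on a new frame and report a verdict for monitoring."""
    mean_val = sum(sum(row) for row in img) / 64
    return "defect" if mean_val > threshold else "ok"

dataset = prepare_dataset(n_frames=20)
threshold = train_model(dataset)
print("verdict on a new frame:", infer_on_edge(threshold, acquire_image(999)))
```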
However, a regular industrial setting comes with many challenges that make it increasingly hard to acquire perfect images for conducting visual process or quality inspections.
Also read: Image Acquisition Components
Let’s look at some real world scenarios:

Imaging of complex geometries –

The manufacturing industry builds a lot of parts with complex geometries. These parts have multiple faces, complex shapes, and small detailing that can be seen from some angles. Such objects consist of design structures including undercuts, hollow space, or intricate internal details. To capture the images of such parts for the training of the machine vision system, the parts either need to be rotated and moved or multiple cameras need to be deployed to capture all the intricacies of the object so that defects that come in the production line can be detected with minimal errors.

Challenging surfaces –

Surfaces that are either transparent or reflective pose a challenge for image acquisition systems. Even in uniform lighting conditions, these surfaces cannot be captured adequately by the vision system for proper training and inspection. There have to be specialized lighting techniques so that useful training images can be taken from such surfaces. These can include dome light, diffused lighting, or any such lighting which makes sure that illumination is spread evenly across the entire surface. The use of polarised light can also reduce lighting hotspots that make it difficult to capture images.

A large number of variants –

In any industrial business, there are many variants of parts and materials being manufactured at a large scale. All such SKUs, or stock-keeping units, must be thoroughly inspected for defects. They differ in size, shape, color, geometry, and production volume, and the AI needs to comprehensively inspect them for errors. A robust cataloguing system can help in designing a good training dataset that has an even distribution of all such SKUs, in terms of the errors that need to be addressed in them, in the ratio of their production volumes. This avoids any bias in the training data due to unbalanced and skewed numbers.
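One simple way to implement that catalogue-driven balance is to allocate a fixed training-image budget to each SKU in proportion to its production volume, with a floor so rare variants still get enough examples. A small sketch, with made-up SKU names and volumes purely for illustration:

```python
# Allocate a fixed training-image budget across SKUs in proportion to production volume,
# with a minimum floor so low-volume variants still get enough examples.
# SKU names and volumes below are illustrative placeholders, not real catalogue data.

def image_quota(volumes: dict, total_images: int, floor: int = 50) -> dict:
    """Proportional allocation of labeled images per SKU, clamped to a minimum floor."""
    total_volume = sum(volumes.values())
    return {
        sku: max(floor, round(total_images * vol / total_volume))
        for sku, vol in volumes.items()
    }

production_volume = {
    "bracket_small": 120_000,
    "bracket_large": 60_000,
    "housing_black": 15_000,
    "housing_clear": 5_000,
}

print(image_quota(production_volume, total_images=4_000))
# {'bracket_small': 2400, 'bracket_large': 1200, 'housing_black': 300, 'housing_clear': 100}
```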

Harsh environments –

The manufacturing industry has a lot of operations that have to be performed in harsh environments with dust, smoke, or heat. Some environments are also hazardous for people working in them. In harsh environments, image acquisition systems need to work and deliver results. Enclosures with correct IP ratings can protect the hardware. Other kinds of enclosures are used for explosive environments. Harsh environments can affect the hardware of the system and hazards can damage the product altogether. Continuous and correct use of protective layers over acquisition equipment safeguards the system and allows it to function correctly.

Complex Material Handling

Harsh environments also affect the materials that are being inspected by the machine vision system. During inspection processes, materials should not be compromised for any reason. Sometimes, customized material handling systems need to be developed to cater to these materials. Better material handling translates into better image acquisition, and in turn higher accuracy from the machine vision system due to clarity of images. For instance, fasteners that come in bulk need to be provided feeding systems so that cameras can localize each object and take a picture at the required angle and orientation.

Working Distance Constraints –

Inspections of existing materials might need to be done in conjunction with other tasks that involve human workers or machine vision systems might need manual interventions at certain points of their operations. The positioning of cameras and the movement of humans must not cause any blockage for one another. There can be a minimum working distance constraint in larger factories or maximum working distance constraints in smaller, tighter spaces. Cameras need to be at a proper distance to the assembly line to inspect the parts. The usage of specialized optics can account for such working distance constraints.

High-Resolution Necessities –

While dealing with parts that have intricate details and designs and sub-parts that are very small and packed into the parent part, there is a necessity of using high-resolution image acquisition systems. To capture the tiny features, multiple cameras at different working distances can be used or a single high-resolution camera must be chosen. High-resolution imagery has challenges related to camera systems as traditional cameras like area scan cameras cannot support such image capture due to the smaller number of pixels. Line scan imaging is a new technology that can create high-resolution images for better visibility of features by the acquisition system.

Summary –

Thus, the challenges of image acquisition are numerous but they can be managed through good technology and intelligent design. Qualitas overcomes many such image acquisition challenges along with other hurdles that come in the workflow. Having Qualitas as a partner for machine vision technology can extend a company’s network of technologies, increase revenue and margins through profitable integration of machine vision into the inspection process of various parts getting manufactured, and gain an edge over the competitors with the use of advanced and up-to-date technology to automate all possible processes in the domain of manufacturing.

AI in Machine Vision Machine Vision Software Machine Vision Case studies
submitted by visionexpert20 to Machine_vision [link] [comments]


2022.04.22 21:33 Beeker93 Which english dialect slunds the most unintelligent?

*Edit: which accent SOUNDS the dumbest. Can't edit titles.
I realize there is variety among these listed also. Black vernacular in Georgia would sound different than in New York. Kentucky sounds different from Texan. I actually have a bit of a rural accent from Ontario (picture the show Letterkenny to some degree) and I hate it. I think I sound dumb. I realize you can't actually make inferences based on accent.
View Poll
submitted by Beeker93 to polls [link] [comments]


2021.01.04 02:31 Yoel_Dei_Umbra On open house day at our new college, my friend Diana fell through the floor and saw something indescribable. I don’t think I’m safe.

Part 1
Diana is a girl known for her medical enigmas that require a constant dosage of prescription drugs, and for a long time, I chopped our experience at open house as due to a mental breakdown.
“Hi! Welcome to ********* University!”
There are moments in your life where your heart beats around in your chest like a parakeet flapping around in a cage. The sensation tends to spread in your stomach. A suggestion that if someone were to look at you through thermal lens, they would see splotches of yellow and green all over.
I totally stole that metaphor from some poet girl I used to date in high school, but that’s not the point. In spring of 2018, we stepped off the tour bus and Counselor Burlesque greeted us at this ‘too good to be true’ academic institution. The background morning sunlight was at her side as if it were of utmost intention to be as picturesque as possible. I can speak for everyone when I say that there was a contagious warmth.
Well, everyone except Brittany Flowers - or who I liked to call - “The Fireball”. “Okay, so where’s the guy with the striped rainbow shoes and the makeup?”
Counselor Burlesque – who's fuckin sexy as heck, by the way - furled her brow at Brittany’s comment like someone had told a funny joke and she tried to maintain professionalism. "I don't quite understand."
I guess Brittany continued trying to decipher the monkey's paw. "The fucking clown. He's gonna come up at any second, throw pie at our faces and then say we're all on some TV prank show. I'm with whatever. Any credit on my acting resume is fine with me."
We stood in midst of a wide campus. Whiff of mowed grass hit my nose; it gave me a sense of… nostalgia? Kind of nature scent that pressures you in soft places. Around her, around all of us, were houses. Most of which were in groups. Each carried the same color palette of orange brick and white outlined windows. At first, I thought the building in front of us was a church because the windows were almost too damn large. From the tiled concrete floor to the triangular rooftops that gave off impressions of castles, I inferenced the red carpet was on royal territory.
So organized, so far spread out, like a really ostentatious neighborhood. I remember nudging my bud Tommy and going: “This is like a suburb or some shit.”
Tommy goes: “Like Disney land, for nerds.”
“Come on, baby, let’s get this show on the road,” My future roommate, Don, shouted as he got out the vehicle with the other alumni candidates.
When ready, we all followed our Counselor from behind, marveled at the fountain in the middle of the grass plain, and she took us through a straight-forward path that led us to the building ahead. It was called "Residence Hall". There were a bunch of venues of different host professors. Majority of which were intended to represent different majors. So, for example: if you were a Computer Science major, your first stop of this open-house tour would be to the electrical science houses. I think, at least. Don’t know because I didn’t have an idea on what my major was going to be between computer science and creative writing. Those who didn’t have a major continued with Counselor Burlesque. We spent the next few hours touring the most relevant aspects of the college. The main halls, classrooms, cafeterias, recreation centers – which basically means gyms, sports and other leisure activities. Eventually, we all got split up for different things. Some went to go check out the dorm tenements. Others – like myself - went to go explore the sceneries and nature aspects, like that really awesome lake. I got to snap as many pictures as I wanted with someone else who, I guess, was just as much of a camera-freak.
When we got back inside the main Residence building, everyone was directed to an auditorium, but not everyone would follow suit in immediate haste. Most took their sweet time still roaming the venue area, and I did the same. I mean, I felt like this was candy-land. After such an exciting day, the thought of sitting down to listen to some boring ass speech didn’t carry any appeal.
In my roam, I notice two security men standing at an elevator, which was the coolest thing ever because it was my first time seeing an elevator in a school before. Behind them, I spot these two guys walking down the hallway. They were huddled together as if they were semi-intelligent rats planning to steal cheese. I’ve seen the looks on their delinquent faces a million times. Know exactly what it is. They’re about to do something idiotic.
The two bozos stick their hand out of their hoodies and start smacking down these little pebbles that create a spark and "pop" when they hit the floor. I thought it was bonkers someone could try that in a place like this on opening day.
It confirmed my suspicion: this school must have no real standards on who they accepted. I’m amazed that myself was able to make it. The two men were already in the car, but in response to the pop, probably fearing the worse, they hop out with a: “Hey, hey!”
The nitwits aren’t cunning enough to haste together a “I ain’t do nuffin’” act in time. I scoff and turn around to go head back toward the auditorium.
“Psst, yo, bro,” I heard Don behind me seconds later.
He had spotted me from the crowd and had called me over to the same elevator. He held it and scurried me with a gesture. I scamper to get in with him. Sorry, to get in with them – because there was another person who hopped in. Me, Don, and a girl named Diana.
“Yeah, alright, let’s get it,” Don announced while pressing the “C” button.
For that, he had to lift up a slip that said: “Floor currently not in use, for staff only.”
It didn’t glow, the door wouldn’t shut. Diana noticed that there was a key slot with an actual key left in it. She twisted it, pressed “C” herself, and down we went.
Good thinking. Amazing, in their haste they must have forgotten to take the key out.
“Wait, what?” I spoke. “We can’t just jack this. What about the auditorium?”
Don laughed. “It’ll just be a second. Just doing some venturin’. Y’know, some extended touring.”
“Yeah, don’t be such a wuss-ass,” Diana said. “Look at your dumb face, you know you wanna.”
Out of everyone, Don and I were probably eating this up the most. The grins on our faces told it all. This college, institution, whatever, was in our backyards. You know what I mean? No, you probably don’t. Look, what I’m trying to say, is that we lived in an urban area where things aren’t exactly easy to get by. Spent a lot of time getting around every inch of it in my adolescence. Yet, I’ve never even heard of this place and it’s only a couple hours off the outskirts. Was never on the map, our high-school teachers never brought it up – and you know those lunatics never shut the fuck up about college around Junior & Senior years. Though, I could have sworn – maybe once or twice in my lifetime – my parents drove through this area in ************, but I don’t ever remember seeing this damn behemoth of a place. I have a vague memory of driving by a giant metal-dome structure and was always curious as to what could be inside. Though, I could be thinking about the wrong place. While I’m on this tangent, I wanna preface that when we arrived at the C-floor, something happened that was just…well, in a phrase, downright bizarre. Of course, I hadn’t thought too much about it at the time. However, it’s easier to pick it out in memory as something that fits right-the-hell in with all the other seriously confusing crap I’ve had to deal with. It’s especially easy to remember since it’s the only real strange thing that happened that entire first year. An instance of abnormality, if you will.
We step out of the elevator, and it’s quiet. Don and I get giddy at this like dumb children. A juxtaposition to the barely audible mess upstairs. Most of all, no one knew we were there, and it didn’t seem like anyone else was there either. It was like when the lights turned off in a grade-school classroom: you just can’t help yourself.
In a moment of brilliance, Diana pressed the “2” on the elevator panel so it would go back up to the floor we came from, in the hopes that it would return before the security guys were done with the issue.
To our left, there’s a room. It had see-through glass with the title “ID-Check-in,” which would be an office for identification cards. I say “would”, because from looking outside-in, it didn’t really seem like the insides were fully installed. The desks were clear, the paint on the walls was in patches, and there was some loose…stuff…hanging from the ceiling, like loose yellow cotton or something. Diana called it gross, but its condition made sense to me. This college was new, after all. To our right was a room with no name. While Diana and Don went to check it out, I became distracted by a hum underneath my feet. Surprised the other two hadn’t said anything about it. There was a vibration. An engine, or something at work underneath me. While I’m staring at my feet, I’m also subconsciously picking at some wrinkled piece of wall at my side, because I can’t help myself but do shit like that. My nails scratched through and made contact with some cold surface. It looked like some metal interior through the paint. I look back at the office, at the parts of the wall that didn’t have paint over them, and then I realize that it isn’t paint at all, but rather an incomplete exterior.
Before I can investigate further, the vibration beneath my feet escalated into an earthquake. It didn’t last longer than a second. I stumble, the lights above my head flicker before they shut off altogether. I’m suspended in pitch darkness for no longer than a millisecond before a screech pierces the silent underground. It was Diana. It came from whatever room she and Don had gone into. Then came a loud crash that made me jump out of my shoes.
At that moment, in pitch blackness, instant regret and instant karma were locked arm in arm, prancing around like ghouls feasting off the idea of me currently pissing my pants. I turn on my phone’s flashlight and I Shaggy-it. With jittering legs but some unreasonable source of courage, I yell for them both but receive no answer, just continued cries from Diana. Through the room, I shine around and get a grasp on what type of area I’m in. It’s some sort of electrical, power junk room. Shelves upon shelves of laptops, cords scribbled across the floor, dangling from the ceiling. Advancing towards them was like trudging through a forest of wire. This place was a hazard on steroids. It definitely wasn’t the type of room you stumble across and venture into. So many twists and turns, so dark, so little space, forcing humid air into my lungs, and it doesn’t help that I feel like I’m playing that old Ju-On Grudge game for the Wii, if anyone remembers that.
I get fed up and call out for Don to say anything. Fucking anything at all.
“Hear me? I’m right here. Dude…she’s…hurry.” As Don said that, it seemed like he was also trying to “shh” Diana while managing his own tone.
I get a better sense of direction from him and make it over.
I see Don, but I don’t see Diana. Next to him, there’s a big-ass breach, a hole. My flashlight spots it all, but there’s hardly even a need for it, because from the hole emanated a turquoise hue that gave some light to our surroundings. Due to the rumble, Diana had toppled onto some precarious patch of floor and collapsed through – sheesh.
“Damn, yo, can you calm the hell down, now? You’re gonna get us into some bull,” Don said. He was on his knees, peering down through the breach in the floor.
I did the same.
Diana stood up, hands on her hips in a pouty position, covered in debris. "Then get me the fuck out of here."
I can tell from the distance of the fall that she must have been at least a little hurt. For the most part, she seemed fine, just obviously stuck. In brighter ambiance, I probably would have laughed.
There’s still a problem. The distance is way too large for either of us to reach and grab her. ‘What now?’
We brainstorm ideas while Diana kept insisting that we just go and get some help. Me and Don look at each other.
Neither of us wants to do that shit. That’s college suicide. Not a great first impression. We’d rather not risk all three of our parents going to town on us.
Even so, the longer we stayed down there, the more the foreboding of that dark, claustrophobic space sent chills up my spine. We cave. Don was to stay while I went and grabbed help, but as I turned to leave, my flashlight caught a miracle.
By some oracle, by some gift from the gods, I spotted a ladder propped right before a vent near the ceiling. Perfect.
Don helped me get it.
Diana gets quiet. Like, real quiet. ‘She must have calmed down,’ was my thought. With the retrieved ladder, we get back to the breach and look down. Diana isn’t moving. Facing the opposite direction from before, her thighs are pressed together like someone’s desperate attempt not to pee themselves.
She’s looking at something. I make the connection in my head: the direction she’s looking in is also the source of the turquoise. Then her behavior gets odd. Her head tilted to the right, then to the left. She puts her right hand up, then drops it. All this while staring in the same direction. This behavior repeated.
Don lowers the ladder through the hole and places it next to her. No reaction. We’re wasting time.
I call out. “Dia-“
Another scream. Like, high-pitched: she’s the woman in the shower who just had her curtain yanked open by a psycho killer in some horror flick. It’s so loud, I’m convinced that whatever she’s looking at is coming to get her. Me and Don lose our shit; we panic with her. I’m not ashamed to admit I felt like running away. We both yell for Diana to get on the ladder, but she keeps yelling that she’s seeing some sort of figure.
“Oh, my goodness, oh my fu-, oh god, help, please get me out, get me out, what is this thing?”
That’s my best translation of her gibberish through sobs.
I never heard Don beg before. “Diana, Diana, please. The ladder is right here. Just take it, please, it’s okay.”
“Diana, just close your eyes, okay? Close your eyes, listen to the sound of our voices. Climb up the ladder,” I said.
Diana is able to stop screaming, then turns to us at a snail’s pace, like a wet kitten. Trying to keep her eyes closed but unable to resist tiny peeks at whatever has got her shook. After some time, she’s able to climb all the way up. In my arms, she’s stiffened up like she witnessed the eye of Medusa. We sat there, and I try to make out Diana’s rapid, jumbled attempts at conveying what she witnessed…and from what I could make out, I couldn’t believe it.
You might be thinking: “well, now make a break for it, asshats!”
I would’ve loved to, but it’s not like we can drag or even carry her out of this deathtrap.
We then heard something that gave us enough of a jolt to knock even Diana out of her shock. The sound of keys from outside the room, footsteps, and the voices of two men.
“We know someone’s in there. Come out. Whoever you are, believe me, you’re gonna be in a world of shit within the next two minutes.”
My goodness. I knew we had technically trespassed, but I would’ve never expected such venom in his cadence. Whatever our consequences were for this, it felt like it wasn’t merely gonna be a stern talking-to or a simple demand to never come back at all.
I look at Don’s face in the little light. It’s scrunched up, like a lion whose competitor in the food chain has entered his territory.
I understood it. Heck, I felt it. That man who approached from outside sounded no different from certain cops in our hometown with their superiority complexes. Always looking to escalate a situation before it was called for.
When the two men busted in, Don hid near the hole. I hid with Diana further away, behind trays. We covered our mouths. Their flashlights zig-zagged. We were breakout prisoners avoiding searchlights. One of their lines of sight managed to pass right above my eyes. The two men were audibly puzzled by the breach in the floor. Stupidly, they both peered in together.
Although I hated Don at that moment for what he did next, I’m actually grateful for it nowadays. I have to admit, he has titanium balls.
Don snuck behind and pushed them both in the hole’s direction. He spared them the amount of force required to send them crashing through. Instead, they were able to catch themselves. It bought us enough time to start making our escape.
I wanted to yell something like: “you idiot, why would you do something like that?”
There was no time to complain, however. Don’s logic was that we were already deep in shit, so he didn’t mind gambling an extra layer of trouble for himself if it meant getting away scot-free. The lights were off, so they wouldn’t see our faces clearly, and there probably weren’t any cameras installed yet, as I didn’t remember seeing any. We all could’ve gotten away free as birds.
Could’ve.
So, we’re booking it through this hazard zone. Shelves, trays are knocked over, tech is collapsing, I get scraped on my legs and arms. I held Diana’s hands, but her foot must have gotten caught on some bundle of wire she stepped on. My first instinct is to help her get up, but they’re on us. There’s no way I can get her up. We’re screwed. I feel Don’s grip on my shoulder, pulling me in his direction against my own will.
I didn’t want to leave Diana. Understand this when I say it’s beyond difficult to shrug off fight-or-flight responses. My legs moved in accordance with survival. I hear Diana get caught and I feel like the biggest asshole in the world; she’ll never forgive us for that.
We make it. The hallway is still dark. We need a way out. I shine my light at a staircase, but I figured it would be locked since we weren’t supposed to be down there anyway. Bolting through a straight corridor as one man chased us, we could either make a right or a left in front of the office from earlier. I chose left, and lucky us, there’s a bright sign that said “Exit.” We pray, push through the panic bar, and it opens. Fresh, outside air. We’re then in a garage area. There isn’t a fraction of rest in our haste. After an upward slope out of the garage, we’re greeted by clear streets. We cross one and hide behind trees. Strangely, the security guard never appeared. The next thirty minutes are spent with me panting, legs feeling like freshly boiled noodles. Thinking about Diana made me want to slap myself.
Don had this asshole grin on his face. “Just like the old days, huh?”
“Shut up.”
Through the front entrance of the Residence Hall, a crowd emerged. Open house had come to a close. It’s disjointed at first, so we’re able to inch ourselves in as part of everything in seamless fashion. The buses pulled up, and before too long, everyone was ushered on. There was a head count, and of course, there was one person missing. My stomach pulled a Titanic.
“She’ll be here in just a second,” Counselor Burlesque said with a wink that sparked glitter.
Five minutes later, there she was. Diana, with two security guards behind her, had arrived. On board, she stood by the bus driver, peering over the available seats. I happened to be the only one with an empty seat next to me. Probably the most bizarre thing that day: she sat next to me with a smile. In no time at all, everyone was on their way home.
Don’t get me twisted: before we departed, I did wanna flag down the Counselor and ask for information. Not to patronize Diana, I could have asked her, but if she were in big trouble, I also wanted to confess that I’d had just as much to do with it. It wouldn’t have been fair for only her to be punished. Of course, I wouldn’t have ratted Don out.
I didn’t do that, and I didn’t have to. Because it was actually Don who ended up asking before I could say a word. It also turned out that he did it for the exact same reason.
“Oh, everything's fine,” I overheard Burlesque say to Don. “She had stumbled upon some of our art gallery works. Saw something she didn’t like and had a bad time. We sedated her accordingly.”
Diana didn’t have anything to say about that comment and didn’t seem to care. It truly had been the case, then. She must’ve fallen into some art room and tripped out. That’s a logical explanation.
Though, you'd think there would be concern about someone pushing two members of security.
“Oh, baller,” He said. “Oh, well, she’s not in trouble, is she? Like, she can still go to this school?”
“Well of course she can, silly. If anything, we welcome daring students who are willing to take risk and wander. Nothing ever wrong with a little, Y’know, venturin’… some extended touring. We have a no-wuss policy zone, if you will.”
When she said that, she turned, sought out where I was seated, and gave me a wink before turning to head out. Weird, but whatever.
I guess that’s the end of this tale.
So, I’ll answer some of your questions.
For one, Diana wasn’t putting up a front. On the bus, she was genuinely happy. Later on, I asked about what she saw down there that triggered her panic. She said it was some weird sculptures and paintings.
In her words: “Weird stuff, but nothing horrifying.”
Another thing I want to note is that she wouldn’t shut up about how ‘great’ the University was. Like, on and on, until we all separated.
For two, you might be thinking: “why did you continue going back to that school for freshman year onwards?”
To that I say: why not? Sure, it was a weird day for an open house, and yeah, it’s kind of ridiculous that such a weak floor design was even possible. We could have probably put up a case to sue.
Then again, we were in an area that wasn’t supposed to be available for entry. Look, my point is that from our perspective, and factoring in Diana’s anecdote, there’s no reason to believe that we hadn’t stirred up most of the trouble we got into that day. An accident happened, that was about it.
Not to mention, the benefits here are far too good to leave over one little incident.
Nowadays, I spend a lot of nights reflecting back on open house day. Can’t help myself but to force dots together.
I try not to abide by things that can’t be proven. If something can be explained by hallucination, then so be it.
I try to cater towards normality.
Despite this, in the jaws of the unexplainable events of my sophomore year at that college – that you may or may not know about – I find myself doing the opposite. I find myself unable to sleep, because there are certain words that rattle in my head that stem from that day. I often lie with the lights on past midnight, feeding myself the suggestion that those words may be relevant now, more than I ever imagined.
Diana’s explanation, of course, differed from when I pulled her up from the hole, and from when I asked her on the bus.
Through the frantic staccato of her breathing, I managed to make out what she was trying to say.
Not verbatim. From what I could make out, she was describing some sort of humanoid, white figure that she witnessed. It was inside a bubbling tank. It was alive, active, with: “bulging, yellow eyes.”
I did make out one complete statement. It’s the reason I’m writing this story.
It’s the phrase she repeated the most.
“It’s copying me. It copied my every movement.”
Part 3
submitted by Yoel_Dei_Umbra to nosleep [link] [comments]


2020.10.09 18:50 jibaikia Doubling my stake, if news on Xilinx is true.

What is an FPGA?
For those who don't know what an FPGA is, here is a very simple, watered-down explanation. Computers are ultimately built out of logic gates, such as AND and OR gates. Instead of a generic CPU (like those from AMD, Intel, and ARM), an FPGA lets you wire up custom logic that is programmed to perform only a certain dedicated task. That's why it's called a Field Programmable Gate Array.
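To make the "programmable gate" idea concrete, here's a toy Python sketch (not real FPGA tooling, just an illustration of the concept): a 2-input lookup table, the kind of cell FPGAs are built from, where loading different configuration bits turns the very same cell into an AND gate or an OR gate.

    # Toy illustration of the FPGA idea: a 2-input lookup table (LUT).
    # Real FPGAs are configured with vendor toolchains (e.g. Vivado for Xilinx),
    # not Python; this only shows how "reprogramming" a gate works in principle.
    class LUT2:
        def __init__(self, truth_table):
            # truth_table maps each (a, b) input pair to a 0/1 output
            self.truth_table = dict(truth_table)
        def __call__(self, a, b):
            return self.truth_table[(a, b)]

    # "Program" one cell as an AND gate...
    and_gate = LUT2({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
    # ...and an identical cell as an OR gate.
    or_gate = LUT2({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})
    print(and_gate(1, 1), or_gate(1, 0))  # prints: 1 1

The point is that the hardware cell itself doesn't change; only the configuration loaded into it does, which is what makes the array "field programmable."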
Why use an FPGA over a CPU?
Performance and energy consumption. As you can see here:
https://imgur.com/a/ErWxKxB
Also, an example of an ASIC is the Google TPU (Tensor Processing Unit), which is designed to run TensorFlow optimally. An FPGA, however, lets you optimise it for TensorFlow today and reprogram it to accelerate something else the next day. For example, it might only be optimised for TensorFlow version 2 today, but you can update it later to run optimally for TensorFlow version 5. An FPGA is like a reprogrammable dedicated accelerator for applications.
What are some FPGA applications?
Satellite communications: FPGAs consume very little power and can be remotely updated to run a more optimised program later on.
Communication towers: the FPGA can be remotely updated and handles real-time applications.
Industrial controllers: real-time control applications, motor drives, etc.
How does it work with a CPU?
Back to the TensorFlow FPGA example again. The CPU is responsible for everything other than running TensorFlow. The CPU can handle the connection to a camera, capture a photo, apply filters, etc., then finally pass the data to the FPGA and get the result back from it. In other words, the CPU and FPGA can work together in applications that require real-time, high-performance processing. Of course, you could theoretically program an FPGA to do what a CPU does, but that would be far too tedious, and you might not need that sort of performance.
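As a rough sketch of that division of labor, the flow might look something like the Python below. Note that FpgaAccelerator and its infer method are hypothetical placeholders for whatever driver or runtime a given board ships with, not a real Xilinx or AMD API.

    # Hypothetical CPU + FPGA pipeline: the CPU does general-purpose work
    # (capture, preprocessing) and hands one hot loop (inference) to an FPGA.
    # The accelerator class below is a made-up stand-in, not a real API.
    import numpy as np

    class FpgaAccelerator:
        """Stand-in for a handle to an FPGA loaded with an inference bitstream."""
        def infer(self, frame: np.ndarray) -> np.ndarray:
            # A real driver would DMA the frame to the device and read back results.
            return np.zeros(10)  # dummy class scores

    def capture_frame() -> np.ndarray:
        # CPU-side: grab a frame from a camera (dummy data here).
        return np.random.rand(224, 224, 3).astype(np.float32)

    def preprocess(frame: np.ndarray) -> np.ndarray:
        # CPU-side: normalize before handing off to the accelerator.
        return (frame - frame.mean()) / (frame.std() + 1e-6)

    fpga = FpgaAccelerator()
    scores = fpga.infer(preprocess(capture_frame()))  # FPGA runs the dedicated task
    print(scores.argmax())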
How will AMD incorporate Xilinx?
The main application I see is machine learning. While Nvidia GPUs are good for training machine-learning models, FPGAs are very good at inferencing. If AMD does it right, we should be able to train on AMD CDNA and deploy automatically on Xilinx.
https://imgur.com/a/48rUP3P
https://blogs.nvidia.com/blog/2020/09/03/what-is-mlops/
The picture above is from Nvidia. With Xilinx, AMD can not only cover the whole MLOps pipeline but also have models ready to deploy on a dedicated inference accelerator.
Another is automobiles: not just the autonomous side, but also safety-critical components such as acceleration and emergency braking. It's going to be either AMD with Xilinx or Nvidia with ARM.
Benefits
Higher Total Addressable Market
AMD's and Xilinx's technology stacks complement each other
Higher margins in data centres and more enterprise customers
If the deal goes through, Dr. Lisa will have to stay even longer with AMD to execute it, and I take that as a win. ;)
Risk
Machine learning is just hype.
No demand for edge computing.
Most importantly, a lack of software stack and support from AMD and Xilinx.
Can they execute it?
I am not sure about this, but I believe in Dr. Lisa and her team. If not, why are we even investing in AMD? That's all I can say.
submitted by jibaikia to AMD_Stock [link] [comments]


2020.10.07 04:57 manic_mangos I’m a baby wlw idk exactly yet and I think my friends the same way. Help plz?

ok so I gotta friend we both girls and like for the last 2 months or so we be hardcore simping for girls together. She like send me picture “Look how hot she is” and I’m like “so hot”. Or just gushing over female characters’ voices.
She’s never said her sexuality and I haven’t either and yeah. Like I know I’m not straight idk if I’m bi, pan, or just lesbian I just know girls <33. Also she sent meme about being gay and liking girls and yeah.
I lowkey wanna come out but idk how because idk what I am. we never rly stated that either of us liked girl we just like u know.
And like if I come out to her I wanna ask her how she feels. Like if she knows her sexuality but I dont want to force her to come out or anything.
So what I do? I just wanna know if my friend likes girls too. Like from context clues it’s pretty obvious she does but neither of us said anything concrete and yeye im kinda shit at inferencing I don’t wanna fuck up and come to the wrong conclusion.
Also how do i come out if i don’t know exactly what I am yet?
submitted by manic_mangos to AskLesbians [link] [comments]

