Singularity | Artificial Intelligence (AI), Technology & Futurology

14 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. It is essentially a futurology sublemmy centered on AI, but not limited to AI only.

Rules:
  1. Posts that don't follow the rules, and whose posters don't bring them into compliance after being told which rules they break, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I will wait a maximum of 2 days for the poster to comply with the rules before deciding to do this.
  2. No low-quality or wildly speculative posts.
  3. Keep posts on topic.
  4. Don't make posts whose main focus is a link to a paywalled article.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are high quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will create a meme-specific singularity sublemmy.
  7. Titles must include the date of the source in this format: dd.mm.yyyy (e.g. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts whose main focus is a link to a tweet. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to showcase new advancements in generative technology that are no older than 1 month.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

!auai@programming.dev (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub currently rely VERY heavily on info from r/singularity and other subreddits on reddit. I'm planning, at some point, to make a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and not rely on reddit as much. If you know any good sites, please DM me.

founded 2 years ago

Summary:

  • An MIT study provides evidence that AI language models may be capable of learning meaning, rather than just being "stochastic parrots".
  • The team trained a model using the Karel programming language and showed that it was capable of semantically representing the current and future states of a program.
  • The results of the study challenge the widely held view that language models merely represent superficial statistical patterns and syntax.

Fascinating new paper from Jin and Rinard at MIT that shows models might develop semantic understanding despite being trained on text: "We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs."
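
As I understand it, the evidence in the paper comes from probing experiments: a deliberately simple classifier is trained to read the program's semantic state (the Karel world) out of the model's hidden activations, and if even a linear probe can do that reliably, the states are plausibly encoded in the representations rather than in the probe. The snippet below is only a rough, hypothetical sketch of that probing idea in PyTorch, not the authors' code; the hidden size, number of states, and the random stand-in data are assumptions for illustration.

    # Hypothetical sketch of the linear-probing idea (not the authors' code):
    # can a simple linear classifier recover a program's semantic state from
    # the language model's hidden states? Random tensors stand in for real
    # hidden states and for the Karel program states.
    import torch
    import torch.nn as nn

    hidden_dim = 256    # assumed size of the LM's hidden states
    num_states = 10     # assumed number of discrete semantic states to decode
    n_examples = 2048

    # In the real setup these would come from the trained LM and from
    # actually executing the Karel programs.
    hidden_states = torch.randn(n_examples, hidden_dim)
    semantic_labels = torch.randint(0, num_states, (n_examples,))

    probe = nn.Linear(hidden_dim, num_states)   # kept deliberately simple
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(probe(hidden_states), semantic_labels)
        loss.backward()
        optimizer.step()

    accuracy = (probe(hidden_states).argmax(dim=-1) == semantic_labels).float().mean()
    print(f"probe accuracy: {accuracy:.2f}")
    # With real hidden states, accuracy well above chance on held-out examples
    # would suggest the model encodes the semantic state; the random tensors
    # here carry no such signal and only demonstrate the mechanics.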


Our vision at IBM is to scale quantum systems to a size where they’ll be capable of solving the world’s most challenging problems. To get there, we’ve set our sights on a key milestone: deploying a quantum-centric supercomputer powered by 100,000 qubits by 2033.

Read more: https://research.ibm.com/blog/100k-qubit-supercomputer


The new wave of AI systems, ChatGPT and its more powerful successors, exhibit extraordinary capabilities across a broad swath of domains. In light of this, we discuss whether artificial INTELLIGENCE has arrived.

Paper available here: https://arxiv.org/abs/2303.12712
Video recorded at MIT on March 22nd, 2023.


Google puts its foot on the accelerator, casting aside safety concerns to not only release a GPT-4-competitive model, PaLM 2, but also announce that it is already training Gemini, a GPT-5 competitor [likely on TPU v5 chips]. This is truly a major day in AI history, and I try to cover it all.

I'll show the benchmarks in which PaLM 2 (which now powers Bard) beats GPT-4, and detail how they use SmartGPT-like techniques to boost performance. Crazily enough, PaLM 2 beats even Google Translate, due in large part to the text it was trained on. We'll talk coding in Bard, translation, MMLU, Big Bench, and much more.


Paper: https://arxiv.org/abs/2306.07052

Abstract:

In this work, we empirically show that updating pretrained LMs (350M, 1.3B, 2.7B) with just a few steps of Gradient Ascent Post-training (GAP) on random, unlabeled text corpora enhances its zero-shot generalization capabilities across diverse NLP tasks. Specifically, we show that GAP can allow LMs to become comparable to 2-3x times larger LMs across 12 different NLP tasks. We also show that applying GAP on out-of-distribution corpora leads to the most reliable performance improvements. Our findings indicate that GAP can be a promising method for improving the generalization capability of LMs without any task-specific fine-tuning.

Pretty cool research. I'm wondering if this method could be applied in a more effective way, e.g. by introducing gradient ascent throughout the full training process (I'd be curious to see how different ratios of descent:ascent during training would affect convergence/generalization abilities). Will also be neat to see this applied to larger models.
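
For anyone who wants a concrete picture of what "a few steps of gradient ascent on random, unlabeled text" means mechanically, here is a minimal sketch in PyTorch + Hugging Face Transformers. It is not the authors' code: the model name, learning rate, step count and the toy "random text" lines are assumptions picked for illustration, and the only essential bit is that the loss is negated so the optimizer ascends instead of descends.

    # Hypothetical sketch of Gradient Ascent Post-training (GAP) as described
    # in the abstract: a pretrained LM gets a handful of optimizer steps that
    # MAXIMIZE the next-token loss on random, unlabeled text.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "facebook/opt-350m"   # assumed; the paper uses 350M-2.7B LMs
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)  # assumed lr

    random_texts = [
        "The quick brown fox jumps over the lazy dog.",
        "Stocks closed mixed on Tuesday amid rate worries.",
    ]  # stand-in for a random, unlabeled corpus

    ascent_steps = 4   # "just a few steps"; the exact number is an assumption

    model.train()
    for step in range(ascent_steps):
        text = random_texts[step % len(random_texts)]
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        loss = -outputs.loss          # flipped sign: ascend, don't descend
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # The descent:ascent ratio idea above would amount to negating the loss
    # on only a fraction of the training steps instead of all of them.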

Edit for Lemmies: You can read paper here: https://www.researchgate.net/publication/371505904_Gradient_Ascent_Post-training_Enhances_Language_Model_Generalization


In the project "Seeing the World through Your Eyes," researchers at the University of Maryland, College Park, show that reflections in the human eye can be used to reconstruct 3D scenes. This, they say, is an "underappreciated source of information about what the world around us looks like".

Summary:

  • Researchers at the University of Maryland have developed a NeRF-based method to reconstruct 3D scenes from reflections in the human eye. They believe this is an underappreciated source of information about the world around us.

  • The method uses the uniform geometry of the cornea in healthy adults to estimate the position and orientation of the eye. An important aspect of the work is the development of a corneal position optimization technique that helps improve the robustness of the method.

  • Tests have been performed with both synthetic eye images and real photographs, but only under laboratory conditions. Despite certain challenges, such as inaccuracies in the localization of the cornea and the low resolution of the images, the method is considered promising.
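
The geometric core of this is fairly intuitive: the cornea of a healthy adult is close to a sphere with a roughly known radius, so a camera ray that hits the eye can be reflected off that sphere to recover the direction the reflected light originally came from, and those "virtual" rays are what the radiance field is then trained on. Below is a rough numpy sketch of just that single reflection step; the radius value, eye position and example ray are placeholder assumptions, and the actual paper layers texture decomposition and corneal pose optimization on top of this.

    # Rough geometric sketch (not the authors' code): reflect a camera ray off
    # a spherical cornea model to recover the direction of the light that
    # produced the reflection. Radius, eye position and the ray are assumptions.
    import numpy as np

    CORNEA_RADIUS = 7.8e-3   # ~7.8 mm, a typical adult corneal radius

    def reflect_off_cornea(ray_origin, ray_dir, eye_center, radius=CORNEA_RADIUS):
        """Intersect a camera ray with a spherical cornea and return the hit
        point and the reflected (outgoing) ray direction, or (None, None)."""
        ray_dir = ray_dir / np.linalg.norm(ray_dir)
        oc = ray_origin - eye_center
        # Solve |oc + t*ray_dir|^2 = radius^2 for the nearest intersection t.
        b = 2.0 * np.dot(oc, ray_dir)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None, None                    # the ray misses the cornea
        t = (-b - np.sqrt(disc)) / 2.0
        hit = ray_origin + t * ray_dir
        normal = (hit - eye_center) / radius     # sphere normal at the hit
        reflected = ray_dir - 2.0 * np.dot(ray_dir, normal) * normal
        return hit, reflected

    # Toy example: camera at the origin, eye roughly half a metre away on +z.
    hit, out_dir = reflect_off_cornea(
        ray_origin=np.array([0.0, 0.0, 0.0]),
        ray_dir=np.array([0.001, 0.0, 1.0]),
        eye_center=np.array([0.0, 0.0, 0.5]),
    )
    print(hit, out_dir)
    # In a pipeline like the paper's, rays such as out_dir would feed the
    # radiance field, while eye_center itself is what the corneal pose
    # optimization refines.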

Source: https://the-decoder.com/better-watch-what-youre-looking-at-ai-can-reconstruct-it-in-3d/

Paper & more: https://world-from-eyes.github.io/


The UN report highlights the threat to information security from deepfakes created with the help of AI. Despite the potential of neural networks in solving global problems, the UN has expressed concerns about their use in generating fake images and videos, especially in conflict situations.

The UN is calling on all stakeholders to use AI responsibly, insisting that action be taken to ensure the technology is used safely and ethically, in line with international human rights. Digital platform owners are also encouraged to invest in content moderation systems and transparent reporting. The UN Secretary-General expressed hope that the community's joint efforts will address this problem at the upcoming summit in 2024.
