#demo2
worldsofzzt · 1 year
Photo
Tumblr media
Source “Demo II: ZZT-OOP Programming Language” by David Pinkston (1996) [DEMO2.ZZT] - "Commands" Play this world in your browser
6 notes · View notes
2meowmeow · 2 years
Photo
Tumblr media
onewe - studio we: recording 2
13 notes · View notes
mourninglamby · 5 months
Note
You have a vibe of someone with amazing music taste, so
Do you have any music recommendations?
i usually paste my fav songs in captions of art posts if u would like 2 peruse those. but other than princess ketamine demo2 ethel cain, tammy faye by nicole dollanganger is my latest obsession
8 notes · View notes
diarochalinas · 6 days
Text
alright ok
1 note · View note
ithisatanytime · 1 month
Audio
(J.Skxnny)
0 notes
newestmusic · 6 months
Audio
(via T Demo)
0 notes
jcmarchi · 6 months
Text
Zero-shot adaptive prompting of large language models
Tumblr media
Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team
Tumblr media
Recent advances in large language models (LLMs) are very promising as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in the few-shot setup, LLMs are presented with only a few question-answer demonstrations prior to being given a test question. Even more challenging is the zero-shot setup, where the LLM is directly prompted with the test question only.
Even though the few-shot setup has dramatically reduced the amount of data required to adapt a model for a specific use case, there are still cases where generating sample prompts can be challenging. Handcrafting even a small number of demos for the broad range of tasks covered by general-purpose models can be difficult or, for unseen tasks, impossible. For example, for tasks like summarization of long articles or those that require domain knowledge (e.g., medical question answering), it can be challenging to generate sample answers. In such situations, models with high zero-shot performance are useful since no manual prompt generation is required. However, zero-shot performance is typically weaker, as the LLM is presented with no guidance and is thus prone to spurious output.
In “Better Zero-shot Reasoning with Self-Adaptive Prompting”, published at ACL 2023, we propose Consistency-Based Self-Adaptive Prompting (COSP) to address this dilemma. COSP is a zero-shot automatic prompting method for reasoning problems that carefully selects and constructs pseudo-demonstrations for LLMs using only unlabeled samples (which are typically easy to obtain) and the models’ own predictions. With COSP, we largely close the performance gap between zero-shot and few-shot while retaining the desirable generality of zero-shot prompting. We follow this with “Universal Self-Adaptive Prompting” (USP), accepted at EMNLP 2023, in which we extend the idea to a wide range of general natural language understanding (NLU) and natural language generation (NLG) tasks and demonstrate its effectiveness.
Prompting LLMs with their own outputs
Knowing that LLMs benefit from demonstrations and have at least some zero-shot abilities, we wondered whether the model’s zero-shot outputs could serve as demonstrations for the model to prompt itself. The challenge is that zero-shot solutions are imperfect, and we risk giving LLMs poor-quality demonstrations, which could be worse than no demonstrations at all. Indeed, the figure below shows that adding a correct demonstration to a question can lead to a correct solution of the test question (Demo1 with question), whereas adding an incorrect demonstration (Demo2 or Demo3 with question) leads to incorrect answers. Therefore, we need to select reliable self-generated demonstrations.
Example inputs & outputs for reasoning tasks, illustrating the need for a carefully designed selection procedure for in-context demonstrations (MultiArith dataset & PaLM-62B model): (1) zero-shot chain-of-thought with no demo: correct logic but wrong answer; (2) correct demo (Demo1) and correct answer; (3) correct but repetitive demo (Demo2) leads to repetitive outputs; (4) erroneous demo (Demo3) leads to a wrong answer; but (5) combining Demo3 and Demo1 again leads to a correct answer.
COSP leverages a key observation about LLMs: confident and consistent predictions are more likely to be correct. This observation, of course, depends on how good the LLM's uncertainty estimate is. Luckily, in large models, previous work suggests that the uncertainty estimates are robust. Since measuring confidence requires only model predictions, not labels, we propose to use it as a zero-shot proxy for correctness. The high-confidence outputs and their inputs are then used as pseudo-demonstrations.
With this as our starting premise, we estimate the model’s confidence in its output based on its self-consistency and use this measure to select robust self-generated demonstrations. We ask the LLM the same question multiple times with zero-shot chain-of-thought (CoT) prompting. To guide the model to generate a range of possible rationales and final answers, we include randomness controlled by a “temperature” hyperparameter. In the extreme case, if the model is 100% certain, it should output identical final answers each time. We then compute the entropy of the answers to gauge the uncertainty: answers with high self-consistency, for which the LLM is more certain, are likely to be correct and will be selected.
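As a rough illustration, the answer-entropy measure described above can be sketched as follows (a minimal sketch; the function name is ours, not from the paper):

```python
from collections import Counter
import math

def answer_entropy(answers):
    """Entropy of the empirical distribution of sampled final answers.

    Low entropy means the model returns the same answer across samples
    (high self-consistency); zero entropy means perfect consistency.
    """
    counts = Counter(answers)
    total = len(answers)
    probs = [c / total for c in counts.values()]
    # Clamp to avoid returning -0.0 for perfectly consistent samples.
    return max(0.0, -sum(p * math.log(p) for p in probs))
```

A fully consistent set of samples such as ["42", "42", "42", "42"] scores 0.0, while a scattered set scores higher; lower entropy marks the candidates worth keeping as pseudo-demonstrations.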
Assuming that we are presented with a collection of unlabeled questions, the COSP method is:
1. Input each unlabeled question into an LLM, obtaining multiple rationales and answers by sampling the model multiple times. The most frequent answers are highlighted, followed by a score that measures the consistency of answers across the sampled outputs (higher is better). In addition to favoring more consistent answers, we also penalize repetition within a response (i.e., repeated words or phrases) and encourage diversity among the selected demonstrations. We encode the preference for consistent, non-repetitive and diverse outputs as a scoring function consisting of a weighted sum of the three scores, used to select the self-generated pseudo-demonstrations.
2. Prepend the selected pseudo-demonstrations to each test question, feed it to the LLM, and obtain the final predicted answer.
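The scoring and selection step can be sketched roughly as below. This is an illustrative sketch only: the function names and weights are ours, and the paper's diversity term across the selected set is reduced here to a simple top-k:

```python
def repetition_penalty(rationale):
    """Fraction of repeated words in a rationale (0.0 = no repetition)."""
    words = rationale.lower().split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

def cosp_score(entropy, rationale, w_ent=1.0, w_rep=1.0):
    """Lower is better: favor consistent (low-entropy) answers
    and non-repetitive rationales."""
    return w_ent * entropy + w_rep * repetition_penalty(rationale)

def select_demos(candidates, k=3):
    """candidates: list of (question, rationale, answer, entropy) tuples.
    Keep the k best-scoring candidates as pseudo-demonstrations.
    (COSP additionally encourages diversity among the selected set,
    which this plain top-k omits.)"""
    ranked = sorted(candidates, key=lambda c: cosp_score(c[3], c[1]))
    return ranked[:k]
```

Given two candidates, one with a repetitive rationale and inconsistent answers and one with a clean rationale and low entropy, `select_demos` keeps the latter.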
Tumblr media
Illustration of COSP: In Stage 1 (left), we run zero-shot CoT multiple times to generate a pool of demonstrations (each consisting of the question, generated rationale and prediction) and assign a score. In Stage 2 (right), we augment the current test question with pseudo-demos (blue boxes) and query the LLM again. A majority vote over outputs from both stages forms the final prediction.
COSP focuses on question-answering tasks with CoT prompting for which it is easy to measure self-consistency since the questions have unique correct answers. But this can be difficult for other tasks, such as open-ended question-answering or generative tasks that don’t have unique answers (e.g., text summarization). To address this limitation, we introduce USP in which we generalize our approach to other general NLP tasks:
Classification (CLS): Problems where we can compute the probability of each class using the neural network output logits of each class. In this way, we can measure the uncertainty without multiple sampling by computing the entropy of the logit distribution.
Short-form generation (SFG): Problems like question answering where we can use the same procedure mentioned above for COSP, but, if necessary, without the rationale-generating step.
Long-form generation (LFG): Problems like summarization and translation, where the questions are often open-ended and the outputs are unlikely to be identical, even if the LLM is certain. In this case, we use an overlap metric in which we compute the average of the pairwise ROUGE score between the different outputs to the same query.
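For the CLS and LFG cases above, the confidence measures can be sketched as follows. This is a minimal sketch under assumptions: `logit_entropy` assumes access to per-class logits, and `pairwise_overlap` substitutes a crude unigram-F1 overlap for the pairwise ROUGE used in the paper:

```python
import math
from itertools import combinations

def logit_entropy(logits):
    """CLS: entropy of softmax(logits); one forward pass, no sampling.
    Lower entropy = more confident prediction."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

def pairwise_overlap(outputs):
    """LFG: mean pairwise unigram F1 between sampled outputs -- a crude
    stand-in for the average pairwise ROUGE score. Higher = more
    consistent generations."""
    def f1(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        if not sa or not sb:
            return 0.0
        inter = len(sa & sb)
        if inter == 0:
            return 0.0
        p, r = inter / len(sb), inter / len(sa)
        return 2 * p * r / (p + r)

    pairs = list(combinations(outputs, 2))
    if not pairs:
        return 1.0  # a single output is trivially self-consistent
    return sum(f1(a, b) for a, b in pairs) / len(pairs)
```

Here a higher overlap plays the role that low answer entropy plays for short-form tasks: it flags queries on which the LLM's generations agree, and whose input-output pairs are therefore safer to reuse as pseudo-demonstrations.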
Tumblr media
Illustration of USP in exemplary tasks (classification, QA and text summarization). Similar to COSP, the LLM first generates predictions on an unlabeled dataset whose outputs are scored with logit entropy, consistency or alignment, depending on the task type, and pseudo-demonstrations are selected from these input-output pairs. In Stage 2, the test instances are augmented with pseudo-demos for prediction.
Depending on the task type, we compute the relevant confidence scores on the aforementioned set of unlabeled test samples. After scoring, similar to COSP, we pick the confident, diverse and less repetitive answers to form a model-generated pseudo-demonstration set. We finally query the LLM again in a few-shot format with these pseudo-demonstrations to obtain the final predictions on the entire test set.
Key Results
For COSP, we focus on a set of six arithmetic and commonsense reasoning problems, and we compare against 0-shot-CoT (i.e., “Let’s think step by step” only). We use self-consistency in all baselines so that they use roughly the same amount of computational resources as COSP. Compared across three LLMs, we see that zero-shot COSP significantly outperforms the standard zero-shot baseline.
USP improves significantly on 0-shot performance. “CLS” is an average of 15 classification tasks; “SFG” is the average of five short-form generation tasks; “LFG” is the average of two summarization tasks. “SFG (BBH)” is an average of all BIG-Bench Hard tasks, where each question is in SFG format.
For USP, we expand our analysis to a much wider range of tasks, including more than 25 classification, short-form generation, and long-form generation tasks. Using the state-of-the-art PaLM 2 models, we also test against the BIG-Bench Hard suite of tasks, where LLMs have previously underperformed compared to people. We show that in all cases, USP again outperforms the baselines and is competitive with prompting with golden examples.
Accuracy on BIG-Bench Hard tasks with PaLM 2-M (each line represents a task of the suite). The gain/loss of USP (green stars) over standard 0-shot (green triangles) is shown in percentages. “Human” refers to average human performance; “AutoCoT” and “Random demo” are baselines we compared against in the paper; and “3-shot” is the few-shot performance for three handcrafted demos in CoT format.
We also analyze the working mechanism of USP by validating the key observation above on the relation between confidence and correctness. We find that in the overwhelming majority of cases, USP picks confident predictions that are more likely to be better in all task types considered, as shown in the figure below.
Tumblr media
USP picks confident predictions that are more likely better. Ground-truth performance metrics against USP confidence scores in selected tasks in various task types (blue: CLS, orange: SFG, green: LFG) with PaLM-540B.
Conclusion
Zero-shot inference is a highly sought-after capability of modern LLMs, yet achieving success with it poses unique challenges. We propose COSP and USP, a family of versatile zero-shot automatic prompting techniques applicable to a wide range of tasks. We show large improvements over state-of-the-art baselines across numerous task and model combinations.
Acknowledgements
This work was conducted by Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Hanjun Dai, Julian Martin Eisenschlos, Sercan Ö. Arık, and Tomas Pfister. We would like to thank Jinsung Yoon and Xuezhi Wang for providing helpful reviews, and other colleagues at Google Cloud AI Research for their discussion and feedback.
0 notes
hospitalterrorizer · 8 months
Text
diary18
9/22-23/2023
listening to brainiac before bed.
i did 5 songs today but only 4 are like things i redid, i got a new short song down, exciting. have to wait a bit before i can listen to it and think about lyrics for it i think, it's fresh and i'll want to mess with the guitars too much i think.
i just spaced out listening to the guitars in 'this little piggy' i'm so fucked right now. thankfully the process of fixing everything will be over soon, hopefully, probably, and hopefully i'll fall in love with everything.
i should think about reverb a little, if the record needs it or not. i really avoid it cuz it feels cheap, not like it sounds bad, i mean i feel cheap using it cuz it used to be such a crutch for me i feel like, when i was making jungle and stuff.
thinking about cheapness, and jungle i guess, apparently people spend huge amounts of money getting synths to make "ps1 jungle" now, i didn't ever consider that. it's massively fucked up, all the music i've made i've never spent a dime on, making everything i've done super cheap, which i think is sort of cool. it's fucked up to me how people will spend huge amounts of money on equipment without ever really putting anything out with it. i just don't understand having really nice things and not really doing anything outside of putzing around with them. it's not like it would become okay if they did make anything though i guess, if someone spends 2000 dollars on a synth to imitate a sound that came from a sample pack that a japanese dude used to put a song together in a day, they are wasting their time. it's also fucked up because, at the time at least, in 2019 when i was making this stuff, it felt basically liberating to make that stuff because it let you be cheap, now it's gotten super particularized. i can't really touch breaks anymore, not because of these people who i never considered until today, but more because so much mediocre breakcore is being put out, and so many people love sewerslvt, it makes me feel dire, and i decided to just give up i guess.
i went looking for old mp3s of my jungle stuff so i might put it here or something and instead i found a bunch of other old stuff. reminded me of how much i miss having access to my cousin's audio interface for recording bass and guitar.
here's some stuff from when i was trying to do guided by voices in 2021.
these kinds of songs are so fun to make. i'd like to do it again some day. it's easy to hear how bad i am at guitar, but i really did love playing at that time. i still do, i just can't record it, i just get tabs/ ideas out and transpose them to midi, so i'm basically playing fucked up hardcore only right now. or not that fucked up. i love octave chords, and then sliding up a note so it's two notes right beside eachother. i think that's such a pretty sound.
anyways. the current state of jungle makes me sad. guitar music has always only ever made me happy, even when i was sad.
what does it mean, what does it mean.
well it's not entirely true anyways. some guitar music blows and it pisses me off like all the sewerslvt adherents. some electronic music has only ever made me happy. i love everything at the end of the day and it makes me sick i guess.
it's 6 am! i spent all this time reminiscing about my old music, but i have new music, i have to keep its heart alive too. i want to keep every heart i ever found alive. i'm an awful doctor!!
okay anyways #byebye!!
1 note · View note
worldsofzzt · 7 months
Photo
Tumblr media
Source “Demo II: ZZT-OOP Programming Language” by David Pinkston (1996) [DEMO2.ZZT] - “Title screen” Play This World Online
0 notes
2meowmeow · 1 year
Photo
Tumblr media
onewe - studio we: recording 2
2 notes · View notes
Text
HIERARCHICAL
The type of inheritance in which more than one derived class inherits the properties of the same base class is called hierarchical inheritance. There are multiple child classes and a single parent class. All the child classes will inherit the methods and fields present in the parent class.
Tumblr media
class demo {
    void add() {
        int a, b, c;
        a = 10; b = 20; c = a + b;
        System.out.println(c);
    }
}

class demo2 extends demo {
    void sub() {
        int a, b, c;
        a = 10; b = 20; c = a - b;
        System.out.println(c);
    }
}

class demo3 extends demo {
    void multi() {
        int a, b, c;
        a = 10; b = 20; c = a * b;
        System.out.println(c);
    }
}

class demo4 extends demo {
    void div() {
        int a, b, c;
        a = 10; b = 20; c = a / b;
        System.out.println(c);
    }
}

class main {
    public static void main(String args[]) {
        demo4 obj = new demo4();
        obj.add();   // add() is inherited from the base class demo
        obj.div();
        demo2 obj2 = new demo2();
        obj2.sub();
        demo3 obj3 = new demo3();
        obj3.multi();
    }
}
0 notes
obitv · 1 year
Text
started falling asleep listening to house of wolves demo2 and had like. the most surreal experience
0 notes
tutorialworld · 2 years
Text
Nagarro Java developer Interview Questions
1. Difference between ArrayList and HashSet
Some differences between ArrayList and HashSet are:
- ArrayList implements the List interface while HashSet implements the Set interface in Java.
- ArrayList can have duplicate values while HashSet doesn't allow any duplicate values.
- ArrayList maintains insertion order, meaning objects stay in the order in which they are inserted, while HashSet is an unordered collection that doesn't maintain any insertion order.
- ArrayList is backed by an array while HashSet is backed by a HashMap.
- ArrayList allows any number of null values while HashSet allows one null value.
- Syntax:
  ArrayList<String> list = new ArrayList<>();
  HashSet<String> set = new HashSet<>();

2. Using a lambda function, print a given List of Integers

import java.util.*;

public class Main {
    public static void main(String[] args) {
        List<Integer> arr = Arrays.asList(1, 2, 3, 4);
        arr.forEach(System.out::println);
        // Equivalent stream form (would print the list a second time):
        // arr.stream().forEach(s -> System.out.println(s));
    }
}

Output
1
2
3
4

3. new ArrayList(2); What does this mean?
It creates an ArrayList with an initial capacity of 2; the capacity grows automatically as elements are added.

4. Difference between Synchronization and Lock
Differences between lock and synchronized:
- with locks, you can release and acquire the locks in any order.
- with synchronized, you can release the locks only in the order they were acquired.

5. What is the Closeable interface?
A Closeable is a source or destination of data that needs to be closed. The close() method is invoked when we need to release resources that are being held by objects such as open files. The close() method of an AutoCloseable object is called automatically when exiting a try-with-resources block for which the object has been declared in the resource specification header. Closeable is defined in java.io and it is idempotent. Idempotent means calling the close() method more than once has no side effects.
Declaration

public interface Closeable extends AutoCloseable {
    public void close() throws IOException;
}

Implementing the Closeable interface

import java.io.*;

class Main {
    public static void main(String[] args) {
        try (Demo1 d1 = new Demo1(); Demo2 d2 = new Demo2()) {
            d1.show1();
            d2.show2();
        } catch (ArithmeticException e) {
            System.out.println(e);
        }
    }
}

// Resource1
class Demo1 implements Closeable {
    void show1() { System.out.println("inside show1"); }
    public void close() { System.out.println("close from demo1"); }
}

// Resource2
class Demo2 implements Closeable {
    void show2() { System.out.println("inside show2"); }
    public void close() { System.out.println("close from demo2"); }
}

Output
inside show1
inside show2
close from demo2
close from demo1

Note that resources are closed in the reverse order of their declaration.

6. What are Lambda Functions?
A lambda expression is a block of code that takes parameters and returns a value.

Syntax of a lambda expression:
A lambda expression with a single parameter and an expression:
parameter -> expression
A lambda expression with two parameters and an expression:
(parameter1, parameter2) -> expression
Expressions cannot contain variable declarations, assignments or statements such as if or for. For more complex operations, a code block can be used with curly braces. If the lambda expression needs to return a value, the code block should have a return statement.
(parameter1, parameter2) -> { code block }

import java.util.ArrayList;

public class Main {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        list.add(3);
        list.add(4);
        list.forEach((n) -> { System.out.println(n); });
    }
}

Output
1
2
3
4

7. What are @Component and @Service used for?

@Component
The @Component annotation is used across the application to mark beans as Spring's managed components. Spring's component scanning looks for @Component; @Service and @Repository are picked up as well because they are themselves annotated with @Component.
@Repository
The @Repository annotation is used to indicate that the class provides the mechanism for storage, retrieval, search, update and delete operations on objects.

@Service
We mark beans with @Service to indicate that they're holding the business logic. Besides being used in the service layer, there isn't any other special use for this annotation.

8. Why is the GET method in a REST API called idempotent?
An idempotent HTTP method is a method that can be invoked many times without different outcomes. It should not matter whether the method has been called only once, or ten times over. The result should always be the same.
- POST is NOT idempotent.
- GET, PUT, DELETE, HEAD, OPTIONS and TRACE are idempotent.
- A PATCH is not necessarily idempotent, although it can be.

9. Working of HashMap
HashMap contains an array of Node, and a Node is a class with the following fields:
- int hash
- K key
- V value
- Node next

10. What are the different types of methods in a REST API?
Some of the different REST API methods are GET, POST, PUT, PATCH and DELETE.

11. How to load an application.yml file in an application
To work with an application.yml file, create application.yml in the src/resources folder. Spring Boot will load and parse the yml file automatically and bind the values into classes annotated with @ConfigurationProperties.

12. /users/:id and /user/name={"Saurabh"} convert into API

13. What is Transaction Management in Spring?
A database transaction is a sequence of actions that are treated as a single unit of work. These actions should either complete entirely or take no effect at all. Transaction management is an important part of an RDBMS-oriented enterprise application to ensure data integrity and consistency. The concept of transactions can be described with the following four key properties, described as ACID:
- Atomicity − A transaction should be treated as a single unit of operation, which means either the entire sequence of operations is successful or unsuccessful.
- Consistency − This represents the consistency of the referential integrity of the database, unique primary keys in tables, etc.
- Isolation − There may be many transactions processing the same data set at the same time. Each transaction should be isolated from others to prevent data corruption.
- Durability − Once a transaction has completed, the results of this transaction have to be made permanent and cannot be erased from the database due to system failure.

14. How to load an application.yml file in an application?
Ans: @EnableConfigurationProperties

15. Backward compatibility of Java 1.8
Java versions are expected to be binary backwards-compatible. For example, JDK 8 can run code compiled by JDK 7 or JDK 6. It is common to see applications leverage this backwards compatibility by using components built by different Java versions. A Compatibility Guide (explained later) exists for each major release to provide special mention when something is not backwards compatible. Backwards compatibility means that you can run a Java 7 program on a Java 8 runtime, not the other way around.

16. Input- Output- How to do this?

import java.util.*;

class Main {
    public static void main(String[] args) {
        ArrayList<Integer> arr = new ArrayList<>(Arrays.asList(4, 2, 6, 8, 9, 1, 3, 4));
        Set<Integer> set = new HashSet<>();
        set.addAll(arr);
        arr.clear();
        arr.addAll(set);
        System.out.println(arr);
    }
}

Output

17. Default size of HashSet?
The default capacity of a HashSet is 16.

18. What is a Functional Interface, with an example?
A Functional Interface is an interface that contains only one abstract method. It can contain any number of default and static methods but can have only one abstract method. An abstract method is a method that does not have a body.
@FunctionalInterface
interface CustomFunctionalInterface {
    void display();
}

public class Main {
    public static void main(String[] args) {
        CustomFunctionalInterface functionalInterface = () -> {
            System.out.println("Functional Interface Example");
        };
        functionalInterface.display();
    }
}

Output
Functional Interface Example

19. Difference between Encapsulation and Data Hiding
Key Differences Between Data Hiding and Encapsulation
- Encapsulation deals with hiding the complexity of a program. On the other hand, data hiding deals with the security of data in a program.
- Encapsulation focuses on wrapping (encapsulating) the complex data in order to present a simpler view to the user. On the other hand, data hiding focuses on restricting the use of data, intending to assure data security.
- In encapsulation, data can be public or private but, in data hiding, data must be private only.
- Data hiding is a process as well as a technique, whereas encapsulation is a subprocess in data hiding.

20. What are generics?
Generics means parameterized types. The idea is to allow a type (Integer, String, … etc., and user-defined types) to be a parameter to methods, classes, and interfaces. Using generics, it is possible to create classes that work with different data types. An entity such as a class, interface, or method that operates on a parameterized type is a generic entity.

class GenericTest<T> {
    T obj;

    GenericTest(T obj) {
        this.obj = obj;
    }

    public T getObject() {
        return this.obj;
    }
}

class Main {
    public static void main(String[] args) {
        GenericTest<Integer> obj1 = new GenericTest<>(10);
        System.out.println(obj1.getObject());

        GenericTest<String> obj2 = new GenericTest<>("Generic Example");
        System.out.println(obj2.getObject());
    }
}

Output
10
Generic Example

Read the full article
0 notes
tonkiorganic · 2 years
Text
Polarr photo editor red eye
Tumblr media
POLARR PHOTO EDITOR RED EYE REGISTRATION
– Sync filters between all of your devices
– Get started with basic filters, grow with pro filters
– Create, customize and share your own filters
– Advanced suite of face-editing tools with smart detection
– Complete set of masking and local adjustment tools
– Dual lens effects and depth adjustments
– Custom overlay and complex blending modes

Pro photographers will look forward to our layer support, curve tools, local adjustments and so much more. Novices will appreciate that Polarr offers advanced auto-enhance tools and sophisticated filters to edit all the details of your photo. It doesn't matter if you're new to photography or a pro, Polarr has it all.

Polarr Photo Editor Pro v5.9.5 x64 Multilingual | 779 MB

Polarr is the only photo editor you need. With Polarr Photo Editor Pro, you can get a convenient photo editing tool by adding all kinds of filters and image effects.
Polarr Photo Editor Pro 5.9.5 + Portable free download. NQNIA Demo2, Tuesday, January 28, 2020. Your computer can become a photo editing lab thanks to Polarr Photo Editor. Thanks to Polarr Photo Editor you can get hold of a very complete and comfortable tool to edit photos by adding all sorts of filters and image effects. Polarr Photo Editor Pro v5.9.5 x64 Multilingual. 9/10 (48 votes) - Download Polarr Photo Editor Free.
Tumblr media
0 notes