#CodeForce
Explore tagged Tumblr posts
savingcontent · 3 months ago
Text
Atuuk and Wekkarus factions arrive in fourth expansion for Distant Worlds 2 today
0 notes
spacepetroldelta · 9 days ago
Text
i have tasted men in tech, i have tasted men in design
and i would pick non-tech & non-design men any time
3 notes · View notes
girlwithmanyproblems · 2 years ago
Text
Tumblr media
cf contest problem titles are so dramatic
8 notes · View notes
excludedmiddle · 2 years ago
Text
Another one
This is another cute problem - actually easier than the previous one imo:
We have an array and a bunch of additional numbers we could maybe add to the array - how do we add them in a way that avoids increasing the length of the longest increasing subsequence as much as possible?
This is an example of a codeforces problem that is very much a math problem at heart. You are looking to translate your requirements into some property that gets you what you want.
I don't have a lot to say about the solving process here because my intuition found the right answer pretty quickly: first, you sort your set of additional numbers b, because you want to add them largest to smallest. Then, before each next number of a, you add the largest remaining number in b while it's >= that next number of a. If you have numbers left over at the end, add them all into the result from largest to smallest.
I think why this is rated 1700 despite having such a simple solution is that it's a little tricky to prove this intuition. You have to first imagine the largest increasing subsequence(s) preexisting in a (noting that it must still exist in the result), then argue that if you add larger numbers before that you cannot possibly make it longer. Similarly, adding the smallest numbers in reverse order at the end also cannot be worse, because they're all smaller than every element of a and in reverse order, so at best one of them can take the place of the last element of a.
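One way to sanity-check that intuition on small cases is to compare LIS lengths before and after insertion. Here's a generic O(n log n) LIS helper in Rust (my own sketch for verification, not part of the contest solution):

```rust
// Length of the longest strictly increasing subsequence in O(n log n).
// tails[k] holds the smallest possible tail of an increasing
// subsequence of length k + 1; each element either extends the best
// subsequence or improves a tail via binary search.
fn lis_len(a: &[i64]) -> usize {
    let mut tails: Vec<i64> = Vec::new();
    for &x in a {
        // first position whose tail is >= x (strict LIS)
        let pos = tails.partition_point(|&t| t < x);
        if pos == tails.len() {
            tails.push(x);
        } else {
            tails[pos] = x;
        }
    }
    tails.len()
}
```

Running this on the original a and on a candidate answer lets you check that the answer's LIS is no longer than a's.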
Again let's look at Egor's implementation:
Tumblr media
Particularly instructive here is his use of while let. In my implementation I used a more traditional Pythonic while loop ported over to Rust, but Rust very elegantly lets you unwrap the last element of b while also checking if it exists via pattern matching.
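To make the `while let` point concrete, here's my own reconstruction of the merge in Rust (array names a and b follow the problem statement; this is an illustrative sketch of the idea, not Egor's actual code):

```rust
// Merge per the greedy argument above: sort b ascending so b.pop()
// yields the largest remaining element, insert each such element
// before the next element of a whenever it's >= that element, and
// append the (smaller) leftovers largest-to-smallest at the end.
fn merge(a: &[i64], mut b: Vec<i64>) -> Vec<i64> {
    b.sort();
    let mut result = Vec::with_capacity(a.len() + b.len());
    for &x in a {
        // `while let` unwraps the last element of b while also
        // checking that it exists, via pattern matching
        while let Some(&top) = b.last() {
            if top >= x {
                result.push(top);
                b.pop();
            } else {
                break;
            }
        }
        result.push(x);
    }
    // leftovers, largest to smallest
    while let Some(top) = b.pop() {
        result.push(top);
    }
    result
}
```

The two `while let` loops replace the explicit "check non-empty, then index the last element" dance a Pythonic port would need.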
5 notes · View notes
truebusiness · 10 months ago
Text
The Thrill of Competitive Programming: A Deep Dive into Codeforces, AtCoder, and More
In the realm of software development, competitive programming stands out as a thrilling and intellectually stimulating activity. It challenges programmers to solve complex problems under tight time constraints, fostering skills that are invaluable in both academic and professional settings. This blog delves into the world of competitive programming, exploring its significance, benefits, and the…
0 notes
amtva · 2 months ago
Text
Stannis and Daenerys playing a Nim-like game with Westeros cities.
Inspired by an old codeforces task. Commissioned by @lopata-four (deviantart). Previously published on pinterest.
Tumblr media Tumblr media Tumblr media
93 notes · View notes
greenlightllc · 5 months ago
Text
Binary Circuit - OpenAI’s o3 Breakthrough Shows Technology is Accelerating Again
OpenAI’s o3 model outperforms most humans in math and programming with its reasoning. It scored an impressive 88% on the advanced reasoning ARC-AGI benchmark, a big improvement from 5% earlier this year and 32% in September. Codeforces placed o3 175th overall, meaning it outperforms nearly all human competitive programmers.
Tumblr media
Image Source: ARC Price via X.com
Why Does It Matter?
Most notable is the shorter development cycle: o3 launched just months after o1, while previous model generations required 18-24 months.
Unlike conventional models, which improve only through months of additional training, o3 gains capability by scaling up inference-time reasoning. This means discoveries might happen in weeks, not years, and this rapid pace of advancement will require organizations to reassess their innovation timeframes.
Impact on Businesses
Many complex fields can move faster after the o3 breakthrough:
Scientific Research: Accelerate protein folding, particle physics, and cosmic research
Engineering: Quick prototyping and problem-solving
Math: Access to previously inaccessible theoretical domains
Software Development: Enterprise-grade code automation
An Enterprise Competitive Playbook: Implement AI reasoning tools in R&D pipelines quickly. In the coming years, organizations will have to restructure tech teams around AI, and AI-first R&D will become mainstream.
Feel free to visit our website to learn more about Binary Circuit/Green Light LLC and explore our innovative solutions:
🌐 www.greenlightllc.us
2 notes · View notes
raybittechnologies · 5 months ago
Text
DeepSeek-R1: A New Era in AI Reasoning
DeepSeek, a Chinese AI lab with a track record of groundbreaking innovations, is back in the spotlight. Fresh off the success of its free, open-source DeepSeek-V3 model, the lab has now released DeepSeek-R1, a very strong reasoning LLM. What sets DeepSeek-R1 apart in the AI landscape is not just its performance but its cost: it is remarkably cheap and accessible.
Tumblr media
What is DeepSeek-R1?
DeepSeek-R1 is a next-generation AI model built specifically for complex reasoning tasks. It uses a mixture-of-experts architecture and shows human-like problem-solving ability, rivaling OpenAI's o1 model in mathematics, coding, and general knowledge, among other areas. A key highlight is its development approach: unlike existing models, which rely on supervised fine-tuning alone, DeepSeek-R1 applies reinforcement learning from the outset. Its base version, DeepSeek-R1-Zero, was trained entirely with RL. This removes much of the need for labeled data and allows the model to develop abilities like the following:
Self-verification: the ability to cross-check its own output for correctness.
Reflection: learning from and improving on its own mistakes.
Chain-of-thought (CoT) reasoning: solving multi-step problems logically and efficiently.
This proof of concept shows that end-to-end RL alone is enough to produce genuine reasoning capabilities in AI.
Performance Benchmarks
DeepSeek-R1 has demonstrated its strength across multiple benchmarks, at times beating the competition:
1. Mathematics
AIME 2024: scored 79.8% (Pass@1), on par with OpenAI o1.
MATH-500: achieved 93% accuracy, setting a new standard for mathematical problem solving.
2. Coding
Codeforces benchmark: ranked in the 96.3rd percentile of human participants, demonstrating expert-level coding ability.
3. General Knowledge
MMLU: 90.8% accuracy, demonstrating broad general knowledge.
GPQA Diamond: a 71.5% success rate, leading the field in complex question answering.
4. Writing and Question Answering
AlpacaEval 2.0: an 87.6% win rate, indicating a sophisticated ability to comprehend and answer questions.
Use Cases of DeepSeek-R1
DeepSeek-R1 lends itself to many sectors and fields:
1. Education and Tutoring
With its strong reasoning skills, DeepSeek-R1 can power educational sites and tutoring software, helping students work through tough mathematical and logical problems.
2. Software Development
Its strong performance on coding benchmarks makes the model a robust code-generation assistant for debugging and optimization tasks, saving developers time and boosting productivity.
3. Research and Academia
DeepSeek-R1 shines in long-context understanding and question answering, making it useful to researchers and academics for analysis, hypothesis testing, and literature review.
4. Model Development
DeepSeek-R1 can generate high-quality reasoning data for training smaller distilled models. These distilled models retain advanced reasoning capabilities while being far less computationally intensive, opening the door for smaller organizations with limited resources.
Revolutionary Training Pipeline
One of DeepSeek's innovations is its structured and efficient training pipeline:
1. Two RL stages, focused on discovering improved reasoning patterns and aligning the model's outputs with human preferences.
2. Two SFT stages, which seed the model's basic reasoning and non-reasoning capabilities, making it versatile and well-rounded.
This approach makes DeepSeek-R1 outperform existing models, especially in reason-based tasks, while still being cost-effective.
Open Source: Democratizing AI
As part of its commitment to collaboration and transparency, DeepSeek has made DeepSeek-R1 open source. Researchers and developers can inspect, modify, or deploy the model for their own needs, and an API makes it easy to incorporate into applications.
Why DeepSeek-R1 is a Game-Changer
DeepSeek-R1 is more than just an AI model; it’s a step forward in the development of AI reasoning. It offers performance, cost-effectiveness, and scalability to change the world and democratize access to advanced AI tools. As a coding assistant for developers, a reliable tutoring tool for educators, or a powerful analytical tool for researchers, DeepSeek-R1 is for everyone. DeepSeek-R1, with its pioneering approach and remarkable results, has set a new standard for AI innovation in the pursuit of a more intelligent and accessible future.
4 notes · View notes
Text
DeepSeek-R1: A Technical Analysis and Market Impact Assessment
The release of DeepSeek-R1 represents a watershed moment in artificial intelligence development, challenging the dominance of closed-source commercial models while demonstrating comparable or superior performance across key benchmarks. This analysis examines the technical architecture, performance metrics, market implications, and broader impact of this groundbreaking model.

Technical Architecture and Innovation

Foundation and Evolution
DeepSeek-R1 builds upon the DeepSeek V3 mixture-of-experts architecture, representing a significant evolution from its predecessor, DeepSeek-R1-Zero. The model’s development path illustrates a sophisticated approach to combining multiple training methodologies, resulting in a system that rivals or exceeds the capabilities of leading commercial alternatives.

Core Architectural Components
- Base Architecture: leverages DeepSeek V3’s mixture-of-experts framework
- Training Pipeline: implements a hybrid approach combining reinforcement learning with supervised fine-tuning
- Model Distillation: successfully incorporates distilled versions of Llama and Qwen models
- Scaling Strategy: employs dynamic resource allocation for optimal performance

Training Methodology Innovation
The training process represents a notable departure from traditional approaches, implementing a multi-stage pipeline that addresses common limitations in language model development:

Initial Development Phase (R1-Zero)
- Pure reinforcement learning implementation
- Self-evolution through trial-and-error mechanisms
- Demonstrated significant performance improvements: the AIME 2024 score increased from 15.6% to 71.0%

Enhanced Training Phase (R1)
- Integration of cold-start data for initial fine-tuning
- Reasoning-oriented reinforcement learning
- Implementation of rejection sampling for SFT data creation
- Incorporation of DeepSeek-V3’s supervised data
- Comprehensive prompt scenario training

Performance Analysis

Benchmark Comparisons
DeepSeek-R1’s performance across standard benchmarks demonstrates its competitive positioning:

Mathematics and Reasoning

Benchmark    DeepSeek-R1   OpenAI o1   Delta
AIME 2024    79.8%         79.2%       +0.6%
MATH-500     97.3%         96.4%       +0.9%
MMLU         90.8%         91.8%       -1.0%

Programming Proficiency
- Codeforces rating: 2,029 (96.3rd percentile)
- Exceeds the average human programmer’s performance
- Comparable to OpenAI o1’s 96.6% benchmark

Cost-Effectiveness Analysis
The model’s pricing structure represents a significant market disruption:

Token Pricing Comparison

Token Type   DeepSeek-R1   OpenAI o1   Cost Reduction
Input        $0.55/M       $15/M       96.3%
Output       $2.19/M       $60/M       96.3%

Market Implications and Industry Impact

Democratization of AI Technology
The open-source release of DeepSeek-R1 under an MIT license represents a significant shift in AI accessibility:

Academic and Research Impact
- Enables broader research participation
- Facilitates reproducibility
…
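As a quick sanity check, the quoted cost-reduction percentages follow directly from the per-million-token prices. A throwaway Rust sketch:

```rust
// Percentage cost reduction implied by two per-million-token prices.
fn reduction(deepseek_price: f64, openai_price: f64) -> f64 {
    (1.0 - deepseek_price / openai_price) * 100.0
}

// Input tokens: $0.55/M vs $15/M; output tokens: $2.19/M vs $60/M.
// Both work out to roughly 96.3%, matching the quoted figures.
```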
4 notes · View notes
sankiago · 11 months ago
Text
Productivity Challenge 3/14
Yesterday, I couldn't complete my 5 hours because I started gossiping with my roommate (gossip is my biggest weakness ngl), and I was super exhausted. Today, there are no excuses—I’ve got to make these 5 hours count:
Build Real Numbers - Set Theory (2h)
Solve Problems 3, 4, and 5 from the Discrete Distributions Problem Set - Probability (1h)
Finish Array e-Lectures and Start Sorting (Algorithms) - 1h
Try the Unsolved Problem and Move on to Easier Codeforces Problems (Competitive Programming) - 1h
Almost all of these are yesterday’s tasks, but today they're gonna be completed.
PS: I'm seriously thinking of creating another blr to post book/song reviews and publish non-log content
2 notes · View notes
savingcontent · 8 months ago
Text
Witness the Return of the Shakturi in new DLC for real-time 4X grand space strategy game Distant Worlds 2 today
0 notes
spacepetroldelta · 3 months ago
Text
im gonna win the next codeforces contest it's a div 4 so im hopeful of an impressive performance
1 note · View note
girlwithmanyproblems · 3 months ago
Text
Hey everyone! 👋
I'm inviting you to join a new coding community where we encourage each other to solve at least one problem every day and share our progress. Whether you're working on LeetCode, Codeforces, or any other platform, feel free to post about your daily challenges. Even a simple screenshot of your solution is perfect 📸.
This isn't about following a specific roadmap or set of problems—it's about building discipline and celebrating each other's efforts. Remember, you're smart 🤓—consistency is key.
If you're interested, come be a part of this community and let's grow together through daily problem-solving! 💻✨
0 notes
excludedmiddle · 2 years ago
Text
Back on the horse (a little)
Let's write up a codeforces problem!
A couple of important nuances - this problem revolves around fixed points - namely, array elements where the index of the element matches the value of the element (indexing from 1, which codeforces inexplicably loves to do).
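As a quick illustration of that 1-based convention (a hypothetical helper, not code from the actual solution):

```rust
// Collect the fixed points of an array under Codeforces' 1-based
// indexing: position i is a fixed point when a[i] == i.
fn fixed_points(a: &[usize]) -> Vec<usize> {
    a.iter()
        .enumerate()
        // enumerate() is 0-based, so compare against i + 1
        .filter(|&(i, &v)| v == i + 1)
        .map(|(i, _)| i + 1)
        .collect()
}
```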
My first instinct is that we can conceptualize this as a directed graph. Each permutation of the array is a different node on the graph, and each fixed point transformation is a directed edge from the previous state to the shifted state.
In this model, what we're looking for is a path through the graph with k edges. This could be a cycle or it could be a non-cyclic path, but there are only n nodes in the graph (they are shifts, not true permutations) so either way our answer is bounded.
This alone can get you the solution - construct the graph, find the connections, run DFS - but that solution is slow and annoying to implement.
The missing insight that gets you to a nice solution is that after such a transformation, the fixed point that you used to transform the array must be the last element, simply because of how fixed points work. Then all we have to do is check that element.
Egor has a very nice implementation here:
Tumblr media
4 notes · View notes
extraterrestrial-dust · 2 years ago
Text
9th of July, 2023 [Day 5]
Watched a part of SQL, Models, and Migrations lecture from CS50’s Web Programming with Python and JavaScript course.
Solved 2 problems on Codeforces
The SQL lecture is kinda fun and easy (so far). I was already familiar with SQL and relational databases from uni but didn't really have real practice with them, and terms like "migrations" and "models" weren't new to me either, since I ran into them in the Software Engineering project I worked on last semester.
As for problem-solving, I started getting the hang of recursion, still struggling a bit maybe, but I'm better now at visualizing the solutions I write using recursive functions.
2 notes · View notes
olehswiftcomstock · 6 days ago
Text
Elizabeth Comstock Horodnitskii, if we had a working OKX and I had at least several billion USD in my bank accounts, within reach of a debit card, I would write several CodeForces rounds with you right now
0 notes