Cal AI was removed from the App Store because it used Stripe (via Link) for subscriptions instead of Apple's in-app purchase system.

The payment sheet showed a "Pay another way" option routing to external billing, which violates Apple's App Review Guideline 3.1.1 for digital goods and subscriptions. Publicly highlighting the higher-ARPU setup drew Apple's attention, leading to the takedown. The app should return once the billing flow is fixed.
A peer-reviewed CMU study (ICSE 2026) found 6 million fake stars across 18,617 repositories using 301,000 accounts - with AI/LLM repos the largest non-malicious category.

The definitive account comes from a peer-reviewed study presented at ICSE 2026 by researchers at Carnegie Mellon University, North Carolina State University, and Socket. Their tool, StarScout, analyzed 20 terabytes of GitHub metadata - 6.7 billion events and 326 million stars from 2019 to 2024 - and identified approximately 6 million suspected fake stars distributed across 18,617 repositories by roughly 301,000 accounts.
The problem accelerated dramatically in 2024. By July, 16.66% of all repositories with 50 or more stars were involved in fake star campaigns - up from near-zero before 2022. The researchers' detection proved accurate: 90.42% of flagged repositories and 57.07% of flagged accounts had been deleted as of January 2025, confirming GitHub itself recognized these as illegitimate.
Source: Awesome Agents
You can now pay for Replit with UPI via Razorpay, alongside debit & credit cards.

Use Replit's Razorpay MCP to start accepting payments instantly.
Source: Replit
OpenAI CEO Sam Altman says, "To celebrate 3 million weekly Codex users, we are resetting usage limits. We will do this every million users up to 10 million."

Source: Sam Altman
In modern software development, creating and maintaining test cases can be time-consuming. AI test generators offer a way to streamline this process by automatically generating tests based on application behavior, requirements, or historical data.
These tools can analyze code, user flows, or APIs and produce relevant test scenarios, reducing manual effort and accelerating the testing cycle. This is especially useful in agile environments where features are added frequently, and maintaining traditional test suites becomes challenging.
AI test generators also help improve test coverage. By identifying edge cases or scenarios that humans might overlook, they ensure that critical paths and potential vulnerabilities are tested consistently. This leads to more reliable and robust applications.
Another advantage is adaptability. As applications evolve, AI-generated tests can update themselves based on changes in the system, helping teams maintain up-to-date validation without rewriting large portions of the test suite.
By integrating AI test generators into QA workflows, teams can reduce manual effort, enhance coverage, and accelerate delivery while maintaining high-quality standards.
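To make the edge-case idea concrete, here is a minimal, library-free sketch of one classic technique that automated test generators build on: boundary-value analysis, which derives test inputs from a declared valid range. All names (`generate_tests`, `clamp_percent`) are hypothetical illustrations, not any particular tool's API.

```python
import itertools

def boundary_values(lo, hi):
    """Boundary-value analysis: the inputs most likely to expose
    off-by-one errors at the edges of a declared valid range."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

def generate_tests(func, param_ranges):
    """Auto-generate test cases as (args, result-or-exception) pairs
    by combining boundary values for every parameter."""
    axes = [boundary_values(lo, hi) for (lo, hi) in param_ranges]
    cases = []
    for args in itertools.product(*axes):
        try:
            cases.append((args, func(*args)))
        except Exception as exc:
            cases.append((args, exc))
    return cases

# Hypothetical function under test: validate a percentage in [0, 100].
def clamp_percent(x):
    if not 0 <= x <= 100:
        raise ValueError("out of range")
    return x

# Six generated cases: -1 and 101 raise, the in-range values pass through.
cases = generate_tests(clamp_percent, [(0, 100)])
```

Real AI test generators go far beyond this (inferring ranges from code and usage data), but the generated suite has the same shape: inputs nobody bothered to write by hand.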
I've been exploring AI video generation tools and recently came across Seedance 2.0. Here's what makes it stand out:
Key Features:
• Text-to-video and image-to-video generation
• Precise motion control with keyframe editing
• High-quality 1080p output
• Multiple aspect ratios support
• Fast generation - videos up to 10 seconds
What impressed me most is the motion control capability. You can actually guide how elements move in the video, which gives much more creative control compared to other AI video tools.
The output quality is solid for marketing content, social media, and product demos. It's particularly useful if you need consistent visual styles across multiple videos.
Check it out: https://www.xmk.com/seedance/seedance-2-pro
Has anyone else tried this? Would love to hear your experiences with AI video generation tools.
Google has officially upgraded the storage on its AI Pro (formerly AI Premium) plan from 2TB to 5TB at no additional cost. The change was announced on April 1, 2026, and is rolling out globally to all subscribers.

Source: shimrit ben-yair
Razorpay's biometric authentication delivers up to 95% transaction success rates. For businesses, that's a big relief on the last mile of checkout.

Key Details of Razorpay Passkey:
This initiative is aimed at reducing the ~35% of online card transactions that fail due to OTP issues, offering a seamless and secure alternative for digital payments in India.
Source: Razorpay
GLM-5V-Turbo offers native multimodal coding, balancing visual and programming capabilities, with deep adaptation for Claude Code and Claw scenarios.

The model can understand design drafts, screenshots, and web interfaces to generate complete, runnable code, truly achieving the goal of "seeing the screen and writing the code."
GLM-5V-Turbo leads in benchmarks for design draft reconstruction, visual code generation, multimodal retrieval and QA, and visual exploration. It also performs exceptionally well on AndroidWorld and WebVoyager, which measure control capabilities in real GUI environments.
Regarding pure-text coding, GLM-5V-Turbo maintains stable performance across three core benchmarks of CC-Bench-V2 (Backend, Frontend, and Repo Exploration), proving that the introduction of visual capabilities does not degrade text-based reasoning.

The leading performance of GLM-5V-Turbo stems from systematic upgrades across four levels:
Source: Z AI
When applications evolve, even small UI changes can unintentionally affect layouts, styles, or user experience. This is where visual regression testing becomes valuable. It focuses on detecting changes in the appearance of an application after updates, ensuring that the interface remains consistent and user-friendly.
Instead of checking functionality alone, this approach compares visual elements—such as layouts, colors, fonts, and spacing—before and after changes. By identifying differences, teams can quickly spot issues like misaligned components, broken layouts, or unintended design changes that might otherwise go unnoticed.
In practice, teams capture baseline snapshots of the interface and compare them with new versions after updates. These comparisons can be done manually or with automated tools that highlight even minor visual differences. This is especially useful for applications with complex user interfaces or frequent design updates.
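The snapshot-comparison step above can be sketched in a few lines. This is a stdlib-only illustration, not any particular tool's implementation: real visual regression tools (Percy, Playwright's screenshot assertions, pixelmatch, and the like) diff rendered PNGs with perceptual tolerances, but the core loop is the same pixel-by-pixel comparison shown here.

```python
def visual_diff(baseline, current, tolerance=0):
    """Compare two equally sized RGB pixel grids (lists of rows of
    (r, g, b) tuples) and return the (x, y) coordinates where any
    channel changed by more than `tolerance` -- the heart of a
    baseline-vs-new snapshot comparison."""
    changed = []
    for y, (row_a, row_b) in enumerate(zip(baseline, current)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if any(abs(ca - cb) > tolerance for ca, cb in zip(a, b)):
                changed.append((x, y))
    return changed

# Baseline snapshot vs. a build where one pixel shifted colour.
WHITE, RED = (255, 255, 255), (255, 0, 0)
baseline = [[WHITE, WHITE], [WHITE, WHITE]]
current  = [[WHITE, WHITE], [RED,   WHITE]]

print(visual_diff(baseline, current))  # [(0, 1)]
```

The `tolerance` parameter is what keeps such tools practical: it lets anti-aliasing and sub-pixel rendering differences pass while genuine layout or colour regressions still flag.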
Visual regression testing is often integrated into development workflows alongside other testing methods. It adds an extra layer of validation by ensuring that the product not only works correctly but also looks as intended across different devices and environments.
By incorporating visual regression testing into regular workflows, teams can maintain design consistency, catch UI issues early, and deliver a more polished and reliable user experience.
2026 has already seen many layoffs across tech companies. Below is a list of the major tech layoffs from the past three months.
Sanity testing plays a crucial role in modern software development by ensuring that recent code changes, bug fixes, or minor enhancements do not introduce new issues into an application. In fast-paced development environments where continuous integration and frequent deployments are common, sanity testing provides a quick and focused method to validate that specific functionality works as expected before moving to more extensive testing phases. Unlike full regression testing, which evaluates the entire system, sanity testing concentrates only on the modified components, saving both time and effort while maintaining software stability.
This type of testing is typically performed after minor updates, patches, or bug fixes when developers need quick confirmation that the recent changes did not negatively impact existing functionality. It helps teams detect critical issues early, preventing unstable builds from progressing further in the Software Development Life Cycle (SDLC). Because sanity testing is limited in scope and quick to execute, it allows development teams to maintain productivity without sacrificing quality.
Sanity testing is especially valuable in agile and DevOps environments where rapid releases are frequent. It provides immediate feedback, reduces testing cycles, and improves collaboration between developers and QA teams. By focusing on affected modules, sanity testing minimizes unnecessary testing efforts and helps maintain release timelines. Additionally, it supports continuous delivery pipelines by ensuring that builds remain stable before deployment.
Modern tools such as Selenium, Postman, Cypress, Jenkins, and TestNG are commonly used to automate or assist sanity testing workflows. These tools help teams quickly validate UI components, APIs, and backend services after minor changes.
Overall, sanity testing acts as a safety checkpoint in the development process. By quickly validating recent updates and identifying potential risks early, teams can deliver reliable software faster and with greater confidence. Integrating sanity testing into the development workflow ultimately improves software quality, reduces debugging costs, and enhances the overall user experience.
Large Language Models (LLMs) have significantly transformed the way developers write, debug, and maintain software in 2026. What once started as simple autocomplete suggestions has now evolved into intelligent AI-powered coding assistants capable of understanding complex codebases, generating production-ready code, and helping developers solve challenging programming problems faster. These advanced coding LLMs are becoming an essential part of modern development workflows, improving productivity, reducing errors, and accelerating software delivery.
Modern coding LLMs can assist developers in multiple ways, including generating code snippets, debugging errors, explaining unfamiliar code, refactoring legacy systems, and even creating technical documentation automatically. With the ability to understand natural language prompts, developers can now describe what they want to build, and AI models can generate structured, clean, and optimized code across multiple programming languages. This makes LLMs especially useful for startups, enterprise teams, and individual developers looking to increase efficiency and reduce development time.
Choosing the best LLM for coding in 2026 depends on several important factors such as accuracy, context window size, supported programming languages, integration with development tools, pricing, and privacy requirements. Proprietary models like GPT-5, Claude, and Gemini are known for their strong reasoning abilities, large context windows, and enterprise-grade integrations. These models often deliver highly accurate results and are widely used by professional development teams.
On the other hand, open-source alternatives such as DeepSeek-Coder, Code Llama, StarCoder, and Mistral Codestral are gaining popularity due to their flexibility, cost-effectiveness, and self-hosting capabilities. These models allow developers to maintain privacy, customize workflows, and avoid vendor lock-in.
As AI continues to evolve, coding LLMs are becoming powerful AI pair programmers that help developers build better software faster. This guide explores the best LLMs for coding in 2026 and helps developers choose the right AI coding assistant based on their specific needs and development workflows.
Mistral AI has released Voxtral TTS, a high-performance, 4-billion-parameter open-weights text-to-speech model that competes directly with proprietary tools like ElevenLabs.

It runs on 3 GB of RAM locally and is free. It supports nine languages, offers 3-second voice cloning with high similarity, and delivers sub-second, low-latency performance suitable for on-device applications.
This release is part of Mistral's strategy to move into audio and provide open-source alternatives to premium voice AI services.
Source: Mistral