Sora Died: OpenAI Closes Video Generator and Midjourney Dominates the Market
On March 24, 2026, OpenAI officially shut down Sora, its artificial intelligence video generator that had been announced as revolutionary in February 2024. The shutdown was no surprise to those who follow the market closely -- the signs had been there for months. But the impact on the AI media generation ecosystem was immediate and significant.
While OpenAI retreated, other players advanced. Midjourney released its first video model (V1) in web beta, Google DeepMind responded with Veo 3.1 at 1080p, and Flux.1 -- an open source model with 12 billion parameters -- became the most popular image-generation model of the year. The generative media landscape in 2026 is radically different from what anyone predicted.
1. Sora's end: what happened and why
Sora was announced by OpenAI in February 2024 with impressive demos: photorealistic videos of up to 60 seconds generated from text prompts. The expectation was that it would revolutionize video production. The problem is that the gap between an impressive demo and a viable product for production turned out to be greater than OpenAI anticipated.
The technical problems
- Prohibitive computational cost: every second of high-quality video cost hundreds of dollars in GPU compute. For a company already burning billions on infrastructure for language models, Sora was a drain on resources
- Temporal inconsistency: despite the polished demos, videos generated in real production showed artifacts -- objects that changed shape between frames, inconsistent shadows, impossible physics in scenes with complex movement
- Generation speed: generating 10 seconds of video could take 20-30 minutes, which was impractical for professional workflows
- Legal issues: lawsuits from Hollywood studios over training-data copyright created legal uncertainty
The strategic decision
Internally, OpenAI faced a choice: continue investing billions in Sora or redirect those resources to GPT-5.4 and its autonomous agent platform. The choice was clear. The market for language models and agents generates immediate revenue. The video generation market, at that time, was not.
Sam Altman acknowledged in an interview after the shutdown: "We learned a lot from Sora, but the timing wasn't right to invest at this scale in video generation when our language models and agents are driving real growth for the company."
2. The $1 billion partnership with Disney that didn't work out
In 2025, OpenAI and Disney announced a $1 billion partnership to use Sora in Disney content production -- from scene previews to background generation and auxiliary visual effects. The partnership was the seal of validation that Sora needed.
In practice, the partnership ran into problems that no one anticipated:
- Insufficient creative control: Disney directors and artists demanded pixel-by-pixel control over each frame. Sora generated results that required so much manual touch-up that the productivity gain was minimal
- Consistency between scenes: maintaining the same character, lighting, and visual style across multiple generated scenes was extremely difficult. Each generation was essentially independent
- Intellectual property: Disney was uncomfortable with the possibility that Sora's training data included copyrighted material from other studios
- Actual vs. projected cost: the cost per minute of production-quality video was 3-5x higher than projected in the original agreement
The partnership was officially ended along with Sora. Disney has redirected its investments in generative AI to smaller-scale, more controllable internal tools.
Market lesson: Sora and the Disney partnership showed that, in professional video production, "almost good" is not enough. Studios need full control over every visual aspect, and generative models don't yet offer that level of precision.
3. Midjourney Video V1: 25x cheaper and aesthetically superior
While Sora died, Midjourney did something few expected: it released its first video model. Midjourney Video V1, available in beta on the web interface for Pro plan subscribers, generates short clips of up to 10 seconds with the same aesthetic quality that has made Midjourney a leader in image generation.
What makes Midjourney Video V1 different
- Cinematic aesthetics: Midjourney has always been recognized for generating images of superior artistic quality. That same aesthetic sensibility carries over to video -- lighting, composition, and color grading are naturally cinematic
- 25x lower cost: generating 5 seconds of video on Midjourney V1 costs a fraction of what it did on Sora, making it viable for independent content creators and small agencies
- Simple web interface: no API or terminal required. Describe what you want, adjust basic parameters, and generate. This simplicity of use is a real competitive advantage
- Style consistency: the model maintains a consistent visual style across frames with far fewer artifacts than competitors
Limitations
V1 is still limited in duration (10 seconds maximum), does not support native audio, and offers only basic control over camera movements. For professional long-form production it does not yet replace traditional workflows. But for reels, animated thumbnails, visual concepts, and social content, it is already unbeatable on cost-quality.
4. Google Veo 3.1: 1080p and professional control
Google DeepMind did not stand still. Veo 3.1, launched in March 2026, is the company's answer to the video generation market. Unlike Midjourney, which focuses on aesthetics, Veo 3.1 focuses on resolution and control.
Technical specifications
- Native 1080p resolution: Veo 3.1 generates videos in Full HD without upscaling, with real clarity in each frame
- Duration up to 30 seconds: triple that of Midjourney V1, allowing more complex scenes with a narrative
- Camera control: pan, tilt, zoom, dolly, and tracking shots can be specified in the prompt or via dedicated parameters
- Character consistency: better than any competitor at maintaining character appearance throughout a scene
- Synchronized audio: generation of basic sound effects synchronized with the visual action (footsteps, ambience, impacts)
Veo 3.1 is available via API on Google Cloud and integrated with YouTube Create, YouTube's streamlined editing tool. The integration with YouTube is strategic: it positions Google as the natural AI video provider for the platform's 2 billion monthly users.
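For developers, the camera directives above travel as plain prompt text through the API. A minimal sketch of what a call through the Google Gen AI Python SDK could look like follows; the model ID string `"veo-3.1"` and the prompt phrasing are assumptions based on this article, so check Google Cloud's current documentation for the identifiers the API actually exposes.

```python
# Sketch: generating a clip with Veo through the Google Gen AI Python SDK.
# The model ID "veo-3.1" is an assumption -- consult Google's docs for the
# identifier currently exposed by the API.
import time

def camera_prompt(subject: str, move: str, speed: str = "slow") -> str:
    """Compose a prompt with an explicit camera directive (pan, tilt, zoom...)."""
    moves = {"pan", "tilt", "zoom", "dolly", "tracking shot"}
    if move not in moves:
        raise ValueError(f"unsupported camera move: {move}")
    return f"{subject}, {speed} {move}, cinematic lighting, 1080p"

def main() -> None:
    from google import genai  # pip install google-genai

    client = genai.Client()  # reads GOOGLE_API_KEY from the environment
    operation = client.models.generate_videos(
        model="veo-3.1",  # hypothetical ID, see note above
        prompt=camera_prompt("a lighthouse at dusk", "dolly"),
    )
    # Video generation is asynchronous: poll the long-running operation.
    while not operation.done:
        time.sleep(10)
        operation = client.operations.get(operation)
    video = operation.response.generated_videos[0]
    client.files.download(file=video.video)
    video.video.save("lighthouse.mp4")

# main()  # requires GOOGLE_API_KEY, network access, and billing enabled
```

The polling loop reflects the SDK's long-running-operation pattern; video jobs can take minutes, so production code would add a timeout.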
5. Flux.1: the open source model that dominated 2026
If there is an unexpected success story in generative AI in 2026, it is that of Flux.1. Developed by Black Forest Labs (founded by former Stability AI researchers), Flux.1 is an open source image-generation model with 12 billion parameters that quickly became the most popular of the year.
Why Flux.1 exploded
- True open source: model weights, training code, and inference code are completely open. Anyone can run it locally without paying for an API
- Quality comparable to proprietary models: in blind tests, users often cannot distinguish Flux.1 images from Midjourney v7 or DALL-E 3 images
- Massive LoRA community: the community has created thousands of adaptations (LoRAs) for specific styles -- product photography, editorial illustration, character design, architecture, fashion. You can find a LoRA for practically any niche
- Runs on affordable hardware: with quantization, Flux.1 runs on GPUs with 8GB of VRAM, so any recent gaming laptop or desktop can generate images locally
- Total privacy: because it runs locally, no data leaves your machine. For companies with privacy requirements, this is decisive
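As a concrete illustration of the "affordable hardware" point, here is a minimal sketch of running Flux.1 locally with Hugging Face's diffusers library. The repository name matches Black Forest Labs' public release, but the VRAM thresholds in the helper are rough heuristics of mine, not official figures.

```python
# Sketch: running Flux.1 locally with Hugging Face diffusers.
# VRAM thresholds below are heuristics, not official requirements.

def offload_strategy(vram_gb: float) -> str:
    """Pick a memory-saving strategy for the available VRAM (heuristic)."""
    if vram_gb >= 24:
        return "none"        # the bf16 model fits entirely on the GPU
    if vram_gb >= 12:
        return "model"       # park idle sub-models (text encoders, VAE) on CPU
    return "sequential"      # stream layers one at a time; fits in roughly 8 GB

def main() -> None:
    import torch
    from diffusers import FluxPipeline  # pip install diffusers transformers accelerate

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    vram = torch.cuda.get_device_properties(0).total_memory / 2**30
    strategy = offload_strategy(vram)
    if strategy == "sequential":
        pipe.enable_sequential_cpu_offload()
    elif strategy == "model":
        pipe.enable_model_cpu_offload()
    else:
        pipe.to("cuda")

    image = pipe(
        "editorial photo of a ceramic mug on a linen tablecloth, window light",
        num_inference_steps=4,   # the schnell variant is distilled for few steps
        guidance_scale=0.0,
    ).images[0]
    image.save("mug.png")

# main()  # requires a CUDA GPU and downloading the model weights
```

Sequential offload is what makes the 8GB figure plausible: it trades generation speed for memory by keeping only the active layers on the GPU.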
The impact on the market
Flux.1 did for image models what Llama did for language models: it democratized access. Designers and creatives who previously relied on expensive subscriptions can now generate professional-quality images at no recurring cost. This puts pressure on companies like Midjourney and Adobe to justify their prices with differentiating features that open source does not offer.
6. Comparison: AI video and image tools in 2026
Video generation
| Tool | Resolution | Max duration | Relative cost | Best for |
|---|---|---|---|---|
| Midjourney Video V1 | 720p | 10s | Low | Aesthetics, social content |
| Google Veo 3.1 | 1080p | 30s | Medium | Production, camera control |
| Runway Gen-4 | 1080p | 16s | High | Professional editing, VFX |
| Kling 2.0 | 1080p | 20s | Medium | Realistic movement, lip sync |
| Mochi 2 (open source) | 720p | 8s | Free | Experimentation, privacy |
Image generation
| Tool | Type | Quality | Cost | Differentiator |
|---|---|---|---|---|
| Midjourney v7 | Proprietary | Excellent | US$10-60/month | Superior aesthetics |
| Flux.1 | Open source | Very good | Free (local) | LoRAs, privacy |
| DALL-E 3 | Proprietary | Good | Per credit | ChatGPT integration |
| MAI-Image-2 | Proprietary | Very good | Azure API | Text on images, Office |
| Ideogram 3 | Proprietary | Very good | US$7-20/month | Perfect typography |
7. What changes for content creators and marketers
The closure of Sora and the emergence of more accessible alternatives change the practical scenario for those working with content and marketing in concrete ways.
Short video for social networks is now viable
With Midjourney Video V1 costing a fraction of Sora, creating short AI videos for reels, stories and TikTok has become economically viable. This doesn't mean that all content will be generated by AI -- it means that visual concepts, transitions and b-roll can be produced in minutes instead of hours.
Product images without photographer
Flux.1 with LoRAs specialized in product photography is already being used by e-commerce companies to generate product image variants in different scenarios. A product can be photographed once and then "placed" in dozens of different environments via AI, without a new shoot.
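The "photograph once, place anywhere" workflow above maps naturally onto diffusers' LoRA loading API. A sketch follows; the LoRA repository name is a hypothetical placeholder (browse the community hubs for real adapters), and the prompts are illustrative only.

```python
# Sketch: swapping a product-photography LoRA onto a Flux.1 pipeline to
# render one product in several environments. The LoRA repo name below is
# a hypothetical placeholder, not a real published adapter.

SCENES = [
    "on a marble kitchen counter, morning light",
    "on a wooden desk next to a laptop, warm lamp light",
    "on a mossy rock by a stream, overcast daylight",
]

def scene_prompts(product: str, scenes: list[str]) -> list[str]:
    """Build one prompt per target environment for the same product."""
    return [f"product photo of {product}, {scene}" for scene in scenes]

def main() -> None:
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()
    # Layer a style adapter over the base model (placeholder repo name).
    pipe.load_lora_weights("some-community/flux-product-photo-lora")

    for i, prompt in enumerate(scene_prompts("a stainless steel water bottle", SCENES)):
        image = pipe(prompt, num_inference_steps=28).images[0]
        image.save(f"bottle_{i}.png")

# main()  # requires a GPU and downloading the base model plus LoRA weights
```

True product consistency across scenes usually also needs the product itself trained as a LoRA or supplied via an image-conditioning technique; prompt text alone only approximates it.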
The barrier to entry has fallen drastically
In 2024, using AI to generate quality visual media required expensive subscriptions and technical expertise. By 2026, anyone with a decent computer can generate professional images for free (Flux.1) and short videos for less than $10/month (Midjourney). This levels the playing field and forces professionals to compete on creativity and strategy, not access to tools.
The workflow has changed: AI as a starting point, not the final product
The most efficient way to use multimodal AI tools in 2026 is not as substitutes for human production, but as accelerators of ideation. Generate 20 visual concepts in 5 minutes, choose the best, refine manually. This workflow is 5-10x faster than starting from scratch and produces results that are uniquely yours.
8. The future of AI video generation
The closure of Sora is not the end of AI video generation -- it is the end of the hype phase and the beginning of the real product phase. Here's what to expect for the rest of 2026:
Longer videos with narrative
Current models generate clips of 10-30 seconds. The next step is generating 1-3 minute scenes with a coherent narrative -- consistent characters, narrative arc, and logical transitions between shots. Runway and Google are closer to this goal.
Integration with existing editing tools
Rather than replacing Adobe Premiere or DaVinci Resolve, AI video tools are integrating as plugins. Generate b-roll within your video editor, without leaving the workflow. Adobe has already integrated AI video models into Premiere Pro via Firefly.
Open source catching up to proprietary models
Flux.1 showed that open source can compete in imaging. Mochi 2 is doing the same for video. By the end of 2026, expect open source video models that rival Midjourney V1 in quality, running locally on consumer GPUs.
Regulation approaching
The European Union is finalizing specific regulations for AI-generated media, including watermark requirements, disclosure rules, and limitations on use in advertising. The US remains behind, but bipartisan proposals are in progress. Professionals who work with generative media must monitor these changes closely.
9. Sources and references
- Midjourney Drops First Video Model -- TechRadar. Report on the launch of Midjourney Video V1 and comparison with Sora and Veo.
- Sora Video Generator Shutdown -- GlobalGPT. Detailed analysis of the technical and financial reasons that led OpenAI to close Sora.
- Best AI Image and Video Generators 2026 -- Sects. Complete ranking and comparison of AI image and video generation tools in 2026.
FAQ
Why did OpenAI shut down Sora?
OpenAI shut down Sora on March 24, 2026 due to operational costs and strategic focus. The model consumed enormous computational resources to generate quality videos, and the US$1 billion partnership with Disney did not generate the expected commercial results. OpenAI chose to redirect resources to its language models (GPT-5.4) and autonomous agents.
Is Midjourney Video V1 a replacement for Sora?
Midjourney Video V1 is not a direct replacement for Sora, but it occupies the space Sora left behind. It generates short videos (up to 10 seconds) with aesthetic quality superior to Sora's in many scenarios, and costs 25x less per second of generated video. It is available in beta on the Midjourney web interface for Pro plan subscribers.
What is Flux.1?
Flux.1 is an open source image-generation model with 12 billion parameters, developed by Black Forest Labs. It became the most popular image model of 2026 because it can be run locally with no API cost, generates high-quality images comparable to proprietary models, and allows fine-tuning for specific styles. The open source community has created thousands of adaptations (LoRAs) for specific niches.
Which AI video generator is best in 2026?
It depends on the use case. For aesthetic quality and short videos, Midjourney Video V1 leads. For high resolution (1080p) and longer videos, Google Veo 3.1 is the best option. For video production at scale with granular control, Runway Gen-4 is still the benchmark. And for those who need an open source solution, Mochi 2 is the most viable option.