We’ve tested this in our production environment on mobile robots (think quadcopters and UGVs) and it works really nicely.
If this is military related, I'm terrified of the future. The sci-fi movies with crazy drones from years back no longer seem so cute.
7 years ago, this felt like science fiction:
https://www.youtube.com/watch?v=HipTO_7mUOw
Now that we've seen the use of drones in the Ukraine war, 10k+ drone light shows, Waymo's autonomous cars, and tons of AI advancements in signals processing and planning, this seems obvious.
This is important.
I don't want to live on this planet anymore.
We have nuclear weapons.
We've already achieved the potential for complete destruction.
Drones don't change much. It's potentially better for us civilians if drones are used for much more targeted attacks (think Putin).
That should lead to narrower policies, which might be less aggressive.
> potentially better for us civilians if drones are used for much more targeted attacks (think Putin)
Putin is well protected, way better than US presidents and candidates. With lower prices and barriers, the target could actually be you, or any other low-profile person. Luckily, real terrorists are mostly uneducated.
The truly scary part is that it’s a straightforward evolution from this to 1000 fps hyperspectral sensors.
There will be no hiding from these things and no possibility of evasion.
They’ll have agility exceeding champion drone pilots and be too small to even see or hear until it’s far too late.
Life in the Donbass trenches is already hell. We’ll find a way to make it worse.
Then it should be possible to use them to counter and defend as well. Think of AI-powered interceptor drones patrolling the area, or anti-drone light machine guns.
As long as you keep paying your Gemini anti-drone bill and don't set account limits, you'll be fine! </s>
Is this OSS?
Unclear exactly what you're asking. The linked paper describes an algorithm (patent status unclear). That paper links to a GPL-licensed implementation whose authors explicitly solicit business licensing inquiries. The related model weights are available on Hugging Face (license unclear). Notably, the HF README contains conflicting claims: the metadata block specifies Apache while the body specifies GPL.
https://github.com/AILab-CVC/YOLO-World
https://huggingface.co/spaces/stevengrove/YOLO-World/tree/ma...
The paper says it is based on YOLOv8, which uses the even stricter AGPL-3.0. That means you can use it commercially, but all derived code (even in a cloud service) must be made open source as well.
They probably mean the algorithm, but the YOLO models are nevertheless relatively simple, so if you know what you're doing it's pretty easy to reimplement them from scratch and avoid the AGPL license for the code. I did so once for the YOLOv11 model myself, so I assume any researcher worth their salt could do the same if they wanted to commercialize a similar architecture.
You don't just need to reimplement the architecture (which is trivial even for non-researcher-level devs); you also need to re-train the weights from scratch. According to the legal team behind YOLO, weights (including modifications via fine-tuning) fall under the AGPL as well, and you need to contact their sales team for a custom license if you want to deviate from the AGPL.
At least for the Ultralytics YOLO models this is also relatively easy (I've done it too). These models are tiny by today's standards, so training them from scratch in a reasonable time is doable even on consumer hardware. The only tricky part is writing the training code, which is a little more complicated than reimplementing the architecture itself, but, again, if a random scrub like me can do it, then any researcher worth their salt will be able to do it too.
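To give a rough sense of scale: a toy, from-scratch training setup fits in a few dozen lines of PyTorch. This is a hypothetical sketch with a made-up tiny model and a placeholder loss, not the Ultralytics code or the real YOLO loss; the actual effort goes into the data pipeline, target assignment, and the detection loss.

    # Toy sketch: training a small single-scale detector from scratch.
    # Hypothetical architecture and placeholder loss; not the real YOLO loss.
    import torch
    import torch.nn as nn

    class TinyDetector(nn.Module):
        def __init__(self, num_classes: int = 80):
            super().__init__()
            # Minimal conv backbone; real YOLO backbones are deeper and multi-scale.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            )
            # Per-cell predictions: 4 box coords + objectness + class scores.
            self.head = nn.Conv2d(64, 4 + 1 + num_classes, 1)

        def forward(self, x):
            return self.head(self.backbone(x))  # (N, 85, H/8, W/8)

    model = TinyDetector()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

    # Dummy batch standing in for a real dataset (e.g. COCO) plus real
    # target assignment; the MSE below is a placeholder loss only.
    images = torch.rand(2, 3, 256, 256)
    targets = torch.rand(2, 85, 32, 32)

    for step in range(3):
        preds = model(images)
        loss = nn.functional.mse_loss(preds, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss={loss.item():.4f}")

Swap in a real dataloader, assigner, and loss and you're most of the way to a usable (if basic) training pipeline.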
You don't just need the training algorithm, but also the training data. Which in turn might have additional license requirements.
AFAIK their pretrained models just use publicly available datasets. From their README:
> YOLO11 Detect, Segment and Pose models pretrained on the COCO dataset are available here, as well as YOLO11 Classify models pretrained on the ImageNet dataset.
I assume they refer to the academic basis for the algorithm rather than the implementation itself.
Slightly unrelated, how does AGPL work when applied to model weights? It seems plausible that a service could be structured to have pluggable models on the backend. Would that be sufficient to avoid triggering it?
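For what it's worth, by "pluggable" I mean something like the following (a hypothetical interface just to illustrate the structure; all the names are made up):

    # Hypothetical sketch of a service with swappable detector backends: the
    # service only depends on the interface, and the concrete model is
    # chosen by configuration.
    import os
    from typing import List, Protocol, Tuple

    Box = Tuple[float, float, float, float]

    class Detector(Protocol):
        def detect(self, image_bytes: bytes) -> List[Box]: ...

    class YoloBackend:
        """Would wrap AGPL-licensed weights/code (stubbed out here)."""
        def detect(self, image_bytes: bytes) -> List[Box]:
            return [(0.0, 0.0, 1.0, 1.0)]

    class InHouseBackend:
        """Would wrap a from-scratch, permissively licensed model (stubbed)."""
        def detect(self, image_bytes: bytes) -> List[Box]:
            return []

    def load_detector() -> Detector:
        # Swap the concrete model without touching the rest of the service.
        if os.getenv("DETECTOR", "yolo") == "yolo":
            return YoloBackend()
        return InHouseBackend()

    if __name__ == "__main__":
        print(load_detector().detect(b"fake image bytes"))

Whether that kind of separation actually changes anything legally is exactly the part I'm unsure about.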
Does the GPL still mean anything if you can ask an AI to read code A and reimplement it as code B?
The standard for humans is a clean-room reimplementation, so I guess you'd need two AIs: one to translate A into a list of requirements and one to translate that list back into code.
But honestly, by the time AI is writing large quantities of code reliably and without human intervention, it's unclear how much significance human labor in general will have. Software licensing is the least of our concerns.
If that's legal, then copyright is meaningless, which was the original intention of the GPL.
So uncopyrightable AI-generated code is actually a good thing from the open source community's standpoint?
Presumably it depends on the impact. It's an ideology that seeks user freedom. If you need access to the source code to use as a template, that clearly favors proprietary offerings. But if you can easily clone proprietary programs, that would favor the end user.
How would this kind of mechanical translation fail to be a violation of copyright?