[Bf-committers] Python sandbox

jonathan d p ferguson jdpf.plus at gmail.com
Thu Mar 18 04:29:42 CET 2010


First off, Leif, thanks for volunteering to study this set of ideas!  
And thanks to all for the excellent comments and opinions expressed so  
far.
I would like to put forward what I've been calling my axiom of usability  
in computer security for the last ten years or so:

"Computer security is inversely related to the usability of a system.  
The more secure, the less usable. The less secure, the more likely it  
is to be usable."

Good computer security, these days, has become a usability problem,  
because computer security is *complicated* and users don't like  
*complicated.* Yet as the malware industry (estimated at over $1  
billion) continues to grow, exploits will only become more  
sophisticated. End users must be educated, and that's a big  
reason why I'm suggesting a Web of Trust model. That model can be  
limited to any population we wish.

I agree that it should Just Work. Seeing the success of the Web of  
Trust in the Debian and Ubuntu operating systems, it is quite evident  
to me that asking for a clear chain of cryptographically verifiable  
identity through a Web of Trust model is not only doable but, when  
carefully implemented, will protect end users from malicious code in  
any language, leaving the blender coders free to make system changes  
as they see fit. Blender is becoming a *distribution,* and is rapidly  
exiting the realm of an "application," as Leif points out.
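The chain-of-identity idea can be sketched as a toy model (this is an illustration of the concept, not GnuPG's actual trust computation, and all the key names are hypothetical):

```python
from collections import deque

# Toy Web of Trust: keys are nodes, and a signature on someone's key
# is a directed "trusts" edge. A script author is trusted if a chain
# of signatures of bounded length connects my key to the author's key.
def is_trusted(trust_edges, my_key, author_key, max_depth=5):
    """Breadth-first search for a signature chain up to max_depth long."""
    seen = {my_key}
    queue = deque([(my_key, 0)])
    while queue:
        key, depth = queue.popleft()
        if key == author_key:
            return True
        if depth == max_depth:
            continue
        for signed in trust_edges.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, depth + 1))
    return False

# Hypothetical community: I signed alice's key, alice signed bob's.
edges = {"me": ["alice"], "alice": ["bob"]}
print(is_trusted(edges, "me", "bob"))      # True: me -> alice -> bob
print(is_trusted(edges, "me", "mallory"))  # False: no signature chain
```

The `max_depth` bound is what lets "most individuals only need to trust a few sources": trust is transitive, but only out to a short, configurable distance.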

Blender is GPL, and that is a very good thing for security, as it  
allows for peer review. AFAIK, Python scripts written to run in  
blender will also be required to be GPL. Concerned users can read over  
the code prior to execution. The injection of a "do you trust X?"  
dialog only solves half of the problem, however. The other half is  
that of verifying the identity of X. Creating a central distribution  
system is also a fine thing, but it does not fully address the problem  
of trust, and it requires time on the part of those who are charged  
with vetting what is allowed into that plugin repository.
I am fully in support of a centralized or decentralized web of plugin  
repositories. It follows, to me, that each extension, plugin, or file  
should nonetheless be independently verifiable. That verification should  
*become* automatic for 99% of cases in the Web of Trust model,  
reducing the need for intervention on the part of the user.  
Firefox doesn't actually offer any trust metric at all; end users just  
"use the plugins." Further, Firefox is *fundamentally* different in  
form and function from blender: it is not a content creation tool but  
a content browser. While one *can* get a plugin to create content,  
that is not, and was not, its primary purpose. That's an important  
distinction. Blender does not have the same luxury of limiting script  
capability.
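Per-file verification can be sketched with a simplified stand-in (real verification would check GnuPG signatures from the Web of Trust, not bare digests; the script and manifest here are made up):

```python
import hashlib

# Simplified model: every distributed script has an entry in a
# manifest published by a trusted source. Before running a script,
# blender would recompute its digest and compare it to the manifest,
# refusing to run anything that does not match.
def digest(script_bytes):
    return hashlib.sha256(script_bytes).hexdigest()

def verify(script_bytes, manifest, name):
    expected = manifest.get(name)
    return expected is not None and expected == digest(script_bytes)

script = b"import bpy\nprint('hello')\n"
manifest = {"hello.py": digest(script)}
print(verify(script, manifest, "hello.py"))       # True: untampered
print(verify(b"tampered", manifest, "hello.py"))  # False: digest differs
```

This is the step that can become automatic for the common case: the user is only interrupted when verification fails or the signer is outside their trust chain.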

The goal is to prevent someone from executing a malicious bit of code.
This is a very very challenging problem. Computer security experts  
have spent decades and billions of (government) dollars trying to  
arrive at perfect solutions to this problem. As far as I know, no such  
*perfect* solutions exist--- but many solutions do exist, and the  
appropriateness of the solution depends on the use case. Sandboxing (a  
form of mandatory access control) and a GnuPG Web of Trust (whether  
centralized or decentralized), are potentially good solutions--- but  
we need to evaluate the merits of each for the blender community. I've  
"been there, done that" with the sandbox in currently shipping  
commercial software, and don't actually believe that it is the best  
solution for the blender community, especially given how we in that  
community work and share files.

I believe that blender's community is growing, and that it is a  
cutting-edge community. Many members of the blender community already  
know and trust each other, so making a Web of Trust will only serve to  
strengthen the blender community. Finally, I point out that a Web of  
Trust is not inherently restricted to closed communities at all. Most  
individuals will only need to trust a few sources, the most trusted  
being the centralized source.

Blender is itself a very cutting-edge system, and the tight  
integration of python in blender is part of what makes it so.  
Bringing blender towards a socially verifiable web of trust  
will go farther than any other "application" that I know of to  
validate "plugins," "extensions," or .blend files.

On Mar 17, 2010, at 9:45 PM, Brecht Van Lommel wrote:

> Hi,
> On Thu, Mar 18, 2010 at 1:51 AM, Benjamin Tolputt
> <btolputt at internode.on.net> wrote:
>>> To sum up my opinion, sandboxing is very hard and not something we  
>>> can
>>> solve once, it requires continuous attention, so let's not even try
>>> it. Instead, the install addon operator should warn about security
>>> problems, and loading a .blend file with scripts should become  
>>> easier
>>> for users.

I fully agree. Avoiding an arms race is wise in any case.

While the principle of sandboxing is easy, doing it *well* can be very  
hard, regardless of the language used, because of the depth of  
integration blender requires. I continue to be very excited about  
blender's generated python API. It will enable many rapid improvements  
in usability for end users. The blender developers have worked hard to  
create a graduated use-curve for making mere users into python script  
writers. This is a wonderful effort, and I applaud the blender project  
for doing so.

I argue that the more sophisticated the system, the harder it is to  
create some kind of Mandatory Access Control layer, of which a sandbox  
is only the beginning. Jails, chroots, application firewalls, trusted  
operating systems (of which TrustedBSD and SELinux are the only  
remaining examples still under active development): all of these  
solutions lead rapidly down a path of security policy making.

In my experience, such solutions end up shifting the problem to a  
"security expert" who then spends time making sophisticated policies  
to protect a sophisticated API. I remind you that blender developers  
*deliberately* fully exposed the API for a wide variety of very good  
reasons, and a language shift will not, in my opinion, really solve  
much of anything, as the need for the API will continue to exist.
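The policy burden is easy to see in miniature (a toy sketch; the operator names and the decorator are invented for illustration, not blender's actual API):

```python
# Toy Mandatory Access Control layer: every exposed operation needs
# its own policy entry, and the policy table must grow in lockstep
# with the API -- which is exactly the "security expert" treadmill.
POLICY = {
    "render.animation": False,  # touches the filesystem: denied
    "mesh.subdivide": True,     # pure in-memory edit: allowed
}

class PolicyViolation(Exception):
    pass

def guarded(op_name):
    """Wrap an operation so it runs only if the policy allows it."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if not POLICY.get(op_name, False):  # default deny
                raise PolicyViolation(op_name)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@guarded("mesh.subdivide")
def subdivide():
    return "subdivided"

@guarded("render.animation")
def render_animation():
    return "rendered"

print(subdivide())  # allowed by policy
try:
    render_animation()
except PolicyViolation as exc:
    print("blocked:", exc)
```

Note that an operator absent from the table is denied by default, so every single new API call forces a fresh policy decision: that is the maintenance cost the paragraph above describes.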

>> Actually, sand-boxing scripts is quite easy provided you use a  
>> platform
>> that supports it.

I agree with Brecht. It might be easy for the first few API calls, but  
as those calls (instructions) become more sophisticated, the level of  
attention, detail, and work required to support that "sandbox" becomes  
ever larger. At the risk of sounding hyperbolic, I do not wish blender  
to enter the malware arms race. Blender will lose. Limiting  
functionality of such an amazing tool is also quite unacceptable to me.

>> Python currently does not support this so the task
>> seems insurmountable; but only so long as you look through the Python
>> lens. It is quite simple to sandbox applications using Lua - simply
>> don't give them access to the unsecure functionality. If you don't  
>> want
>> the user reading/writing files not explicitly handed to them? Then  
>> don't
>> give them access to the "io" library (by excluding it from the VM's
>> initialisation). Don't want to give the person access to the  
>> operating
>> system, don't add the "os" module to those accessible by the VM.  
>> And so on.

> I don't think making the Blender API safe is easy. Some examples of
> things that could be exploited:
> * .blend/image/sound save operators
> * render animation operator
> * physics systems point caches writing to disk
> * open file browser + run file delete operator
> * binding keymap or buttons to an operator
> * compositing file output node
> * register operator to override .obj exporter
> And then there are of course the more complicated ones, memory
> corruption, buffer overruns, etc. now become security issues. I bet you
> can find dozens of indirect and non-obvious ways to do damage. Now, you
> can try to figure out all those cases and add checks for them, but
> doing this well is really hard. Security issues in web browsers don't
> happen because they forgot to disable access to the IO module. And as
> new features are added to Blender this does require continuous
> attention.

Again, I fully agree with Brecht. Let's please avoid an arms race  
here. I believe it would ultimately stall blender development;  
perhaps not for many years to come, but I have no idea when it would  
become an unmanageably difficult issue.

Finally, I don't hear much about malware targeting Maya. As far as I  
know, Autodesk does nothing to protect users from malware delivered  
via python. Blender's current default of disabling scripts in a .blend  
file goes a good distance towards protecting end users from malicious  
scripts. But it, too, is only part of the larger solution.

Thanks for the lively conversation!

have a day.yad
