[Bf-committers] Please turn off Auto Run Python Scripts by default

Joshua Leung aligorith at gmail.com
Tue Jun 11 15:16:04 CEST 2013


It's perhaps somewhat irrelevant at this point in time (since we've
now already got a solution in progress), but for posterity, I thought
I'd just mention some thoughts I had about this situation yesterday.

As Miika says, it's clear that trying to do runtime protection (i.e.
sandboxing) is impossible, and that the only solution is to inform
users about the scripts a file contains, and to prevent automatic
execution of these (by default) until users have been able to evaluate
the risks involved. In other words, exactly what we're now
implementing.

However, where I think we could do a bit better than what we're
proposing is to take an extra step and provide users (integrated into
the warning+confirmation mechanism shown before scripts are executed)
with an assessment of how risky the scripts in the file would be to
enable.

Now, perhaps I'm taking an overly optimistic view of the situation,
but suppose we somewhat tighten the bounds on the types of constructs
we consider "likely to be legitimate/harmless code", while loosening
our expectation of 100% foolproof good/bad classification of a given
piece of code. For example, we know in general that obfuscation
techniques and code which goes munging around with underscore-named
methods/members are suspect, so we can reliably flag such a snippet as
suspicious, even though we can't pass final judgement on whether it's
really evil or not. We can then use such judgements as a basis for
estimating the potential danger posed to users.

For more details, see:
http://aligorith.blogspot.co.nz/2013/06/probabilistic-security-another-take-on.html
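To make the idea concrete, a heuristic of this kind could be sketched
in Python using the standard `ast` module. This is purely an
illustrative sketch (not part of the proposal, and not anything from
Blender's code); the list of suspicious names and the scoring weight
are made-up assumptions:

```python
import ast

# Hypothetical risk heuristic; the name list and weighting below are
# illustrative assumptions, not an actual Blender API or policy.
SUSPICIOUS_NAMES = {"eval", "exec", "__import__", "compile", "getattr"}

def risk_score(source):
    """Return a rough 0.0-1.0 estimate of how suspicious a script looks."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return 1.0  # code we can't even parse can't be vetted
    hits = 0
    nodes = 0
    for node in ast.walk(tree):
        nodes += 1
        # direct references to eval/exec/__import__ and friends
        if isinstance(node, ast.Name) and node.id in SUSPICIOUS_NAMES:
            hits += 1
        # munging around with dunder attributes (e.g. obj.__dict__)
        elif isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            hits += 1
    return min(1.0, hits / max(nodes, 1) * 10)
```

Such a score would never prove a script malicious, but it could feed
straight into the proposed warning dialog as a "this file's scripts
look unusual" indicator.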

Regards,
Joshua

On Sun, Jun 9, 2013 at 11:04 PM, Miika Hämäläinen <blender at miikah.org> wrote:
> IMHO the situation is clear: It's practically impossible to detect
> malicious scripts or make script execution fully safe. Implementing a
> safe scripting environment could be a nice long term target, but the
> main issue has to be addressed sooner. Therefore, the only solution is to
> inform users about a script before it's executed and prevent automatic
> execution by default.
>
> If the user trusts the source or expects the file to contain scripts,
> then he is free to allow those scripts to run after the notification.
> If not, then it's his judgement call to not trust the script (even if
> it came from an official training DVD). There will be users who just
> click "allow" on any dialog we show, but at that point it's really
> their issue, not ours.
>
> However, advanced users who care about their security should still be
> able to *open* (not necessarily execute) .blend files without having to
> scan them with some external script scanner, and without having to
> ensure they remembered to disable auto-execution the last time they
> reverted to factory settings. :)
>
> As already discussed in this thread, ideally the dialog would both
> indicate that the script can be dangerous but also note that the blend
> file likely won't function as intended without scripts. Naturally
> advanced users should be able to allow automatic execution of any
> scripts as well if they decide so.
>
>
> Anyway, something has to be done. I mean seriously, how many Blender
> users even know there is such a setting? I didn't until last week. I
> always assumed it was safe to open .blend files as long as I didn't
> execute their scripts manually... Right now anyone can create a
> malicious blend, upload it online combined with a nice scene, and it
> executes on 99% of users' systems without them ever knowing that their
> personal documents were just uploaded online.
>
>
> 9.6.2013 12:54, Campbell Barton wrote:
>> On Sun, Jun 9, 2013 at 5:37 PM, Knapp<magick.crow at gmail.com>  wrote:
>>>>> Sure, not everyone's a programmer that can inspect the scripts, but at some point I feel the responsibility lies with the users and with the sites offering the .blends to inform them of potential dangers, and not so much with the BF trying to create a super-safe environment. Super-safe in this case translating to crippled or unusable for some.
>>>> I think you underestimate how easy it is to hide code in a blend
>>>> file, at the risk of giving people bad ideas...
>>> It's like a lock on the door of a house with windows all over:
>>> adding some code to Blender to stop viruses will never be perfect,
>>> but it will stop the neighborhood kid from breaking in. Just knowing
>>> that Blender has some protection will slow down most virus attacks,
>>> or stop them from ever even being tried. We will never be able to
>>> stop governments or other pros from attacking the program, but that
>>> does not mean we should not lock the door.
>> It seems like you're replying to a different point?
>>
>> I'm only saying that it's not so fair to expect users to be able to
>> open a blend and check for malicious Python scripts.
>>
>> We could have some tool that extracts scripts from a blend (so at
>> least hiding isn't so much an issue), but this isn't really a
>> solution, just a tool that helps in certain situations.
>> _______________________________________________
>> Bf-committers mailing list
>> Bf-committers at blender.org
>> http://lists.blender.org/mailman/listinfo/bf-committers
>
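The script-extraction tool Campbell mentions could, in principle, start
by walking the .blend file's block headers and checking for embedded
Text datablocks. A rough, hypothetical sketch follows; the header
offsets are assumptions based on the publicly documented .blend layout
(they may not hold for every version), and gzip-compressed blends are
not handled:

```python
import struct

def blend_has_text_blocks(path):
    """Report whether a .blend file appears to contain Text datablocks.

    Assumed layout (per the published .blend format description, not
    verified against Blender's sources): a 12-byte file header
    "BLENDER" + pointer-size char + endianness char + version, then a
    sequence of blocks, each headed by a 4-byte code, an int32 size,
    an old memory pointer, an int32 SDNA index and an int32 count.
    """
    with open(path, "rb") as f:
        header = f.read(12)
        if not header.startswith(b"BLENDER"):
            raise ValueError("not an uncompressed .blend file")
        psize = 8 if header[7:8] == b"-" else 4       # pointer size
        endian = "<" if header[8:9] == b"v" else ">"  # byte order
        codes = []
        while True:
            head = f.read(16 + psize)  # code, size, old ptr, sdna, count
            if len(head) < 16 + psize:
                break
            code = head[:4]
            (size,) = struct.unpack(endian + "I", head[4:8])
            codes.append(code)
            if code == b"ENDB":  # end-of-file marker block
                break
            f.seek(size, 1)  # skip the block body
        # ID code "TX" is assumed to mark Text datablocks
        return any(c.startswith(b"TX") for c in codes)
```

Even this shallow scan only answers "does the file embed text at all";
deciding whether that text is a dangerous script is exactly the harder
problem discussed above.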

